A strategy for low cost development of incremental oil in legacy reservoirs
Attanasi, E.D.
2016-01-01
The precipitous decline in oil prices during 2015 has forced operators to search for ways to develop low-cost and low-risk oil reserves. This study examines strategies to low cost development of legacy reservoirs, particularly those which have already implemented a carbon dioxide enhanced oil recovery (CO2 EOR) program. Initially the study examines the occurrence and nature of the distribution of the oil resources that are targets for miscible and near-miscible CO2 EOR programs. The analysis then examines determinants of technical recovery through the analysis of representative clastic and carbonate reservoirs. The economic analysis focusses on delineating the dominant components of investment and operational costs. The concluding sections describe options to maximize the value of assets that the operator of such a legacy reservoir may have that include incremental expansion within the same producing zone and to producing zones that are laterally or stratigraphically near main producing zones. The analysis identified the CO2 recycle plant as the dominant investment cost item and purchased CO2 and liquids management as a dominant operational cost items. Strategies to utilize recycle plants for processing CO2 from multiple producing zones and multiple reservoir units can significantly reduce costs. Industrial sources for CO2 should be investigated as a possibly less costly way of meeting EOR requirements. Implementation of tapered water alternating gas injection schemes can partially mitigate increases in fluid lifting costs.
Dilger, Mathias Georg; Jovanović, Tanja; Voigt, Kai-Ingo
2017-08-01
Practice and theory have proven the relevance of energy co-operatives for civic participation in the energy turnaround. However, due to a still low awareness and changing regulation, there seems an unexploited potential of utilizing the legal form 'co-operative' in this context. The aim of this study is therefore to investigate the crowdfunding implementation in the business model of energy co-operatives in order to cope with the mentioned challenges. Based on a theoretical framework, we derive a Business Model Innovation (BMI) through crowdfunding including synergies and differences. A qualitative study design, particularly a multiple-case study of energy co-operatives, was chosen to prove the BMI and to reveal barriers. The results show that although most co-operatives are not familiar with crowdfunding, there is strong potential in opening up predominantly local structures to a broader group of members. Building on this, equity-based crowdfunding is revealed to be suitable for energy co-operatives as BMI and to accompany other challenges in the same way. Copyright © 2017 Elsevier Ltd. All rights reserved.
Implementing Distributed Operations: A Comparison of Two Deep Space Missions
NASA Technical Reports Server (NTRS)
Mishkin, Andrew; Larsen, Barbara
2006-01-01
Two very different deep space exploration missions--Mars Exploration Rover and Cassini--have made use of distributed operations for their science teams. In the case of MER, the distributed operations capability was implemented only after the prime mission was completed, as the rovers continued to operate well in excess of their expected mission lifetimes; Cassini, designed for a mission of more than ten years, had planned for distributed operations from its inception. The rapid command turnaround timeline of MER, as well as many of the operations features implemented to support it, have proven to be conducive to distributed operations. These features include: a single science team leader during the tactical operations timeline, highly integrated science and engineering teams, processes and file structures designed to permit multiple team members to work in parallel to deliver sequencing products, web-based spacecraft status and planning reports for team-wide access, and near-elimination of paper products from the operations process. Additionally, MER has benefited from the initial co-location of its entire operations team, and from having a single Principal Investigator, while Cassini operations have had to reconcile multiple science teams distributed from before launch. Cassini has faced greater challenges in implementing effective distributed operations. Because extensive early planning is required to capture science opportunities on its tour and because sequence development takes significantly longer than sequence execution, multiple teams are contributing to multiple sequences concurrently. The complexity of integrating inputs from multiple teams is exacerbated by spacecraft operability issues and resource contention among the teams, each of which has their own Principal Investigator. Finally, much of the technology that MER has exploited to facilitate distributed operations was not available when the Cassini ground system was designed, although later adoption of web-based and telecommunication tools has been critical to the success of Cassini operations.
Walsh, Kenneth; Bothe, Janine; Edgar, Denise; Beaven, Geraldine; Burgess, Bernadette; Dickson, Vhari; Dunn, Stephen; Horning, Lynda; Jensen, Janice; Kandl, Bronia; Nonu, Miriam; Owen, Fran; Moss, Cheryle
2015-01-01
The impetus for this research came from a group of 11 Clinical Nurse Consultants (CNCs) within a health service in NSW, Australia, who wanted to investigate the CNC role from multiple stakeholder perspectives. With support from academic researchers, the CNCs designed and implemented the study. The aim of this research project was to investigate the role of the CNC from the multiple perspectives of CNCs and other stakeholders who work with CNCs in the Health District. This was a co-operative inquiry that utilised qualitative descriptive research approach. Co-operative inquiry methods enabled 11 CNCs to work as co-researchers and to conduct the investigation. The co-researchers implemented a qualitative descriptive design for the research and used interviews (7) and focus groups (16) with CNC stakeholders (n = 103) to gather sufficient data to investigate the role of the CNC in the organisation. Thematic analysis was undertaken to obtain the results. The CNC role is invaluable to all stakeholders and it was seen as the "glue" which holds teams together. Stakeholder expectations of the CNC role were multiple and generally agreed. Five themes derived from the data are reported as "clinical leadership as core", "making a direct difference to patient care", "service development as an outcome", "role breadth or narrowness and boundaries", and "career development". There was clear appreciation of the work that CNCs do in their roles, and the part that the CNC role plays in achieving quality health outcomes. The role of the CNC is complex and the CNCs themselves often negotiate these complexities to ensure beneficial outcomes for the patient and organisation. For the wider audience this study has given further insights into the role of these nurses and the perspectives of those with whom they work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram
Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Universal Common Communication Substrate (UCCS) is a low-level communication substrate that exposes high-performance communication primitives, while providing network interoperability. It is intended to support multiple upper layer protocol (ULPs) or programming models including SHMEM,UPC,Titanium,Co-Array Fortran,Global Arrays,MPI,GASNet, and File I/O. it provides various communication operations including one-sided and two-sided point-to-point, collectives, and remote atomic operations. In addition to operations for ULPs, it provides an out-of-band communication channel required typically required to wire-up communication libraries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hummon, M.; Jorgenson, J.; Denholm, P.
2013-10-01
Concentrating solar power with thermal energy storage (CSP-TES) can provide multiple benefits to the grid, including low marginal cost energy and the ability to levelize load, provide operating reserves, and provide firm capacity. It is challenging to properly value the integration of CSP because of the complicated nature of this technology. Unlike completely dispatchable fossil sources, CSP is a limited energy resource, depending on the hourly and daily supply of solar energy. To optimize the use of this limited energy, CSP-TES must be implemented in a production cost model with multiple decision variables for the operation of the CSP-TES plant.more » We develop and implement a CSP-TES plant in a production cost model that accurately characterizes the three main components of the plant: solar field, storage tank, and power block. We show the effect of various modelling simplifications on the value of CSP, including: scheduled versus optimized dispatch from the storage tank and energy-only operation versus co-optimization with ancillary services.« less
Modelling Concentrating Solar Power with Thermal Energy Storage for Integration Studies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hummon, M.; Denholm, P.; Jorgenson, J.
2013-10-01
Concentrating solar power with thermal energy storage (CSP-TES) can provide multiple benefits to the grid, including low marginal cost energy and the ability to levelize load, provide operating reserves, and provide firm capacity. It is challenging to properly value the integration of CSP because of the complicated nature of this technology. Unlike completely dispatchable fossil sources, CSP is a limited energy resource, depending on the hourly and daily supply of solar energy. To optimize the use of this limited energy, CSP-TES must be implemented in a production cost model with multiple decision variables for the operation of the CSP-TES plant.more » We develop and implement a CSP-TES plant in a production cost model that accurately characterizes the three main components of the plant: solar field, storage tank, and power block. We show the effect of various modelling simplifications on the value of CSP, including: scheduled versus optimized dispatch from the storage tank and energy-only operation versus co-optimization with ancillary services.« less
Analog optical computing primitives in silicon photonics
Jiang, Yunshan; DeVore, Peter T. S.; Jalali, Bahram
2016-03-15
Optical computing accelerators help alleviate bandwidth and power consumption bottlenecks in electronics. In this paper, we show an approach to implementing logarithmic-type analog co-processors in silicon photonics and use it to perform the exponentiation operation and the recovery of a signal in the presence of multiplicative distortion. Finally, the function is realized by exploiting nonlinear-absorption-enhanced Raman amplification saturation in a silicon waveguide.
Construction of a fuzzy and Boolean logic gates based on DNA.
Zadegan, Reza M; Jepsen, Mette D E; Hildebrandt, Lasse L; Birkedal, Victoria; Kjems, Jørgen
2015-04-17
Logic gates are devices that can perform logical operations by transforming a set of inputs into a predictable single detectable output. The hybridization properties, structure, and function of nucleic acids can be used to make DNA-based logic gates. These devices are important modules in molecular computing and biosensing. The ideal logic gate system should provide a wide selection of logical operations, and be integrable in multiple copies into more complex structures. Here we show the successful construction of a small DNA-based logic gate complex that produces fluorescent outputs corresponding to the operation of the six Boolean logic gates AND, NAND, OR, NOR, XOR, and XNOR. The logic gate complex is shown to work also when implemented in a three-dimensional DNA origami box structure, where it controlled the position of the lid in a closed or open position. Implementation of multiple microRNA sensitive DNA locks on one DNA origami box structure enabled fuzzy logical operation that allows biosensing of complex molecular signals. Integrating logic gates with DNA origami systems opens a vast avenue to applications in the fields of nanomedicine for diagnostics and therapeutics. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A universal heliostat control system
NASA Astrophysics Data System (ADS)
Gross, Fabian; Geiger, Mark; Buck, Reiner
2017-06-01
This paper describes the development of a universal heliostat control system as part of the AutoR project [1]. The system can control multiple receivers and heliostat types in a single application. The system offers support for multiple operators on different machines and is designed to be as adaptive as possible. Thus, the system can be used for different heliostat field setups with only minor adaptations of the system's source code. This is achieved by extensive usage of modern programming techniques like reflection and dependency injection. Furthermore, the system features co-simulation of a ray tracer, a reference PID-controller implementation for open volumetric receivers and methods for heliostat calibration and monitoring.
1998-01-01
still more higher . Consequently, it is appropriate to distribute rationally the researches complex and the applied tasks between different types of...manufacturing of satellites through its spin -off company. Surrey Satellite Technology Ltd. A new 350kg minisatellite (UoSAT-12) is being built at Surrey for...implement feedback from the Joint Science Team. INTRODUCTION RAMOS is a research program with multiple scientific objectives included in the dual mis
Design and Implementation of the Retinoblastoma Collaborative Laboratory.
Qaiser, Seemi; Limo, Alice; Gichana, Josiah; Kimani, Kahaki; Githanga, Jessie; Waweru, Wairimu; Dimba, Elizabeth A O; Dimaras, Helen
2017-01-01
The purpose of this work was to describe the design and implementation of a digital pathology laboratory, the Retinoblastoma Collaborative Laboratory (RbCoLab) in Kenya. The RbCoLab is a central lab in Nairobi that receives retinoblastoma specimens from all over Kenya. Specimens were processed using evidence-based standard operating procedures. Images were produced by a digital scanner, and pathology reports were disseminated online. The lab implemented standard operating procedures aimed at improving the accuracy, completeness, and timeliness of pathology reports, enhancing the care of Kenyan retinoblastoma patients. Integration of digital technology to support pathology services supported knowledge transfer and skills transfer. A bidirectional educational network of local pathologists and other clinicians in the circle of care of the patients emerged and served to emphasize the clinical importance of cancer pathology at multiple levels of care. A 'Robin Hood' business model of health care service delivery was developed to support sustainability and scale-up of cancer pathology services. The application of evidence-based protocols, comprehensive training, and collaboration were essential to bring improvements to the care of retinoblastoma patients in Kenya. When embraced as an integrated component of retinoblastoma care, digital pathology offers the opportunity for frequent connection and consultation for development of expertise over time.
Design and Implementation of the Retinoblastoma Collaborative Laboratory
Qaiser, Seemi; Limo, Alice; Gichana, Josiah; Kimani, Kahaki; Githanga, Jessie; Waweru, Wairimu; Dimba, Elizabeth A.O.; Dimaras, Helen
2017-01-01
Purpose The purpose of this work was to describe the design and implementation of a digital pathology laboratory, the Retinoblastoma Collaborative Laboratory (RbCoLab) in Kenya. Method The RbCoLab is a central lab in Nairobi that receives retinoblastoma specimens from all over Kenya. Specimens were processed using evidence-based standard operating procedures. Images were produced by a digital scanner, and pathology reports were disseminated online. Results The lab implemented standard operating procedures aimed at improving the accuracy, completeness, and timeliness of pathology reports, enhancing the care of Kenyan retinoblastoma patients. Integration of digital technology to support pathology services supported knowledge transfer and skills transfer. A bidirectional educational network of local pathologists and other clinicians in the circle of care of the patients emerged and served to emphasize the clinical importance of cancer pathology at multiple levels of care. A ‘Robin Hood’ business model of health care service delivery was developed to support sustainability and scale-up of cancer pathology services. Discussion The application of evidence-based protocols, comprehensive training, and collaboration were essential to bring improvements to the care of retinoblastoma patients in Kenya. When embraced as an integrated component of retinoblastoma care, digital pathology offers the opportunity for frequent connection and consultation for development of expertise over time. PMID:28275608
NASA Astrophysics Data System (ADS)
Obland, M. D.; Nehrir, A. R.; Lin, B.; Harrison, F. W.; Kooi, S. A.; Choi, Y.; Plant, J.; Yang, M. M.; Antill, C.; Campbell, J. F.; Ismail, S.; Browell, E. V.; Meadows, B.; Dobler, J. T.; Zaccheo, T. S.; Moore, B., III; Crowell, S.
2014-12-01
The ASCENDS CarbonHawk Experiment Simulator (ACES) is an Intensity-Modulated Continuous-Wave lidar system recently developed at NASA Langley Research Center that seeks to advance technologies and techniques critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. These advancements include: (1) increasing the power-aperture product to approach ASCENDS mission requirements by implementing multi-aperture telescopes and multiple co-aligned laser transmitters; (2) incorporating high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) developing and incorporating a high-bandwidth, low-noise HgCdTe detector and transimpedence amplifier (TIA) subsystem capable of long-duration operation on Global Hawk aircraft, and (4) advancing algorithms for cloud and aerosol discrimination. The ACES instrument architecture is being developed for operation on high-altitude aircraft and will be directly scalable to meet the ASCENDS mission requirements. ACES simultaneously transmits five laser beams: three from commercial EDFAs operating near 1571 nm, and two from the Exelis oxygen (O2) Raman fiber laser amplifier system operating near 1260 nm. The Integrated-Path Differential Absorption (IPDA) lidar approach is used at both wavelengths to independently measure the CO2 and O2 column number densities and retrieve the average column CO2 mixing ratio. The outgoing laser beams are aligned to the field of view of ACES' three fiber-coupled 17.8-cm diameter athermal telescopes. The backscattered light collected by the three telescopes is sent to the detector/TIA subsystem, which has a bandwidth of 4.7 MHz and operates service-free using a tactical dewar and cryocooler. Two key laser modulation approaches are being tested to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. Full instrument development concluded in the spring of 2014. After ground range tests of the instrument, ACES successfully completed six test flights on the Langley Hu-25 aircraft in July, 2014, and recorded data at multiple altitudes over land and ocean surfaces with and without intervening clouds. Preliminary results from these flights will be presented in this paper.
An Effect of the Co-Operative Network Model for Students' Quality in Thai Primary Schools
ERIC Educational Resources Information Center
Khanthaphum, Udomsin; Tesaputa, Kowat; Weangsamoot, Visoot
2016-01-01
This research aimed: 1) to study the current and desirable states of the co-operative network in developing the learners' quality in Thai primary schools, 2) to develop a model of the co-operative network in developing the learners' quality, and 3) to examine the results of implementation of the co-operative network model in the primary school.…
NASA Technical Reports Server (NTRS)
Obland, Michael D.; Nehrir, Amin R.; Lin, Bing; Harrison, F. Wallace; Kooi, Susan; Choi, Yonghoon; Plant, James; Yang, Melissa; Antill, Charles; Campbell, Joel;
2015-01-01
The ASCENDS CarbonHawk Experiment Simulator (ACES) is a newly developed lidar developed at NASA Langley Research Center and funded by NASA's Earth Science Technology Office (ESTO) Instrument Incubator Program (IIP) that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The technology advancements targeted include: (1) increasing the power-aperture product to approach ASCENDS mission requirements by implementing multi-aperture telescopes and multiple co-aligned laser transmitters; (2) incorporating high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) developing and incorporating a high-bandwidth, low-noise HgCdTe detector and transimpedence amplifier (TIA) subsystem capable of long-duration autonomous operation on Global Hawk aircraft, and (4) advancing algorithms for cloud and aerosol discrimination. The ACES instrument architecture is being developed for operation on high-altitude aircraft and will be directly scalable to meet the ASCENDS mission requirements. These technologies are critical towards developing not only spaceborne instruments but also their airborne simulators, with lower platform requirements for size, mass, and power, and with improved instrument performance for the ASCENDS mission. ACES transmits five laser beams: three from commercial EDFAs operating near 1.57 microns, and two from the Exelis oxygen (O2) Raman fiber laser amplifier system operating near 1.26 microns. The three EDFAs are capable of transmitting up to 10 watts average optical output power each and are seeded by compact, low noise, stable, narrow-linewidth laser sources stabilized with respect to a CO2 absorption line using a multi-pass gas absorption cell. The Integrated-Path Differential Absorption (IPDA) lidar approach is used at both wavelengths to independently measure the CO2 and O2 column number densities and retrieve the average column CO2 mixing ratio. The ACES receiver uses three fiber-coupled 17.8-cm diameter athermal telescopes. The transmitter assembly consists of five fiber-coupled laser collimators and an associated Risley prism pair for each laser to co-align the outgoing laser beams and to align them with the telescope field of view. The backscattered return signals collected by the three telescopes are combined in a fiber bundle and sent to a single low noise detector. The detector/TIA development has improved the existing detector subsystem by increasing its bandwidth to 4.7 MHz from 500 kHz and increasing the duration of autonomous, service-free operation periods from 4 hours to >24 hours. The new detector subsystem enables the utilization of higher laser modulation rates, which provides greater flexibility for implementing advanced thin-cloud discrimination algorithms as well as improving range-determination resolution and error reduction. The cloud/aerosol discrimination algorithm development by Langley and Exelis features a new suite of algorithms for the minimization/elimination of bias errors in the return signal induced by the presence of intervening thin clouds. Multiple laser modulation schemes are being tested in an effort to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. Full instrument development concluded in the spring of 2014. After ground range tests of the instrument, ACES successfully completed six test flights on the Langley Hu-25 aircraft in July, 2014, and recorded data at multiple altitudes over land and ocean surfaces with and without intervening clouds. Preliminary results from these test flights will be presented in this paper.
ERIC Educational Resources Information Center
Lavonen, Jari; Lattu, Matti; Juuti, Kalle; Meisalo, Veijo
2006-01-01
An ICT strategy and an implementation plan for teacher education were created in a co-operative process. Visions and expectations of staff members and students were registered by questionnaires and by making notes during sessions in which the strategy was created. Thereafter, an implementation document, where the staff development programme and…
2011-01-01
Background The paper combines the analytical and instrumental perspectives on communities of practice (CoPs) to reflect on potential challenges that may arise in the process of interprofessional and inter-organisational joint working within the Collaborations for Leaderships in Applied Health Research and Care (CLAHRCs)--partnerships between the universities and National Health Service (NHS) Trusts aimed at conducting applied health research and translating its findings into day-to-day clinical practice. Discussion The paper discusses seminal theoretical literature on CoPs as well as previous empirical research on the role of these communities in healthcare collaboration, which is organised around the following three themes: knowledge sharing within and across CoPs, CoP formation and manageability, and identity building in CoPs. It argues that the multiprofessional and multi-agency nature of the CLAHRCs operating in the traditionally demarcated organisational landscape of the NHS may present formidable obstacles to knowledge sharing between various professional groupings, formation of a shared 'collaborative' identity, and the development of new communities within the CLAHRCs. To cross multiple boundaries between various professional and organisational communities and hence enable the flow of knowledge, the CLAHRCs will have to create an effective system of 'bridges' involving knowledge brokers, boundary objects, and cross-disciplinary interactions as well as address a number of issues related to professional and organisational identification. Summary The CoP approach can complement traditional 'stage-of-change' theories used in the field of implementation research and provide a basis for designing theory-informed interventions and evaluations. It can help to illuminate multiple boundaries that exist between professional and organisational groups within the CLAHRCs and suggest ways of crossing those boundaries to enable knowledge transfer and organisational learning. Achieving the aims of the CLAHRCs and producing a sustainable change in the ways applied health research is conducted and implemented may be influenced by how effectively these organisations can navigate through the multiple CoPs involved and promote the development of new multiprofessional and multi-organisational communities united by shared practice and a shared sense of belonging--an assumption that needs to be explored by further empirical research. PMID:21699712
Kislov, Roman; Harvey, Gill; Walshe, Kieran
2011-06-23
The paper combines the analytical and instrumental perspectives on communities of practice (CoPs) to reflect on potential challenges that may arise in the process of interprofessional and inter-organisational joint working within the Collaborations for Leaderships in Applied Health Research and Care (CLAHRCs)--partnerships between the universities and National Health Service (NHS) Trusts aimed at conducting applied health research and translating its findings into day-to-day clinical practice. The paper discusses seminal theoretical literature on CoPs as well as previous empirical research on the role of these communities in healthcare collaboration, which is organised around the following three themes: knowledge sharing within and across CoPs, CoP formation and manageability, and identity building in CoPs. It argues that the multiprofessional and multi-agency nature of the CLAHRCs operating in the traditionally demarcated organisational landscape of the NHS may present formidable obstacles to knowledge sharing between various professional groupings, formation of a shared 'collaborative' identity, and the development of new communities within the CLAHRCs. To cross multiple boundaries between various professional and organisational communities and hence enable the flow of knowledge, the CLAHRCs will have to create an effective system of 'bridges' involving knowledge brokers, boundary objects, and cross-disciplinary interactions as well as address a number of issues related to professional and organisational identification. The CoP approach can complement traditional 'stage-of-change' theories used in the field of implementation research and provide a basis for designing theory-informed interventions and evaluations. It can help to illuminate multiple boundaries that exist between professional and organisational groups within the CLAHRCs and suggest ways of crossing those boundaries to enable knowledge transfer and organisational learning. Achieving the aims of the CLAHRCs and producing a sustainable change in the ways applied health research is conducted and implemented may be influenced by how effectively these organisations can navigate through the multiple CoPs involved and promote the development of new multiprofessional and multi-organisational communities united by shared practice and a shared sense of belonging--an assumption that needs to be explored by further empirical research.
International Space Station Payload Operations Integration Center (POIC) Overview
NASA Technical Reports Server (NTRS)
Ijames, Gayleen N.
2012-01-01
Objectives and Goals: Maintain and operate the POIC and support integrated Space Station command and control functions. Provide software and hardware systems to support ISS payloads and Shuttle for the POIF cadre, Payload Developers and International Partners. Provide design, development, independent verification &validation, configuration, operational product/system deliveries and maintenance of those systems for telemetry, commanding, database and planning. Provide Backup Control Center for MCC-H in case of shutdown. Provide certified personnel and systems to support 24x7 facility operations per ISS Program. Payloads CoFR Implementation Plan (SSP 52054) and MSFC Payload Operations CoFR Implementation Plan (POIF-1006).
Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1Area×Time=1AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature. PMID:28459831
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baranowski, S.T.; Culp, L.R.; Jonas, T.S.
1995-12-31
The Clean Air Act Amendments of 1990 require all Phase 1 and 2 utilities to install Continuous Emissions Monitoring Systems (CEMS), which can consist of instrumentation including analyzers and a Data Acquisition and Handling System (DAHS). To meet these requirements, Basin Electric Power Cooperative contracted with Black and Veatch to design and develop a DAHS which would meet all 40 CFR Part 75 requirements. Basin Electric provided the specifications for the DAHS and the graphical user interface, and B and V designed, developed, and installed a DAHS at 3 units at Laramie River Station, 2 units at Antelope Valley Station,more » and 2 units at Leland Olds Station. B and V utilized the existing equipment, adding to it a unique DAHS design. B and V designed and implemented the DAHS which operates in the OS/2 environment to interface with multiple programmable logic controllers. This unique software was written in modular form so that multiple programs run in unison, monitoring each other for errors to ensure continuous operation. The reporting structure is flexible to allow for a variety of formats, including those specifically required by the state to meet CAAA guidelines. Today, these seven units are in operation and comply with the CAAA. This paper describes the issues faced during specification, general design, compliance, and implementation of the DAHS at BEPC, including the lessons learned. The continuous emissions monitoring (CEM) system for each unit at BEPC consisted of a set of dilution probe analyzers for measuring SO{sub 2}, NO{sub x}, and CO{sub 2}.« less
Fast track in hip arthroplasty
Hansen, Torben Bæk
2017-01-01
‘Fast-track’ surgery was introduced more than 20 years ago and may be defined as a co-ordinated peri-operative approach aimed at reducing surgical stress and facilitating post-operative recovery.The fast-track programmes have now been introduced into total hip arthroplasty (THA) surgery with reduction in post-operative length of stay, shorter convalescence and rapid functional recovery without increased morbidity and mortality. This has been achieved by focusing on a multidisciplinary collaboration and establishing ‘fast-track’ units, with a well-defined organisational set-up tailored to deliver an accelerated peri-operative course of fast-track surgical THA procedures.Fast-track THA surgery now works extremely well in the standard THA patient. However, all patients are different and fine-tuning of the multiple areas in fast-track pathways to get patients with special needs or high co-morbidity burden through a safe and effective fast-track THA pathway is important.In this narrative review, the principles of fast-track THA surgery are presented together with the present status of implementation and perspectives for further improvements. Cite this article: EFORT Open Rev 2017;2. DOI: 10.1302/2058-5241.2.160060. Originally published online at www.efortopenreviews.org PMID:28630756
ERIC Educational Resources Information Center
Mary, Latisha
2014-01-01
The aim of this study was to investigate the role of co-operative games and circle time activities in fostering positive peer relations in two French Primary classrooms (N = 40). It presents French teachers' and pupils' perceptions of a set of co-operative games and circle time activities implemented within a year long study on personal, social…
NASA Astrophysics Data System (ADS)
Obland, M. D.; Antill, C.; Browell, E. V.; Campbell, J. F.; CHEN, S.; Cleckner, C.; Dijoseph, M. S.; Harrison, F. W.; Ismail, S.; Lin, B.; Meadows, B. L.; Mills, C.; Nehrir, A. R.; Notari, A.; Prasad, N. S.; Kooi, S. A.; Vitullo, N.; Dobler, J. T.; Bender, J.; Blume, N.; Braun, M.; Horney, S.; McGregor, D.; Neal, M.; Shure, M.; Zaccheo, T.; Moore, B.; Crowell, S.; Rayner, P. J.; Welch, W.
2013-12-01
The ASCENDS CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center project funded by NASA's Earth Science Technology Office that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The technologies being advanced are: (1) multiple transmitter and telescope-aperture operations, (2) high-efficiency CO2 laser transmitters, (3) a high bandwidth detector and transimpedance amplifier (TIA), and (4) advanced algorithms for cloud and aerosol discrimination. The instrument architecture is being developed for ACES to operate on a high-altitude aircraft, and it will be directly scalable to meet the ASCENDS mission requirements. The above technologies are critical for developing an airborne simulator and spaceborne instrument with lower platform consumption of size, mass, and power, and with improved performance. This design employs several laser transmitters and telescope-apertures to demonstrate column CO2 retrievals with alignment of multiple laser beams in the far-field. ACES will transmit five laser beams: three from commercial lasers operating near 1.57-microns, and two from the Exelis atmospheric oxygen (O2) fiber laser amplifier system operating near 1.26-microns. The Master Oscillator Power Amplifier at 1.57-microns measures CO2 column concentrations using an Integrated-Path Differential Absorption (IPDA) lidar approach. O2 column amounts needed for calculating the CO2 mixing ratio will be retrieved using the Exelis laser system with a similar IPDA approach. The three aperture telescope design was built to meet the constraints of the Global Hawk high-altitude unmanned aerial vehicle (UAV). This assembly integrates fiber-coupled transmit collimators for all of the laser transmitters and fiber-coupled optical signals from the three telescopes to the aft optics and detector package. The detector/TIA effort has improved the existing detector subsystem by: increasing its bandwidth to 5.4 MHz, exceeding the original goal of 5 MHz; reducing the overall mass from 18 lbs to <10 lbs; and increasing the duration of autonomous, service-free operation periods from 4 hrs to >24 hrs. The new detector subsystem will permit higher laser modulation rates, which provides greater flexibility for implementing thin-cloud discrimination algorithms as well as improving range resolution and error reduction, and will enable long-range flights on the Global Hawk. The cloud/aerosol discrimination work features development of new algorithms by Langley and Exelis for the avoidance of bias errors in the retrieval of column CO2 induced by the presence of thin clouds.
Secure Cloud Computing Implementation Study For Singapore Military Operations
2016-09-01
COMPUTING IMPLEMENTATION STUDY FOR SINGAPORE MILITARY OPERATIONS by Lai Guoquan September 2016 Thesis Advisor: John D. Fulp Co-Advisor...DATES COVERED Master’s thesis 4. TITLE AND SUBTITLE SECURE CLOUD COMPUTING IMPLEMENTATION STUDY FOR SINGAPORE MILITARY OPERATIONS 5. FUNDING NUMBERS...addition, from the military perspective, the benefits of cloud computing were analyzed from a study of the U.S. Department of Defense. Then, using
Multiple multicontrol unitary operations: Implementation and applications
NASA Astrophysics Data System (ADS)
Lin, Qing
2018-04-01
The efficient implementation of computational tasks is critical to quantum computations. In quantum circuits, multicontrol unitary operations are important components. Here, we present an extremely efficient and direct approach to multiple multicontrol unitary operations without decomposition to CNOT and single-photon gates. With the proposed approach, the necessary two-photon operations could be reduced from O( n 3) with the traditional decomposition approach to O( n), which will greatly relax the requirements and make large-scale quantum computation feasible. Moreover, we propose the potential application to the ( n- k)-uniform hypergraph state.
Drew, Sarah; Judge, Andrew; May, Carl; Farmer, Andrew; Cooper, Cyrus; Javaid, M Kassim; Gooberman-Hill, Rachael
2015-04-23
National and international guidance emphasizes the need for hospitals to have effective secondary fracture prevention services, to reduce the risk of future fractures in hip fracture patients. Variation exists in how hospitals organize these services, and there remain significant gaps in care. No research has systematically explored reasons for this to understand how to successfully implement these services. The objective of this study was to use extended Normalization Process Theory to understand how secondary fracture prevention services can be successfully implemented. Forty-three semi-structured interviews were conducted with healthcare professionals involved in delivering secondary fracture prevention within 11 hospitals that receive patients with acute hip fracture in one region in England. These included orthogeriatricians, fracture prevention nurses and service managers. Extended Normalization Process Theory was used to inform study design and analysis. Extended Normalization Process Theory specifies four constructs relating to collective action in service implementation: capacity, potential, capability and contribution. The capacity of healthcare professionals to co-operate and co-ordinate their actions was achieved using dedicated fracture prevention co-ordinators to organize important processes of care. However, participants described effective communication with GPs as challenging. Individual potential and commitment to operationalize services was generally high. Shared commitments were promoted through multi-disciplinary team working, facilitated by fracture prevention co-ordinators. Healthcare professionals had capacity to deliver multiple components of services when co-ordinators 'freed up' time. As key agents in its intervention, fracture prevention coordinators were therefore indispensable to effective implementation. Aside from difficulty of co-ordination with primary care, the intervention was highly workable and easily integrated into practice. Nevertheless, implementation was threatened by under-staffed and under-resourced services, lack of capacity to administer scans and poor patient access. To ensure ongoing service delivery, the contributions of healthcare professionals were shaped by planning, in multi-disciplinary team meetings, the use of clinical databases to identify patients and define the composition of clinical work and monitoring to improve clinical practice. Findings identify and describe elements needed to implement secondary fracture prevention services successfully. The study highlights the value of Normalization Process Theory to achieve comprehensive understanding of healthcare professionals' experiences in enacting a complex intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and some time conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to complymore » with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based FutureGen power and hydrogen production plant.« less
VLBI observations to the APOD satellite
NASA Astrophysics Data System (ADS)
Sun, Jing; Tang, Geshi; Shu, Fengchun; Li, Xie; Liu, Shushi; Cao, Jianfeng; Hellerschmied, Andreas; Böhm, Johannes; McCallum, Lucia; McCallum, Jamie; Lovell, Jim; Haas, Rüdiger; Neidhardt, Alexander; Lu, Weitao; Han, Songtao; Ren, Tianpeng; Chen, Lue; Wang, Mei; Ping, Jinsong
2018-02-01
The APOD (Atmospheric density detection and Precise Orbit Determination) is the first LEO (Low Earth Orbit) satellite in orbit co-located with a dual-frequency GNSS (GPS/BD) receiver, an SLR reflector, and a VLBI X/S dual band beacon. From the overlap statistics between consecutive solution arcs and the independent validation by SLR measurements, the orbit position deviation was below 10 cm before the on-board GNSS receiver got partially operational. In this paper, the focus is on the VLBI observations to the LEO satellite from multiple geodetic VLBI radio telescopes, since this is the first implementation of a dedicated VLBI transmitter in low Earth orbit. The practical problems of tracking a fast moving spacecraft with current VLBI ground infrastructure were solved and strong interferometric fringes were obtained by cross-correlation of APOD carrier and DOR (Differential One-way Ranging) signals. The precision in X-band time delay derived from 0.1 s integration time of the correlator output is on the level of 0.1 ns. The APOD observations demonstrate encouraging prospects of co-location of multiple space geodetic techniques in space, as a first prototype.
Influence of System Operation Method on CO2 Emissions of PV/Solar Heat/Cogeneration System
NASA Astrophysics Data System (ADS)
Oke, Shinichiro; Kemmoku, Yoshishige; Takikawa, Hirofumi; Sakakibara, Tateki
A PV/solar heat/cogeneration system is assumed to be installed in a hotel. The system is operated with various operation methods: CO2 minimum operation, fees minimum operation, seasonal operation, daytime operation and heat demand following operation. Of these five operations, the former two are virtual operations that are operated with the dynamic programming method, and the latter three are actual operations. Computer simulation is implemented using hourly data of solar radiation intensity, atmospheric temperature, electric, cooling, heating and hot water supply demands for one year, and the life-cycle CO2 emission and the total cost are calculated for every operations. The calculation results show that the virtual two and the actual three operations reduce the life-cycle CO2 emission by 21% and 13% compared with the conventional system, respectively. In regard to both the CO2 emission and the cost, there is no significant difference between the virtual two operation methods or among actual three operation methods.
Salmon-Ceron, Dominique; Cohen, Julien; Winnock, Maria; Roux, Perrine; Sadr, Firouze Bani; Rosenthal, Eric; Martin, Isabelle Poizot; Loko, Marc-Arthur; Mora, Marion; Sogni, Philippe; Spire, Bruno; Dabis, François; Carrieri, Maria Patrizia
2012-03-12
Treatment for the hepatitis C virus (HCV) may be delayed significantly in HIV/HCV co-infected patients. Our study aims at identifying the correlates of access to HCV treatment in this population. We used 3-year follow-up data from the HEPAVIH ANRS-CO13 nationwide French cohort which enrolled patients living with HIV and HCV. We included pegylated interferon and ribavirin-naive patients (N = 600) at enrolment. Clinical/biological data were retrieved from medical records. Self-administered questionnaires were used for both physicians and their patients to collect data about experience and behaviors, respectively. Median [IQR] follow-up was 12[12-24] months and 124 patients (20.7%) had started HCV treatment. After multiple adjustment including patients' negative beliefs about HCV treatment, those followed up by a general practitioner working in a hospital setting were more likely to receive HCV treatment (OR[95%CI]: 1.71 [1.06-2.75]). Patients followed by general practitioners also reported significantly higher levels of alcohol use, severe depressive symptoms and poor social conditions than those followed up by other physicians. Hospital-general practitioner networks can play a crucial role in engaging patients who are the most vulnerable and in reducing existing inequities in access to HCV care. Further operational research is needed to assess to what extent these models can be implemented in other settings and for patients who bear the burden of multiple co-morbidities.
NASA Technical Reports Server (NTRS)
Vo, San C.; Biegel, Bryan (Technical Monitor)
2001-01-01
Scalar multiplication is an essential operation in elliptic curve cryptosystems because its implementation determines the speed and the memory storage requirements. This paper discusses some improvements on two popular signed window algorithms for implementing scalar multiplications of an elliptic curve point - Morain-Olivos's algorithm and Koyarna-Tsuruoka's algorithm.
2011-01-01
USA) 2011 Abstract The NOAA Great Lakes Operational Forecast System ( GLOFS ) uses near-real-time atmospheric observa- tions and numerical weather...Operational Oceanographic Products and Services (CO-OPS) in Silver Spring, MD. GLOFS has been making operational nowcasts and forecasts at CO-OPS... GLOFS ) uses near-real-time atmospheric observations and numerical weather prediction forecast guidance to produce three-dimensional forecasts of water
ERIC Educational Resources Information Center
Fielke, Simon J.; Botha, Neels; Reid, Janet; Gray, David; Blackett, Paula; Park, Nicola; Williams, Tracy
2018-01-01
Purpose: This paper highlights important lessons for co-innovation drawn from three ex-post case study innovation projects implemented within three sub-sectors of the primary industry sector in New Zealand. Design/methodology/approach: The characteristics that fostered co-innovation in each innovation project case study were identified from…
Kück, Patrick; Struck, Torsten H
2014-01-01
BaCoCa (BAse COmposition CAlculator) is a user-friendly software that combines multiple statistical approaches (like RCFV and C value calculations) to identify biases in aligned sequence data which potentially mislead phylogenetic reconstructions. As a result of its speed and flexibility, the program provides the possibility to analyze hundreds of pre-defined gene partitions and taxon subsets in one single process run. BaCoCa is command-line driven and can be easily integrated into automatic process pipelines of phylogenomic studies. Moreover, given the tab-delimited output style the results can be easily used for further analyses in programs like Excel or statistical packages like R. A built-in option of BaCoCa is the generation of heat maps with hierarchical clustering of certain results using R. As input files BaCoCa can handle FASTA and relaxed PHYLIP, which are commonly used in phylogenomic pipelines. BaCoCa is implemented in Perl and works on Windows PCs, Macs and Linux operating systems. The executable source code as well as example test files and a detailed documentation of BaCoCa are freely available at http://software.zfmk.de. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ogoshi, Yasuhiro; Nakai, Akio; Ogoshi, Sakiko; Mitsuhashi, Yoshinori; Araki, Chikahiro
A key aspect of the optimal support of students with special needs is co-ordination and co-operation between school, home and specialized agencies. Communication between these entities is of prime importance and can be facilitated through the use of a support system implementing ICF guidelines as outlined. This communication system can be considered to be a preventative rather than allopathic support.
Shieh, W; Yang, Q; Ma, Y
2008-04-28
Coherent optical OFDM (CO-OFDM) has emerged as an attractive modulation format for the forthcoming 100 Gb/s Ethernet. However, even the spectral-efficient implementation of CO-OFDM requires digital-to-analog converters (DAC) and analog-to-digital converters (ADC) to operate at the bandwidth which may not be available today or may not be cost-effective. In order to resolve the electronic bandwidth bottleneck associated with DAC/ADC devices, we propose and elucidate the principle of orthogonal-band-multiplexed OFDM (OBM-OFDM) to subdivide the entire OFDM spectrum into multiple orthogonal bands. With this scheme, the DAC/ADCs do not need to operate at extremely high sampling rate. The corresponding mapping to the mixed-signal integrated circuit (IC) design is also revealed. Additionally, we show the proof-of-concept transmission experiment through optical realization of OBM-OFDM. To the best of our knowledge, we present the first experimental demonstration of 107 Gb/s QPSK-encoded CO-OFDM signal transmission over 1000 km standard-single- mode-fiber (SSMF) without optical dispersion compensation and without Raman amplification. The demonstrated system employs 2x2 MIMO-OFDM signal processing and achieves high electrical spectral efficiency with direct-conversion at both transmitter and receiver.
A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus
NASA Astrophysics Data System (ADS)
Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir
2016-07-01
This paper considers eigenvalue estimation for the decentralized inference problem for spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, which is referred to as generalized power method GPM; it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
[Gastric mucosa tonometry in routine monitoring in the surgical intensive care unit].
Pestel, G; Uhlig, T; Götschl, A; Schmucker, P; Rothhammer, A
1998-06-01
Monitoring tissue oxygenation in the splanchnic region could be helpful for critically ill patients. In this study the postoperative course of gastric mucosal CO2 (prCO2) in 40 patients is shown. Following approval of the ethics committee, 24 patients schedulded for surgery with an expected large fluid turnover and 16 multiple injured patients were monitored with a gas tonometry device in addition to standard monitoring (ECG, pulse oximetry, capnometry, CVP, arterial pressure). Normoventilated patients with prCO2 > 50 for more than 30 minutes were treated with fluid therapy, followed by catecholamine therapy, followed by transfusion (fig. 1). All patients were admitted to the SICU post-operatively. The variation of prCO2-values was greater in multiple injured patients. Their prCO2-values began in a lower range compared to patients with scheduled operation, became higher at the end of the first SICU-day and remained higher thereafter. They had a higher fluid turnover and needed more catecholamines. Multiple injured patients with an arterio-intestinal CO2-Difference (CO2-Gap) > 10 had a higher ISS-Score, were longer mechanically ventilated, had a longer SICU-stay and a higher incidence of complications in comparison to patients with aCO2-Gap < 10. Perhaps a CO2-Gap > 10 could be predictive for a more severe course in intensive care patients.
UTM TCL2 Software Requirements
NASA Technical Reports Server (NTRS)
Smith, Irene S.; Rios, Joseph L.; McGuirk, Patrick O.; Mulfinger, Daniel G.; Venkatesan, Priya; Smith, David R.; Baskaran, Vijayakumar; Wang, Leo
2017-01-01
The Unmanned Aircraft Systems (UAS) Traffic Management (UTM) Technical Capability Level (TCL) 2 software implements the UTM TCL 2 software requirements described herein. These software requirements are linked to the higher level UTM TCL 2 System Requirements. Each successive TCL implements additional UTM functionality, enabling additional use cases. TCL 2 demonstrated how to enable expanded multiple operations by implementing automation for beyond visual line-of-sight, tracking operations, and operations flying over sparsely populated areas.
2012-01-01
Background Treatment for the hepatitis C virus (HCV) may be delayed significantly in HIV/HCV co-infected patients. Our study aims at identifying the correlates of access to HCV treatment in this population. Methods We used 3-year follow-up data from the HEPAVIH ANRS-CO13 nationwide French cohort which enrolled patients living with HIV and HCV. We included pegylated interferon and ribavirin-naive patients (N = 600) at enrolment. Clinical/biological data were retrieved from medical records. Self-administered questionnaires were used for both physicians and their patients to collect data about experience and behaviors, respectively. Results Median [IQR] follow-up was 12[12-24] months and 124 patients (20.7%) had started HCV treatment. After multiple adjustment including patients' negative beliefs about HCV treatment, those followed up by a general practitioner working in a hospital setting were more likely to receive HCV treatment (OR[95%CI]: 1.71 [1.06-2.75]). Patients followed by general practitioners also reported significantly higher levels of alcohol use, severe depressive symptoms and poor social conditions than those followed up by other physicians. Conclusions Hospital-general practitioner networks can play a crucial role in engaging patients who are the most vulnerable and in reducing existing inequities in access to HCV care. Further operational research is needed to assess to what extent these models can be implemented in other settings and for patients who bear the burden of multiple co-morbidities. PMID:22409788
Research on Sino-Australia Co-operative Program Teaching Management Pattern and Implementation Plan
ERIC Educational Resources Information Center
Yang, Dong
2008-01-01
With the rapid development of Chinese economy and education, deepening reform and open-up policy, more and more co-operative education programs are established in China. Among them, some programs are just copies of Western style or pattern, which has no Chinese characteristics. This article elaborates on the Sino-Australia program offered at…
Riedl, S
2002-02-01
The operating unit is one of the cost-intensive facilities in a surgical clinic with a pacemaking function for most of the internal procedures. The power of performance of the operating unit is based on the co-operation of all disciplines and professions involved. The key to management of the operating unit is not only to co-ordinate the daily procedures, but also to interact with support personnel. To ensure successful OR management, the internal structure of the OR must fit the clinical tasks and the available quantity of personnel in each profession must be co-ordinated. Sufficient utilization of resources and equipment must be guaranteed without cost-intensive over-capacities and patient flow must be orientated to OR capacities. The development of such a business structure requires the management to clearly define the goal, to know the actual on-site data in detail with regard to the idiosyncratic workings of each speciality and to clearly assign the competence of each member of the team working in the OR. Co-ordination of the operating unit is the main task of OR management, which must ensure the following: transparent and co-ordinated schedule management in the various operative specialities, goal-directed changes of the schedule with incorporation of emergencies, as well as effective organization of staff. In order to realize these tasks, it is reasonable to implement interdisciplinary rules of procedures. In addition, the assignment of a neutral decision-making body within the OR and the creation of an information center for all OR personnel. The challenge of OR organization in the future is to implement more effective documentation systems and active controlling within the OR. One can ensure adequate utilization of resources in the OR with prospectively oriented planning. Better transparence of operations in the OR contributes to increased efficiency. Implementation of quality management is the foundation for a successfully operating surgical hospital. Not only the productivity of individual members of the staff, but also the precise documentation of the quality of results will become important parameters in a successful surgical hospital, whose nucleus is the OR.
NASA Astrophysics Data System (ADS)
Tatiara, R.; Fajar, A. N.; Siregar, B.; Gunawan, W.
2018-03-01
The purpose of this research is to determine multi factors that inhibiting the implementation of the ISMS based on ISO 2700. It is also to propose a follow-up recommendation on the factors that inhibit the implementation of the ISMS. Data collection is derived from questionnaires to 182 respondents from users in data center operation (DCO) at bca, Indonesian telecommunication international (telin), and data centre division at Indonesian Ministry of Health. We analysing data collection with multiple linear regression analysis and paired t-test. The results are multiple factors which inhibiting the implementation of the ISMS from the three organizations which has implement and operate the ISMS, ISMS documentation management, and continual improvement. From this research, we concluded that the processes of implementation in ISMS is the necessity of the role of all parties in succeeding the implementation of the ISMS continuously.
Modular multiplication in GF(p) for public-key cryptography
NASA Astrophysics Data System (ADS)
Olszyna, Jakub
Modular multiplication forms the basis of modular exponentiation which is the core operation of the RSA cryptosystem. It is also present in many other cryptographic algorithms including those based on ECC and HECC. Hence, an efficient implementation of PKC relies on efficient implementation of modular multiplication. The paper presents a survey of most common algorithms for modular multiplication along with hardware architectures especially suitable for cryptographic applications in energy constrained environments. The motivation for studying low-power and areaefficient modular multiplication algorithms comes from enabling public-key security for ultra-low power devices that can perform under constrained environments like wireless sensor networks. Serial architectures for GF(p) are analyzed and presented. Finally proposed architectures are verified and compared according to the amount of power dissipated throughout the operation.
NASA Astrophysics Data System (ADS)
Majidi, Pasha; Pickup, Peter G.
2014-12-01
A direct ethanol fuel cell has been operated under sinusoidal (AC) potential cycling conditions in order to increase the yield of carbon dioxide and thereby increase cell efficiency relative to operation at a fixed potential. At 80 °C, faradaic yields of CO2 as high as 25% have been achieved with a PtRu anode catalyst, while the maximum CO2 production at constant potential was 13%. The increased yields under cycling conditions have been attributed to periodic oxidative stripping of adsorbed CO. These results will be important in the optimization of operating conditions for direct ethanol fuel cells, where the benefits of potential cycling are projected to increase as catalysts that produce CO2 more efficiently are implemented.
Genetic Algorithms for Multiple-Choice Problems
NASA Astrophysics Data System (ADS)
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems.It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success.Two multiple-choice problems are considered.The first is constructing a feasible nurse roster that considers as many requests as possible.In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income.Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems.However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework.Hence, the main theme of this work is to balance feasibility and cost of solutions.In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches.The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis.To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions.As well as a theoretical discussion as to the underlying reasons for using each operator,extensive computational experiments are carried out on a variety of data.These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
FastGCN: A GPU Accelerated Tool for Fast Gene Co-Expression Networks
Liang, Meimei; Zhang, Futao; Jin, Gulei; Zhu, Jun
2015-01-01
Gene co-expression networks comprise one type of valuable biological networks. Many methods and tools have been published to construct gene co-expression networks; however, most of these tools and methods are inconvenient and time consuming for large datasets. We have developed a user-friendly, accelerated and optimized tool for constructing gene co-expression networks that can fully harness the parallel nature of GPU (Graphic Processing Unit) architectures. Genetic entropies were exploited to filter out genes with no or small expression changes in the raw data preprocessing step. Pearson correlation coefficients were then calculated. After that, we normalized these coefficients and employed the False Discovery Rate to control the multiple tests. At last, modules identification was conducted to construct the co-expression networks. All of these calculations were implemented on a GPU. We also compressed the coefficient matrix to save space. We compared the performance of the GPU implementation with those of multi-core CPU implementations with 16 CPU threads, single-thread C/C++ implementation and single-thread R implementation. Our results show that GPU implementation largely outperforms single-thread C/C++ implementation and single-thread R implementation, and GPU implementation outperforms multi-core CPU implementation when the number of genes increases. With the test dataset containing 16,000 genes and 590 individuals, we can achieve greater than 63 times the speed using a GPU implementation compared with a single-thread R implementation when 50 percent of genes were filtered out and about 80 times the speed when no genes were filtered out. PMID:25602758
FastGCN: a GPU accelerated tool for fast gene co-expression networks.
Liang, Meimei; Zhang, Futao; Jin, Gulei; Zhu, Jun
2015-01-01
Gene co-expression networks comprise one type of valuable biological networks. Many methods and tools have been published to construct gene co-expression networks; however, most of these tools and methods are inconvenient and time consuming for large datasets. We have developed a user-friendly, accelerated and optimized tool for constructing gene co-expression networks that can fully harness the parallel nature of GPU (Graphic Processing Unit) architectures. Genetic entropies were exploited to filter out genes with no or small expression changes in the raw data preprocessing step. Pearson correlation coefficients were then calculated. After that, we normalized these coefficients and employed the False Discovery Rate to control the multiple tests. At last, modules identification was conducted to construct the co-expression networks. All of these calculations were implemented on a GPU. We also compressed the coefficient matrix to save space. We compared the performance of the GPU implementation with those of multi-core CPU implementations with 16 CPU threads, single-thread C/C++ implementation and single-thread R implementation. Our results show that GPU implementation largely outperforms single-thread C/C++ implementation and single-thread R implementation, and GPU implementation outperforms multi-core CPU implementation when the number of genes increases. With the test dataset containing 16,000 genes and 590 individuals, we can achieve greater than 63 times the speed using a GPU implementation compared with a single-thread R implementation when 50 percent of genes were filtered out and about 80 times the speed when no genes were filtered out.
ERIC Educational Resources Information Center
Moni, Roger W.; Depaz, Iris; Lluka, Lesley J.
2008-01-01
We report findings from a case study of co-operative, group-based assessment in Pharmacology for second-year undergraduates at The University of Queensland, Australia. Students enrolled in the 2005 Bachelor of Science and 2006 Bachelor of Pharmacy degree programs, were early users of the university's new Collaborative Teaching and Learning Centre…
Coinductive Logic Programming with Negation
NASA Astrophysics Data System (ADS)
Min, Richard; Gupta, Gopal
We introduce negation into coinductive logic programming (co-LP) via what we term Coinductive SLDNF (co-SLDNF) resolution. We present declarative and operational semantics of co-SLDNF resolution and present their equivalence under the restriction of rationality. Co-LP with co-SLDNF resolution provides a powerful, practical and efficient operational semantics for Fitting's Kripke-Kleene three-valued logic with restriction of rationality. Further, applications of co-SLDNF resolution are also discussed and illustrated where Co-SLDNF resolution allows one to develop elegant implementations of modal logics. Moreover it provides the capability of non-monotonic inference (e.g., predicate Answer Set Programming) that can be used to develop novel and effective first-order modal non-monotonic inference engines.
NASA Technical Reports Server (NTRS)
Patel, Deepak
2014-01-01
Thermal and Fluids Analysis Workshop, Cleveland OH. NCTS 19701-14. On Dec 2013 a Loop Heat Pipe (LHP) test was performed as part of the integral Laser Thermal Control System (LTCS). During the balance portion of this testing it was noticed that the LHP was not going to be able to maintain temperature on the operational thermal mass. The test was stopped. After multiple meetings with the LTCS designers, LHP experts (in house and external) it was concluded that gravity was preventing the control heaters to maintain control on the reservoir. A heater was installed onto the liquid return line as part of the fix. After implementing the fix on the liquid return line, the test on May 2014 proved that the system works in vertical orientation using the liquid line heater. Through this testing, the correlation of the Deepak Condenser Model (DeCoM) was possible. This paper describes how well DeCoM predicts the condenser behavior in comparison to the test results of LTCS test.
Nickel, Stefan; Trojan, Alf; Kofahl, Christopher
2017-04-01
The importance of patient participation and involvement is now widely acknowledged; in the past, few systematic health-care institution policies existed to establish sustainable co-operation. In 2004, in Germany, the initiative 'Self-Help Friendliness (SHF) and Patient-Centeredness in Health Care' was launched to establish and implement quality criteria related to collaboration with patient groups. The objective of this study was to describe (i) how patients were involved in the development of SHF by summarizing a number of studies and (ii) a new survey on the importance and feasibility of SHF. In a series of participative studies, SHF was shaped, tested and implemented in 40 health-care institutions in Germany. Representatives from 157 self-help groups (SHGs), 50 self-help organizations and 17 self-help clearing houses were actively involved. The second objective was reached through a survey of 74 of the 115 member associations of the biggest self-help umbrella organization at federal level (response rate: 64 %). Patient involvement included the following: identification of the needs and wishes of SHGs regarding co-operation, their involvement in the definition of quality criteria of co-operation, having a crucial role during the implementation of SHF and accrediting health-care institutions as self-help friendly. The ten criteria in total were positively valued and perceived as moderately practicable. Through the intensive involvement of self-help representatives, it was feasible to develop SHF as a systematic approach to closer collaboration of professionals and SHGs. Some challenges have to be taken into account involving patients and the limitations of our empirical study. © 2016 The Authors. Health Expectations published by John Wiley & Sons Ltd.
Blankush, Joseph M; Freeman, Robbie; McIlvaine, Joy; Tran, Trung; Nassani, Stephen; Leitman, I Michael
2017-10-01
Modified Early Warning Scores (MEWS) provide real-time vital sign (VS) trending and reduce ICU admissions in post-operative patients. These early warning calculations classically incorporate oxygen saturation, heart rate, respiratory rate, systolic blood pressure, and temperature but have not previously included end-tidal CO2 (EtCO 2 ), more recently identified as an independent predictor of critical illness. These systems may be subject to failure when physiologic data is incorrectly measured, leading to false alarms and increased workload. This study investigates whether the implementation of automated devices that utilize ongoing vital signs monitoring and MEWS calculations, inclusive of a score for end-tidal CO 2 (EtCO 2 ), can be feasibly implemented on the general care hospital floor and effectively identify derangements in a post-operative patient's condition while limiting the amount of false alarms that would serve to increase provider workload. From July to November 2014, post-operative patients meeting the inclusion criteria (BMI > 30 kg/m 2 , history of obstructive sleep apnea, or the use of patient-controlled analgesia (PCA) or epidural narcotics) were monitored using automated devices that record minute-by-minute VS included in classic MEWS calculations as well as EtCO 2 . Automated messages via pagers were sent to providers for instances when the device measured elevated MEWS, abnormal EtCO 2 , and oxygen desaturations below 85 %. Data, including alarm and message details from the first 133 patients, were recorded and analyzed. Overall, 3.3 alarms and pages sounded per hour of monitoring. Device-only alarms sounded 2.7 times per hour-21 % were technical alarms. The remaining device-only alarms for concerning VS sounded 2.0/h, 70 % for falsely recorded VS. Pages for abnormal EtCO 2 sounded 0.4/h (82 % false recordings) while pages for low blood oxygen saturation sounded 0.1/h (55 % false alarms). 143 times (0.1 pages/h) the devices calculated a MEWS warranting a page (rise in MEWS by 2 or 5 or greater)-62 % were false scores inclusive of falsely recorded VS. An abnormal EtCO 2 value resulted in or added to an elevated MEWS score in 29 % of notifications, but 50 % of these included a falsely abnormal EtCO 2 value. To date, no adverse events have occurred. There were no statistically significant demographic, post-operative condition, or pre-existing comorbidity differences between patients who had a majority of true alarms from those who had mostly false-positive alarms. Although not statistically significant, the group of patients in whom automated MEWS suggested greater utility included those with a history of hypertension (p = 0.072) and renal disease (p = 0.084). EtCO 2 monitoring was more likely to be useful in patients with a history of type 2 diabetes, coronary artery disease, and obstructive sleep apnea (p < 0.05). These patients were also more likely to have been on a PCA post-operatively (p < 0.05). Overall, non-invasive physiologic monitoring incorporating an automated MEWS system, modified to include end-tidal CO2 can be feasibly implemented in a hospital ward. Further study is needed to evaluate its clinical utility, including an end-tidal CO 2 score, is feasibly implemented and can be useful in monitoring select post-operative patients for derangements in physiologic metrics. Like any other monitoring system, false alarms may occur at high rates. While further study is needed to determine the additive utility of EtCO 2 in MEWS calculations, this study suggests utility of EtCO 2 in select post-operative patients.
NASA Astrophysics Data System (ADS)
Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.
2011-12-01
Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass of CO2 sequestered. This heuristic optimization method is chosen because of its robustness in optimizing large-scale, highly non-linear problems. Trade-off curves are developed for multiple fictional sites with the intent of clarifying how variations in domain characteristics (aquifer thickness, aquifer and weak cap rock permeability, the number of weak cap rock areas, and the number of aquifer-cap rock layers) affect Pareto-optimal fronts. Computational benefits of using semi-analytical leakage models are explored and discussed. [1] Birkholzer, J. (2008) "Research Project on CO2 Geological Storage and Groundwater Resources: Water Quality Effects Caused by CO2 Intrusion into Shallow Groundwater" Berkeley (CA): Lawrence Berkeley National Laboratory (US); 2008 Oct. 473 p. Report No.: 510-486-7134. [2] Celia, M.A. and Nordbotten, J.M. (2011) "Field-scale application of a semi-analytical model for estimation of CO2 and brine leakage along old wells" International Journal of Greenhouse Gas Control, 5 (2011), 257-269. [3] Nordbotten, J.M. and Celia, M.A. (2009) "Model for CO2 leakage including multiple geological layers and multiple leaky wells" Environ. Sci. Technol., 43, 743-749.
ERIC Educational Resources Information Center
Murphy, Ellen; Grey, Ian M.; Honan, Rita
2005-01-01
As part of a larger study regarding the inclusion of children with disabilities in mainstream classroom settings, Ellen Murphy, of the D Clin Psych programme at NUI Galway, with Ian Grey and Rita Honan, from Trinity College, Dublin, reviewed existing literature on co-operative learning in the classroom. In this article, they identify four models…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loring, John S.; Ilton, Eugene S.; Chen, Jeffrey
Shale formations play fundamental roles in large-scale geologic carbon sequestration (GCS) aimed primarily to mitigate climate change, and in smaller-scale GCS targeted mainly for CO2-enhanced gas recovery operations. In both technologies, CO2 is injected underground as a supercritical fluid (scCO2), where interactions with shale minerals could influence successful GCS implementation. Reactive components of shales include expandable clays, such as montmorillonites and mixed-layer illite/smectite clays. In this work, we used in situ X-ray diffraction (XRD) and in situ infrared (IR) spectroscopy to investigate the swelling/shrinkage and water/CO2 sorption of a pure montmorillonite, Na-SWy-2, when the clay is exposed to variably hydratedmore » scCO2 at 50 °C and 90 bar. Measured interlayer spacings and sorbed water concentrations at varying levels of scCO2 hydration are similar to previously reported values measured in air at ambient pressure over a range of relative humidities. IR spectra show evidence of both water and CO2 intercalation, and variations in peak shapes and positions suggest multiple sorbed types with distinct chemical environments. Based on the intensity of the asymmetric CO stretching band of the CO2 associated with the Na-SWy-2, we observed a significant increase in sorbed CO2 as the clay expands from a 0W to a 1W state, suggesting that water props open the interlayer so that CO2 can enter. However, as the clay transitions from a 1W to a 2W state, CO2 desorbs sharply. These observations were placed in the context of two conceptual models concerning hydration mechanisms for expandable clays and were also discussed in light of recent theoretical studies on CO2-H2O-clay interactions. The swelling/shrinkage of expandable clays could affect solid volume, porosity, and permeability of shales. Consequently, the results from this work could aid predictions of shale caprock integrity in large-scale GCS, as well as methane transmissivity in enhanced gas recovery operations.« less
Kellom, Katherine S; Matone, Meredith; Adejare, Aderinola; Barg, Frances K; Rubin, David M; Cronholm, Peter F
2018-06-01
Objectives The aim of this paper is to explore the process and impact of co-locating evidence-based maternal and child service models to inform future implementation efforts. Methods As part of a state-wide evaluation of maternal and child home visiting programs, we conducted semi-structured interviews with administrators and home visitors from home visiting agencies across Pennsylvania. We collected 33 interviews from 4 co-located agencies. We used the Consolidated Framework for Implementation Research (CFIR) to describe the key elements mitigating implementation of multiple home visiting models. Results A primary advantage of co-location described by participants was the ability to increase the agency's base of eligible clients through the implementation of a model with different program eligibility (e.g. income, child age) than the existing agency offering. Model differences related to curriculum (e.g. content or intensity/meeting frequency) enabled programs to more selectively match clients to models. To recruit eligible clients, new models were able to build upon the existing service networks of the initial program. Co-location provided organizational opportunities for shared trainings, enabling administrative efficiencies and collaborative staff learning. Programs implemented strategies to build synergies with complementary model features, for instance using the additional program option to serve waitlisted clients and to transition services after one model is completed. Conclusions for Practice Considerable benefits are experienced when home visiting models co-locate. This research builds on literature encouraging collaboration among community agencies and provides insight on a specific facilitative approach. This implementation strategy informs policy across the social services spectrum and competitive funding contexts.
Wallin, Carl-Johan; Kalman, Sigridur; Sandelin, Annika; Färnert, May-Lena; Dahlstrand, Ursula; Jylli, Leena
2015-03-01
Positive safety and a teamwork climate in the training environment may be a precursor for successful teamwork training. This pilot project aimed to implement and test whether a new interdisciplinary and team-based approach would result in a positive training climate in the operating theatre. A 3-day educational module for training the complete surgical team of specialist nursing students and residents in safe teamwork skills in an authentic operative theatre, named Co-Op, was implemented in a university hospital. Participants' (n=22) perceptions of the 'safety climate' and the 'teamwork climate', together with their 'readiness for inter-professional learning', were measured to examine if the Co-Op module produced a positive training environment compared with the perceptions of a control group (n=11) attending the conventional curriculum. The participants' perceptions of 'safety climate' and 'teamwork climate' and their 'readiness for inter-professional learning' scores were significantly higher following the Co-Op module compared with their perceptions following the conventional curriculum, and compared with the control group's perceptions following the conventional curriculum. The Co-Op module improved 'safety climate' and 'teamwork climate' in the operating theatre, which suggests that a deliberate and designed educational intervention can shape a learning environment as a model for the establishment of a safety culture.
Multiple Site Action Research Case Studies: Practical and Theoretical Benefits and Challenges
ERIC Educational Resources Information Center
Pereira, Mary Delfin; Vallance, Roger
2006-01-01
A curriculum initiative project was implemented in four schools in Singapore over a span of five to six weeks during 2004. The project employed a number of different schools: girls only, boys only and co-educational schools; different levels of performance in a graded situation; multiple teachers and classes within each site; and control and…
Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions
Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners
1995-01-01
In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...
A Model for Communications Satellite System Architecture Assessment
2011-09-01
This is shown in Equation 4. The total system cost includes all development, acquisition, fielding, operations, maintenance and upgrades, and system...protection. A mathematical model was implemented to enable the analysis of communications satellite system architectures based on multiple system... implemented to enable the analysis of communications satellite system architectures based on multiple system attributes. Utilization of the model in
NASA Technical Reports Server (NTRS)
Harrison, P. Ann
1992-01-01
The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The VEG subgoal PROPORTION.GROUND.COVER has been completed and a number of additional techniques that infer the proportion ground cover of a sample have been implemented. Some techniques operate on sample data at a single wavelength. The techniques previously incorporated in VEG for other subgoals operated on data at a single wavelength so implementing the additional single wavelength techniques required no changes to the structure of VEG. Two techniques which use data at multiple wavelengths to infer proportion ground cover were also implemented. This work involved modifying the structure of VEG so that multiple wavelength techniques could be incorporated. All the new techniques were tested using both the VEG 'Research Mode' and the 'Automatic Mode.'
Implementation of collisions on GPU architecture in the Vorpal code
NASA Astrophysics Data System (ADS)
Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John
2017-10-01
The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
Waterman, Heather; Boaden, Ruth; Burey, Lorraine; Howells, Brook; Harvey, Gill; Humphreys, John; Rothwell, Katy; Spence, Michael
2015-02-13
Facilitators are known to be influential in the implementation of evidence-based health care (EBHC). However, little evidence exists on what it is that they do to support the implementation process. This research reports on how knowledge transfer associates (KTAs) working as part of the UK National Institute for Health Research 'Collaboration for Leadership in Applied Health Research and Care' for Greater Manchester (GM CLAHRC) facilitated the implementation of EBHC across several commissioning and provider health care agencies. A prospective co-operative inquiry with eight KTAs was carried out comprising of 11 regular group meetings where they reflected critically on their experiences. Twenty interviews were also conducted with other members of the GM CLAHRC Implementation Team to gain their perspectives of the KTAs facilitation role and process. There were four phases to the facilitation of EBHC on a large scale: (1) Assisting with the decision on what EBHC to implement, in this phase, KTAs pulled together people and disparate strands of information to facilitate a decision on which EBHC should be implemented; (2) Planning of the implementation of EBHC, in which KTAs spent time gathering additional information and going between key people to plan the implementation; (3) Coordinating and implementing EBHC when KTAs recruited general practices and people for the implementation of EBHC; and (4) Evaluating the EBHC which required the KTAs to set up (new) systems to gather data for analysis. Over time, the KTAs demonstrated growing confidence and skills in aspects of facilitation: research, interpersonal communication, project management and change management skills. The findings provide prospective empirical data on the large scale implementation of EBHC in primary care and community based organisations focusing on resources and processes involved. Detailed evidence shows facilitation is context dependent and that 'one size does not fits all'. Co-operative inquiry was a useful method to enhance KTAs learning. The evidence shows that facilitators need tailored support and education, during the process of implementation to provide them with a well-rounded skill-set. Our study was not designed to demonstrate how facilitators contribute to patient health outcomes thus further prospective research is required.
Software Framework for Advanced Power Plant Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Widmann; Sorin Munteanu; Aseem Jain
2010-08-01
This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. Thesemore » include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.« less
Tatters, Avery O; Howard, Meredith D A; Nagoda, Carey; Busse, Lilian; Gellene, Alyssa G; Caron, David A
2017-03-09
Blooms of toxic cyanobacteria in freshwater ecosystems have received considerable attention in recent years, but their occurrence and potential importance at the land-sea interface has not been widely recognized. Here we present the results of a survey of discrete samples conducted in more than fifty brackish water sites along the coastline of southern California. Our objectives were to characterize cyanobacterial community composition and determine if specific groups of cyanotoxins (anatoxins, cylindrospermopsins, microcystins, nodularins, and saxitoxins) were present. We report the identification of numerous potentially harmful taxa and the co-occurrence of multiple toxins, previously undocumented, at several locations. Our findings reveal a potential health concern based on the range of organisms present and the widespread prevalence of recognized toxic compounds. Our results raise concerns for recreation, harvesting of finfish and shellfish, and wildlife and desalination operations, highlighting the need for assessments and implementation of monitoring programs. Such programs appear to be particularly necessary in regions susceptible to urban influence.
Tatters, Avery O.; Howard, Meredith D.A.; Nagoda, Carey; Busse, Lilian; Gellene, Alyssa G.; Caron, David A.
2017-01-01
Blooms of toxic cyanobacteria in freshwater ecosystems have received considerable attention in recent years, but their occurrence and potential importance at the land-sea interface has not been widely recognized. Here we present the results of a survey of discrete samples conducted in more than fifty brackish water sites along the coastline of southern California. Our objectives were to characterize cyanobacterial community composition and determine if specific groups of cyanotoxins (anatoxins, cylindrospermopsins, microcystins, nodularins, and saxitoxins) were present. We report the identification of numerous potentially harmful taxa and the co-occurrence of multiple toxins, previously undocumented, at several locations. Our findings reveal a potential health concern based on the range of organisms present and the widespread prevalence of recognized toxic compounds. Our results raise concerns for recreation, harvesting of finfish and shellfish, and wildlife and desalination operations, highlighting the need for assessments and implementation of monitoring programs. Such programs appear to be particularly necessary in regions susceptible to urban influence. PMID:28282935
ERIC Educational Resources Information Center
Ayres, Marie-Louise
2005-01-01
AustLit: Australian Literature Gateway--the world's first major FRBR implementation--was developed as a co-operative service involving eight universities and the National Library of Australia in 2000-2001. This paper traces the reasons for adopting the FRBR information model, implementation experiences, and user responses to the service. The paper…
VLSI implementation of RSA encryption system using ancient Indian Vedic mathematics
NASA Astrophysics Data System (ADS)
Thapliyal, Himanshu; Srinivas, M. B.
2005-06-01
This paper proposes the hardware implementation of RSA encryption/decryption algorithm using the algorithms of Ancient Indian Vedic Mathematics that have been modified to improve performance. The recently proposed hierarchical overlay multiplier architecture is used in the RSA circuitry for multiplication operation. The most significant aspect of the paper is the development of a division architecture based on Straight Division algorithm of Ancient Indian Vedic Mathematics and embedding it in RSA encryption/decryption circuitry for improved efficiency. The coding is done in Verilog HDL and the FPGA synthesis is done using Xilinx Spartan library. The results show that RSA circuitry implemented using Vedic division and multiplication is efficient in terms of area/speed compared to its implementation using conventional multiplication and division architectures.
40 CFR 75.35 - Missing data procedures for CO2.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Missing data procedures for CO2. 75.35... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.35 Missing data... the 720 quality-assured monitor operating hours preceding implementation of the standard missing data...
40 CFR 75.35 - Missing data procedures for CO2.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Missing data procedures for CO2. 75.35... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.35 Missing data... the 720 quality-assured monitor operating hours preceding implementation of the standard missing data...
40 CFR 75.35 - Missing data procedures for CO 2.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Missing data procedures for CO 2. 75... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.35 Missing data... the 720 quality-assured monitor operating hours preceding implementation of the standard missing data...
40 CFR 75.35 - Missing data procedures for CO2.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Missing data procedures for CO2. 75.35... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.35 Missing data... the 720 quality-assured monitor operating hours preceding implementation of the standard missing data...
40 CFR 75.35 - Missing data procedures for CO 2.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Missing data procedures for CO 2. 75... (CONTINUED) CONTINUOUS EMISSION MONITORING Missing Data Substitution Procedures § 75.35 Missing data... the 720 quality-assured monitor operating hours preceding implementation of the standard missing data...
Integrity Verification for Multiple Data Copies in Cloud Storage Based on Spatiotemporal Chaos
NASA Astrophysics Data System (ADS)
Long, Min; Li, You; Peng, Fei
Aiming to strike for a balance between the security, efficiency and availability of the data verification in cloud storage, a novel integrity verification scheme based on spatiotemporal chaos is proposed for multiple data copies. Spatiotemporal chaos is implemented for node calculation of the binary tree, and the location of the data in the cloud is verified. Meanwhile, dynamic operation can be made to the data. Furthermore, blind information is used to prevent a third-party auditor (TPA) leakage of the users’ data privacy in a public auditing process. Performance analysis and discussion indicate that it is secure and efficient, and it supports dynamic operation and the integrity verification of multiple copies of data. It has a great potential to be implemented in cloud storage services.
Implementing Audio Digital Feedback Loop Using the National Instruments RIO System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, G.; Byrd, J. M.
2006-11-20
Development of system for high precision RF distribution and laser synchronization at Berkeley Lab has been ongoing for several years. Successful operation of these systems requires multiple audio bandwidth feedback loops running at relatively high gains. Stable operation of the feedback loops requires careful design of the feedback transfer function. To allow for flexible and compact implementation, we have developed digital feedback loops on the National Instruments Reconfigurable Input/Output (RIO) platform. This platform uses an FPGA and multiple I/Os that can provide eight parallel channels running different filters. We present the design and preliminary experimental results of this system.
Datta, Asit K; Munshi, Soumika
2002-03-10
Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented, by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks have simplified the architecture as well as reduced the cost and complexity.
Al Sadat, Wajdi I; Archer, Lynden A
2016-07-01
Economical and efficient carbon capture, utilization, and sequestration technologies are a requirement for successful implementation of global action plans to reduce carbon emissions and to mitigate climate change. These technologies are also essential for longer-term use of fossil fuels while reducing the associated carbon footprint. We demonstrate an O2-assisted Al/CO2 electrochemical cell as a new approach to sequester CO2 emissions and, at the same time, to generate substantial amounts of electrical energy. We report on the fundamental principles that guide operations of these cells using multiple intrusive electrochemical and physical analytical methods, including chronopotentiometry, cyclic voltammetry, direct analysis in real-time mass spectrometry, energy-dispersive x-ray spectroscopy, x-ray photoelectron spectroscopy, and coupled thermogravimetric analysis-Fourier transform infrared spectroscopy. On this basis, we demonstrate that an electrochemical cell that uses metallic aluminum as anode and a carbon dioxide/oxygen gas mixture as the active material in the cathode provides a path toward electrochemical generation of a valuable (C2) species and electrical energy. Specifically, we show that the cell first reduces O2 at the cathode to form superoxide intermediates. Chemical reaction of the superoxide with CO2 sequesters the CO2 in the form of aluminum oxalate, Al2(C2O4)3, as the dominant product. On the basis of an analysis of the overall CO2 footprint, which considers emissions associated with the production of the aluminum anode and the CO2 captured/abated by the Al/CO2-O2 electrochemical cell, we conclude that the proposed process offers an important strategy for net reduction of CO2 emissions.
Al Sadat, Wajdi I.; Archer, Lynden A.
2016-01-01
Economical and efficient carbon capture, utilization, and sequestration technologies are a requirement for successful implementation of global action plans to reduce carbon emissions and to mitigate climate change. These technologies are also essential for longer-term use of fossil fuels while reducing the associated carbon footprint. We demonstrate an O2-assisted Al/CO2 electrochemical cell as a new approach to sequester CO2 emissions and, at the same time, to generate substantial amounts of electrical energy. We report on the fundamental principles that guide operations of these cells using multiple intrusive electrochemical and physical analytical methods, including chronopotentiometry, cyclic voltammetry, direct analysis in real-time mass spectrometry, energy-dispersive x-ray spectroscopy, x-ray photoelectron spectroscopy, and coupled thermogravimetric analysis–Fourier transform infrared spectroscopy. On this basis, we demonstrate that an electrochemical cell that uses metallic aluminum as anode and a carbon dioxide/oxygen gas mixture as the active material in the cathode provides a path toward electrochemical generation of a valuable (C2) species and electrical energy. Specifically, we show that the cell first reduces O2 at the cathode to form superoxide intermediates. Chemical reaction of the superoxide with CO2 sequesters the CO2 in the form of aluminum oxalate, Al2(C2O4)3, as the dominant product. On the basis of an analysis of the overall CO2 footprint, which considers emissions associated with the production of the aluminum anode and the CO2 captured/abated by the Al/CO2-O2 electrochemical cell, we conclude that the proposed process offers an important strategy for net reduction of CO2 emissions. PMID:27453949
Multiple Attempts for Online Assessments in an Operations Management Course: An Exploration
ERIC Educational Resources Information Center
Orchard, Ryan K.
2016-01-01
In learning management systems, tools for online homework assessments include a number of alternatives for the assessment settings, including the ability to permit students to attempt an assessment multiple times, with options for how the multiple attempts are administered. A specific implementation of online assessments in an introductory…
Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R
2011-01-01
This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented.1 The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
NASA Astrophysics Data System (ADS)
Franchetti, Franz; Sandryhaila, Aliaksei; Johnson, Jeremy R.
2014-06-01
In this paper we introduce High Assurance SPIRAL to solve the last mile problem for the synthesis of high assurance implementations of controllers for vehicular systems that are executed in today's and future embedded and high performance embedded system processors. High Assurance SPIRAL is a scalable methodology to translate a high level specification of a high assurance controller into a highly resource-efficient, platform-adapted, verified control software implementation for a given platform in a language like C or C++. High Assurance SPIRAL proves that the implementation is equivalent to the specification written in the control engineer's domain language. Our approach scales to problems involving floating-point calculations and provides highly optimized synthesized code. It is possible to estimate the available headroom to enable assurance/performance trade-offs under real-time constraints, and enables the synthesis of multiple implementation variants to make attacks harder. At the core of High Assurance SPIRAL is the Hybrid Control Operator Language (HCOL) that leverages advanced mathematical constructs expressing the controller specification to provide high quality translation capabilities. Combined with a verified/certified compiler, High Assurance SPIRAL provides a comprehensive complete solution to the efficient synthesis of verifiable high assurance controllers. We demonstrate High Assurance SPIRALs capability by co-synthesizing proofs and implementations for attack detection and sensor spoofing algorithms and deploy the code as ROS nodes on the Landshark unmanned ground vehicle and on a Synthetic Car in a real-time simulator.
Herath, Damayanthi; Tang, Sen-Lin; Tandon, Kshitij; Ackland, David; Halgamuge, Saman Kumara
2017-12-28
In metagenomics, the separation of nucleotide sequences belonging to an individual or closely matched populations is termed binning. Binning helps the evaluation of underlying microbial population structure as well as the recovery of individual genomes from a sample of uncultivable microbial organisms. Both supervised and unsupervised learning methods have been employed in binning; however, characterizing a metagenomic sample containing multiple strains remains a significant challenge. In this study, we designed and implemented a new workflow, Coverage and composition based binning of Metagenomes (CoMet), for binning contigs in a single metagenomic sample. CoMet utilizes coverage values and the compositional features of metagenomic contigs. The binning strategy in CoMet includes the initial grouping of contigs in guanine-cytosine (GC) content-coverage space and refinement of bins in tetranucleotide frequencies space in a purely unsupervised manner. With CoMet, the clustering algorithm DBSCAN is employed for binning contigs. The performances of CoMet were compared against four existing approaches for binning a single metagenomic sample, including MaxBin, Metawatt, MyCC (default) and MyCC (coverage) using multiple datasets including a sample comprised of multiple strains. Binning methods based on both compositional features and coverages of contigs had higher performances than the method which is based only on compositional features of contigs. CoMet yielded higher or comparable precision in comparison to the existing binning methods on benchmark datasets of varying complexities. MyCC (coverage) had the highest ranking score in F1-score. However, the performances of CoMet were higher than MyCC (coverage) on the dataset containing multiple strains. Furthermore, CoMet recovered contigs of more species and was 18 - 39% higher in precision than the compared existing methods in discriminating species from the sample of multiple strains. CoMet resulted in higher precision than MyCC (default) and MyCC (coverage) on a real metagenome. The approach proposed with CoMet for binning contigs, improves the precision of binning while characterizing more species in a single metagenomic sample and in a sample containing multiple strains. The F1-scores obtained from different binning strategies vary with different datasets; however, CoMet yields the highest F1-score with a sample comprised of multiple strains.
NASA Technical Reports Server (NTRS)
Schubert, F. H.; Wynveen, R. A.; Hallick, T. M.
1976-01-01
Regenerative processes for the revitalization of spacecraft atmospheres require an Oxygen Reclamation System (ORS) for the collection of carbon dioxide and water vapor and the recovery of oxygen from these metabolic products. Three life support subsystems uniquely qualified to form such an ORS are an Electrochemical CO2 Depolarized Concentrator (EDC), a CO2 Reduction Subsystem (BRS) and a Water Electrolysis Subsystem (WES). A program to develop and test the interface hardware and control concepts necessary for integrated operation of a four man capacity EDC with a four man capacity BRS was successfully completed. The control concept implemented proved successful in operating the EDC with the BRS for both constant CO2 loading as well as variable CO2 loading, based on a repetitive mission profile of the Space Station Prototype (SSP).
NASA Astrophysics Data System (ADS)
Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana
2017-12-01
New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
Solving large sparse eigenvalue problems on supercomputers
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef
1988-01-01
An important problem in scientific computing consists in finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques on appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix by vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and the disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
Laser driving and data processing concept for mobile trace gas sensing: Design and implementation
NASA Astrophysics Data System (ADS)
Liu, Chang; Tuzson, Béla; Scheidegger, Philipp; Looser, Herbert; Bereiter, Bernhard; Graf, Manuel; Hundt, Morten; Aseev, Oleg; Maas, Deran; Emmenegger, Lukas
2018-06-01
High precision mobile sensing of multi-species gases is greatly demanded in a wide range of applications. Although quantum cascade laser absorption spectroscopy demonstrates excellent field-deployment capabilities for gas sensing, the implementation of this measurement technique into sensor-like portable instrumentation still remains challenging. In this paper, two crucial elements, the laser driving and data acquisition electronics, are addressed. Therefore, we exploit the benefits of the time-division multiplexed intermittent continuous wave driving concept and the real-time signal pre-processing capabilities of a commercial System-on-Chip (SoC, Red Pitaya). We describe a re-designed current driver that offers a universal solution for operating a wide range of multi-wavelength quantum cascade laser device types and allows stacking for the purpose of multiple laser configurations. Its adaptation to the various driving situations is enabled by numerous field programmable gate array (FPGA) functionalities that were developed on the SoC, such as flexible generation of a large variety of synchronized trigger signals and digital inputs/outputs (DIOs). The same SoC is used to sample the spectroscopic signal at rates up to 125 MS/s with 14-bit resolution. Additional FPGA functionalities were implemented to enable on-board averaging of consecutive spectral scans in real-time, resulting in optimized memory bandwidth and hardware resource utilisation and autonomous system operation. Thus, we demonstrate how a cost-effective, compact, and commercial SoC can successfully be adapted to obtain a fully operational research-grade laser spectrometer. The overall system performance was examined in a spectroscopic setup by analyzing low pressure absorption features of CO2 at 4.3 μm.
Exploration Analysis of Carbon Dioxide Levels and Ultrasound Measures of the Eye During ISS Missions
NASA Technical Reports Server (NTRS)
Young, M.; Mason, S.; Schaefer, C.; Wear, M. L.; Sargsyan, A.; Garcia, K.; Coble, C.; Gruschkus, S.; Law, J.; Alexander, D.;
2016-01-01
Enhanced screening for the Visual Impairment/Intracranial Pressure (VIIP) Syndrome, including in-flight ultrasound, was implemented in 2010 to better characterize the changes in vision observed in some long-duration crewmembers. Suggested possible risk factors for VIIP include cardiovascular changes, diet, anatomical and genetic factors, and environmental conditions. As a potent vasodilator, carbon dioxide (CO (sub 2)), which is chronically elevated on the International Space Station (ISS) relative to typical indoor and outdoor ambient levels on Earth, seems a plausible contributor to VIIP. In an effort to understand the possible associations between CO (sub 2) and VIIP, this study analyzes the relationship between ambient CO (sub 2) levels on ISS and ultrasound measures of the eye obtained from ISS fliers. CO (sub 2) measurements will be pulled directly from Operational Data Reduction Complex for the Lab and Node 3 major constituent analyzers (MCAs) on ISS or from sensors located in the European Columbus module, as available. CO (sub 2) measures between ultrasound sessions will be summarized using standard time series class metrics in MATLAB including time-weighted means and variances. Cumulative CO (sub 2) exposure metrics will also be developed. Regression analyses will be used to quantify the relationships between the CO (sub 2) metrics and specific ultrasound measures. Generalized estimating equations will adjust for the repeated measures within individuals. Multiple imputation techniques will be used to adjust for any possible biases in missing data for either CO (sub 2) or ultrasound measures. These analyses will elucidate the possible relationship between CO (sub 2) and changes in vision and also inform future analysis of inflight VIIP data.
Catalyst for carbon monoxide oxidation
NASA Technical Reports Server (NTRS)
Upchurch, Billy T. (Inventor); Miller, Irvin M. (Inventor); Brown, David R. (Inventor); Davis, Patricia (Inventor); Schryer, David R. (Inventor); Brown, Kenneth G. (Inventor); Vannorman, John D. (Inventor)
1990-01-01
A catalyst is disclosed for the combination of CO and O2 to form CO2, which includes a platinum group metal (e.g., platinum); a reducable metal oxide having multiple valence states (e.g., SnO2); and a compound which can bind water to its structure (e.g., silica gel). This catalyst is ideally suited for application to high-powered pulsed, CO2 lasers operating in a sealed or closed-cycle condition.
Foglia, Robert P; Alder, Adam C; Ruiz, Gardito
2013-01-01
Perioperative services require the orchestration of multiple staff, space and equipment. Our aim was to identify whether the implementation of operations management and an electronic health record (EHR) improved perioperative performance. We compared 2006, pre operations management and EHR implementation, to 2010, post implementation. Operations management consisted of: communication to staff of perioperative vision and metrics, obtaining credible data and analysis, and the implementation of performance improvement processes. The EHR allows: identification of delays and the accountable service or person, collection and collation of data for analysis in multiple venues, including operational, financial, and quality. Metrics assessed included: operative cases, first case on time starts; reason for delay, and operating revenue. In 2006, 19,148 operations were performed (13,545 in the Main Operating Room (OR) area, and 5603, at satellite locations); first case on time starts were 12%; reasons for first case delay were not identifiable; and operating revenue was $115.8M overall, with $78.1M in the Main OR area. In 2010, cases increased to 25,856 (+35%); Main OR area increased to 13,986 (+3%); first case on time starts improved to 46%; operations outside the Main OR area increased to 11,870 (112%); case delays were ascribed to nurses 7%, anesthesiologists 22%, surgeons 33%, and other (patient, hospital) 38%. Five surgeons (7%) accounted for 29% of surgical delays and 4 anesthesiologists (8%) for 45% of anesthesiology delays; operating revenue increased to $177.3M (+53%) overall, and in the Main OR area rose to $101.5M (+30%). The use of operations management and EHR resulted in improved processes, credible data, promptly sharing the metrics, and pinpointing individual provider performance. Implementation of these strategies allowed us to shift cases between facilities, reallocate OR blocks, increase first case on time starts four fold and operative cases by 35%, and these changes were associated with a 53% increase in operating revenue. The fact that revenue increase was greater than case volume (53% vs. 35%) speaks for improved performance. Copyright © 2013 Elsevier Inc. All rights reserved.
CHD associated with syndromic diagnoses: peri-operative risk factors and early outcomes
Landis, Benjamin J.; Cooper, David S.; Hinton, Robert B.
2016-01-01
CHD is frequently associated with a genetic syndrome. These syndromes often present specific cardiovascular and non-cardiovascular co-morbidities that confer significant peri-operative risks affecting multiple organ systems. Although surgical outcomes have improved over time, these co-morbidities continue to contribute substantially to poor peri-operative mortality and morbidity outcomes. Peri-operative morbidity may have long-standing ramifications on neurodevelopment and overall health. Recognising the cardiovascular and non-cardiovascular risks associated with specific syndromic diagnoses will facilitate expectant management, early detection of clinical problems, and improved outcomes – for example, the development of syndrome-based protocols for peri-operative evaluation and prophylactic actions may improve outcomes for the more frequently encountered syndromes such as 22q11 deletion syndrome. PMID:26345374
Rycroft-Malone, Jo; Burton, Christopher R; Wilkinson, Joyce; Harvey, Gill; McCormack, Brendan; Baker, Richard; Dopson, Sue; Graham, Ian D; Staniszewska, Sophie; Thompson, Carl; Ariss, Steven; Melville-Richards, Lucy; Williams, Lynne
2016-02-09
Increasingly, it is being suggested that translational gaps might be eradicated or narrowed by bringing research users and producers closer together, a theory that is largely untested. This paper reports a national study to fill a gap in the evidence about the conditions, processes and outcomes related to collaboration and implementation. A longitudinal realist evaluation using multiple qualitative methods case studies was conducted with three Collaborations for Leadership in Applied Health Research in Care (England). Data were collected over four rounds of theory development, refinement and testing. Over 200 participants were involved in semi-structured interviews, non-participant observations of events and meetings, and stakeholder engagement. A combined inductive and deductive data analysis process was focused on proposition refinement and testing iteratively over data collection rounds. The quality of existing relationships between higher education and local health service, and views about whether implementation was a collaborative act, created a path dependency. Where implementation was perceived to be removed from service and there was a lack of organisational connections, this resulted in a focus on knowledge production and transfer, rather than co-production. The collaborations' architectures were counterproductive because they did not facilitate connectivity and had emphasised professional and epistemic boundaries. More distributed leadership was associated with greater potential for engagement. The creation of boundary spanning roles was the most visible investment in implementation, and credible individuals in these roles resulted in cross-boundary work, in facilitation and in direct impacts. The academic-practice divide played out strongly as a context for motivation to engage, in that 'what's in it for me' resulted in variable levels of engagement along a co-operation-collaboration continuum. Learning within and across collaborations was patchy depending on attention to evaluation. These collaborations did not emerge from a vacuum, and they needed time to learn and develop. Their life cycle started with their position on collaboration, knowledge and implementation. More impactful attempts at collective action in implementation might be determined by the deliberate alignment of a number of features, including foundational relationships, vision, values, structures and processes and views about the nature of the collaboration and implementation.
Coffman, Kirsten E.; Taylor, Bryan J.; Carlson, Alex R.; Wentz, Robert J.; Johnson, Bruce D.
2015-01-01
Alveolar-capillary membrane conductance (DM,CO) and pulmonary-capillary blood volume (VC) are calculated via lung diffusing capacity for carbon monoxide (DLCO) and nitric oxide (DLNO) using the single breath, single oxygen tension (single-FiO2) method. However, two calculation parameters, the reaction rate of carbon monoxide with blood (θCO) and the DM,NO/DM,CO ratio (α-ratio), are controversial. This study systematically determined optimal θCO and α-ratio values to be used in the single-FiO2 method that yielded the most similar DM,CO and VC values compared to the ‘gold-standard’ multiple-FiO2 method. Eleven healthy subjects performed single breath DLCO/DLNO maneuvers at rest and during exercise. DM,CO and VC were calculated via the single-FiO2 and multiple-FiO2 methods by implementing seven θCO equations and a range of previously reported α-ratios. The RP θCO equation (Reeves and Park, Respiration physiology 88:1–21, 1992.) and an α-ratio of 4.0–4.4 yielded DM,CO and VC values that were most similar between methods. The RP θCO equation and an experimental α-ratio should be used in future studies. PMID:26521031
Space Shuttle Avionics: a Redundant IMU On-Board Checkout and Redundancy Management System
NASA Technical Reports Server (NTRS)
Mckern, R. A.; Brown, D. G.; Dove, D. W.; Gilmore, J. P.; Landey, M. E.; Musoff, H.; Amand, J. S.; Vincent, K. T., Jr.
1972-01-01
A failure detection and isolation philosophy applicable to multiple off-the-shelf gimbaled IMUs are discussed. The equations developed are implemented and evaluated with actual shuttle trajectory simulations. The results of these simulations are presented for both powered and unpowered flight phases and at operational levels of four, three, and two IMUs. A multiple system checkout philosophy is developed and simulation results presented. The final task develops a laboratory test plan and defines the hardware and software requirements to implement an actual multiple system and evaluate the interim study results for space shuttle application.
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criteria combined with learning factor were used to control the parameter dynamic adaptive adjusting process. The crossover operation of genetic algorithm was utilized to guarantee the population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, the simulation data and real radar clutter data are both implemented. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
Global interrupt and barrier networks
Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E; Heidelberger, Philip; Kopcsay, Gerard V.; Steinmacher-Burow, Burkhard D.; Takken, Todd E.
2008-10-28
A system and method for generating global asynchronous signals in a computing structure. Particularly, a global interrupt and barrier network is implemented that implements logic for generating global interrupt and barrier signals for controlling global asynchronous operations performed by processing elements at selected processing nodes of a computing structure in accordance with a processing algorithm; and includes the physical interconnecting of the processing nodes for communicating the global interrupt and barrier signals to the elements via low-latency paths. The global asynchronous signals respectively initiate interrupt and barrier operations at the processing nodes at times selected for optimizing performance of the processing algorithms. In one embodiment, the global interrupt and barrier network is implemented in a scalable, massively parallel supercomputing device structure comprising a plurality of processing nodes interconnected by multiple independent networks, with each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations. One multiple independent network includes a global tree network for enabling high-speed global tree communications among global tree network nodes or sub-trees thereof. The global interrupt and barrier network may operate in parallel with the global tree network for providing global asynchronous sideband signals.
Evaluating OpenSHMEM Explicit Remote Memory Access Operations and Merged Requests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boehm, Swen; Pophale, Swaroop S; Gorentla Venkata, Manjunath
The OpenSHMEM Library Specification has evolved consid- erably since version 1.0. Recently, non-blocking implicit Remote Memory Access (RMA) operations were introduced in OpenSHMEM 1.3. These provide a way to achieve better overlap between communication and computation. However, the implicit non-blocking operations do not pro- vide a separate handle to track and complete the individual RMA opera- tions. They are guaranteed to be completed after either a shmem quiet(), shmem barrier() or a shmem barrier all() is called. These are global com- pletion and synchronization operations. Though this semantic is expected to achieve a higher message rate for the applications, themore » drawback is that it does not allow fine-grained control over the completion of RMA operations. In this paper, first, we introduce non-blocking RMA operations with requests, where each operation has an explicit request to track and com- plete the operation. Second, we introduce interfaces to merge multiple requests into a single request handle. The merged request tracks multiple user-selected RMA operations, which provides the flexibility of tracking related communication operations with one request handle. Lastly, we explore the implications in terms of performance, productivity, usability and the possibility of defining different patterns of communication via merging of requests. Our experimental results show that a well designed and implemented OpenSHMEM stack can hide the overhead of allocating and managing the requests. The latency of RMA operations with requests is similar to blocking and implicit non-blocking RMA operations. We test our implementation with the Scalable Synthetic Compact Applications (SSCA #1) benchmark and observe that using RMA operations with requests and merging of these requests outperform the implementation using blocking RMA operations and implicit non-blocking operations by 49% and 74% respectively.« less
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
ERIC Educational Resources Information Center
Simmons, Ben; Watson, Debbie
2015-01-01
Children with profound and multiple learning disabilities (PMLD) are said to experience severe congenital impairments to consciousness and cognition stemming from neurological damage. Such children are understood as operating at the pre-verbal stages of development, and research in the field typically draws conceptual resources from psychology to…
Real-time associative memory with photorefractive crystal KNSBN and liquid-crystal optical switches
NASA Astrophysics Data System (ADS)
Xu, Haiying; Yuan, Yang Y.; Yu, Youlong; Xu, Kebin; Xu, Yuhuan; Zhu, De-Rui
1990-05-01
We present a real-time holographic associative memory implemented with photorefractive KNSBN : Co crystal as memory element and liquid crystal electrooptical switches as reflective thresholding device. The experimental results show that the system has real-time multiple-image storage and recall function.
New Mexico Math Remediation Taskforce Report
ERIC Educational Resources Information Center
New Mexico Higher Education Department, 2016
2016-01-01
The Math Remediation Task Force is comprised of faculty from two-year, four-year comprehensive, and four-year flagship higher education institutions throughout the state of New Mexico. Its members have varying levels of experience with designing/implementing multiple math remediation programs including traditional, co-requisite and acceleration…
Coffman, Kirsten E; Taylor, Bryan J; Carlson, Alex R; Wentz, Robert J; Johnson, Bruce D
2016-01-15
Alveolar-capillary membrane conductance (D(M,CO)) and pulmonary-capillary blood volume (V(C)) are calculated via lung diffusing capacity for carbon monoxide (DL(CO)) and nitric oxide (DL(NO)) using the single breath, single oxygen tension (single-FiO2) method. However, two calculation parameters, the reaction rate of carbon monoxide with blood (θ(CO)) and the D(M,NO)/D(M,CO) ratio (α-ratio), are controversial. This study systematically determined optimal θ(CO) and α-ratio values to be used in the single-FiO2 method that yielded the most similar D(M,CO) and V(C) values compared to the 'gold-standard' multiple-FiO2 method. Eleven healthy subjects performed single breath DL(CO)/DL(NO) maneuvers at rest and during exercise. D(M,CO) and V(C) were calculated via the single-FiO2 and multiple-FiO2 methods by implementing seven θ(CO) equations and a range of previously reported α-ratios. The RP θ(CO) equation (Reeves, R.B., Park, H.K., 1992. Respiration Physiology 88 1-21) and an α-ratio of 4.0-4.4 yielded DM,CO and VC values that were most similar between methods. The RP θ(CO) equation and an experimental α-ratio should be used in future studies. Copyright © 2015 Elsevier B.V. All rights reserved.
Operational viewpoint of the X-29A digital flight control system
NASA Technical Reports Server (NTRS)
Chacon, Vince; Mcbride, David
1988-01-01
In the past few years many flight control systems have been implemented as full-authority, full-time digital systems. The digital design has allowed flight control systems to make use of many enhanced elements that are generally considered too complex to implement in an analog system. Examples of these elements are redundant information exchanged between channels to allow for continued operation after multiple failures and multiple variable gain schedules to optimize control of the aircraft throughout its flight envelope and in all flight modes. The introduction of the digital system for flight control also created the problem of obtaining information from the system in an understandable and useful format. This paper presents how the X-29A was dealt with during its operations at NASA Ames-Dryden Flight Research Facility. A brief description of the X-29A control system, a discussion of the tools developed to aid in daily operations, and the troubleshooting of the aircraft are included.
"Development Radar": The Co-Configuration of a Tool in a Learning Network
ERIC Educational Resources Information Center
Toiviainen, Hanna; Kerosuo, Hannele; Syrjala, Tuula
2009-01-01
Purpose: The paper aims to argue that new tools are needed for operating, developing and learning in work-life networks where academic and practice knowledge are intertwined in multiple levels of and in boundary-crossing across activities. At best, tools for learning are designed in a process of co-configuration, as the analysis of one tool,…
Object detection system using SPAD proximity detectors
NASA Astrophysics Data System (ADS)
Stark, Laurence; Raynor, Jeffrey M.; Henderson, Robert K.
2011-10-01
This paper presents an object detection system based upon the use of multiple single photon avalanche diode (SPAD) proximity sensors operating upon the time-of-flight (ToF) principle, whereby the co-ordinates of a target object in a coordinate system relative to the assembly are calculated. The system is similar to a touch screen system in form and operation except that the lack of requirement of a physical sensing surface provides a novel advantage over most existing touch screen technologies. The sensors are controlled by FPGA-based firmware and each proximity sensor in the system measures the range from the sensor to the target object. A software algorithm is implemented to calculate the x-y coordinates of the target object based on the distance measurements from at least two separate sensors and the known relative positions of these sensors. Existing proximity sensors were capable of determining the distance to an object with centimetric accuracy and were modified to obtain a wide field of view in the x-y axes with low beam angle in z in order to provide a detection area as large as possible. Design and implementation of the firmware, electronic hardware, mechanics and optics are covered in the paper. Possible future work would include characterisation with alternative designs of proximity sensors, as this is the component which determines the highest achievable accur1acy of the system.
Development status of regenerable solid amine CO2 control systems
NASA Technical Reports Server (NTRS)
Colling, A. K., Jr.; Nalette, T. A.; Cusick, R. J.; Reysa, R. P.
1985-01-01
The development history of solid amine/water desorbed (SAWD) CO2 control systems is reviewed. The design of the preprototype SAWD I CO2 system on the basis of a three-man metabolic load at the 3.8 mm Hg ambient CO2 level, and the functions of the CO2 removal, CO2 storage/delivery, controller, and life test laboratory support packages are described. The development of a full-scale multiple canister SAWD II preprototype system, which is capable of conducting the CO2 removal/concentration function in a closed-loop atmosphere revitalization system during zero-gravity operation, is examined. The operation of the SAWD II system, including the absorption and desorption cycles, is analyzed. A reduction in the thermal mass of the canister and the system's energy transfer technique result in efficient energy use. The polyether foam, nylon felt, nickel foam, spring retained, and metal bellows bed tests performed to determine the design of the zero-gravity canister are studied; metal bellows are selected for the canister's configuration.
Focusing management in implementing a smoking ban in a university hospital in Sweden.
Ullén, H; Höijer, Y; Ainetdin, T; Tillgren, P
2002-04-01
To explore the impact of various steps when introducing a smoking ban at the Karolinska Hospital (1000 beds; 6000 employees) in Stockholm, Sweden, a multiple evaluation strategy was performed over 5 years. All heads of clinical departments (N = 41) and a random sample of employees (n = 517) and a convenience sample of hospital labour managers (n = 17) were separately addressed through questionnaire surveys at different time intervals after the introduction of the ban in 1992. An observational and interview study completed the follow-up. The implementation process was supplemented by a comprehensive information strategy over 5 years. The two most important steps during implementation were management support and focus on environmental tobacco. The ban was well known at introduction. Heads of clinical departments reported a third of staff to be satisfied with the restrictions. In contrast, the staff survey revealed 62% to be positive. A shift in favour of a radical tobacco-free hospital was perceived during follow-up. Co-operation between hospital board, heads of clinical departments and local labour managers proved successful. The consecutive evaluations served as tools in labour management and contributed to staff compliance. A total ban, including the selling of tobacco and smoking in the hospital grounds is still to be achieved.
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. Intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with artificial neural network and genetic programing. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
Evolution from Packet Utilisation to Mission Operation Services
NASA Astrophysics Data System (ADS)
Cooper, Sam; Forwell, Stuart D.
2012-08-01
The ECSS Packet Utilisation Standard (PUS) and the forthcoming CCSDS Mission Operations (MO) Services occupy a very similar domain. This paper discusses the history of the two standards, their relationship and how the two can co-exist in the near term and long terms. It also covers implications with implementing MO services in current and future on-board architectures.
Program Evaluation of Math Factual Operations for Understanding
ERIC Educational Resources Information Center
Rouse, Julie A.
2013-01-01
Deficiencies in mathematics standardized test scores prompted school district policymakers to consider implementing a program designed to increase students' basic multiplication fact skills. This study was an evaluation of the Math Factual Operations for Understanding program. The program, marketed with a martial arts theme, was intended to…
Configuration Control of a Mobile Dextrous Robot: Real-Time Implementation and Experimentation
NASA Technical Reports Server (NTRS)
Lim, David; Seraji, Homayoun
1996-01-01
This paper describes the design and implementation of a real-time control system with multiple modes of operation for a mobile dexterous manipulator. The manipulator under study is a kinematically redundant seven degree-of-freedom arm from Robotics Research Corporation, mounted on a one degree-of-freedom motorized platform.
Mello, Michelle M; Armstrong, Sarah J; Greenberg, Yelena; McCotter, Patricia I; Gallagher, Thomas H
2016-12-01
To implement a communication-and-resolution program (CRP) in a setting in which liability insurers and health care facilities must collaborate to resolve incidents involving a facility and separately insured clinicians. Six hospitals and clinics and a liability insurer in Washington State. Sites designed and implemented CRPs and contributed information about cases and operational challenges over 20 months. Data were qualitatively analyzed. Data from interviews with personnel responsible for CRP implementation were triangulated with data on program cases collected by sites and notes recorded during meetings with sites and among project team members. Sites experienced small victories in resolving particular cases and streamlining some working relationships, but they were unable to successfully implement a collaborative CRP. Barriers included the insurer's distance from the point of care, passive rather than active support from top leaders, coordinating across departments and organizations, workload, nonparticipation by some physicians, and overcoming distrust. Operating CRPs where multiple organizations must collaborate can be highly challenging. Success likely requires several preconditions, including preexisting trust among organizations, active leadership engagement, physicians' commitment to participate, mechanisms for quickly transmitting information to insurers, tolerance for missteps, and clear protocols for joint investigations and resolutions. © Health Research and Educational Trust.
Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project
Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.
2011-01-01
Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. ?? 2011 Published by Elsevier Ltd.
Ball, David A; Lux, Matthew W; Graef, Russell R; Peterson, Matthew W; Valenti, Jane D; Dileo, John; Peccoud, Jean
2010-01-01
The concept of co-design is common in engineering, where it is necessary, for example, to determine the optimal partitioning between hardware and software of the implementation of a system features. Here we propose to adapt co-design methodologies for synthetic biology. As a test case, we have designed an environmental sensing device that detects the presence of three chemicals, and returns an output only if at least two of the three chemicals are present. We show that the logical operations can be implemented in three different design domains: (1) the transcriptional domain using synthetically designed hybrid promoters, (2) the protein domain using bi-molecular fluorescence complementation, and (3) the fluorescence domain using spectral unmixing and relying on electronic processing. We discuss how these heterogeneous design strategies could be formalized to develop co-design algorithms capable of identifying optimal designs meeting user specifications.
Spatial-Operator Algebra For Robotic Manipulators
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Milman, Mark H.
1991-01-01
Report discusses spatial-operator algebra developed in recent studies of mathematical modeling, control, and design of trajectories of robotic manipulators. Provides succinct representation of mathematically complicated interactions among multiple joints and links of manipulator, thereby relieving analyst of most of tedium of detailed algebraic manipulations. Presents analytical formulation of spatial-operator algebra, describes some specific applications, summarizes current research, and discusses implementation of spatial-operator algebra in the Ada programming language.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and cells the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
Selection and Implementation of Single Building EMCS (Energy Monitoring and Control Systems).
1983-08-01
Setpoint Night Setback 161 Figure 20: Dual Setpoint Night Setback/up 162 Figure 21: Centrifugal Chiller Reset 166 Figure 22: Centrifugal Chiller Capacity...Program outputs. Hot water temperature. Application notes. A dedicated local loop controller may be implemented. Chiller optimization . The chiller ... optimization program can be implemented in chilled water plants with multiple chillers . Based on chiller operating data and the energy input requirements
Family co-operation programme description.
Peine, H A; Terry, T
1990-01-01
Current parenting practices indicate a continuing trend towards less family interaction. Institutional attempts to intervene with parents often fail. The 'Family Co-operation Programme' provides a tangible method for families and schools to work together in preventing alcohol and drug abuse, by utilising the positive influence of the home and strengthening family relationships. The Board of Education for the State of Utah has tested and is currently implementing a unique, low-cost, alternative to impact on the home. Utilising a K-12 alcohol/drug abuse school-based curriculum, the child, based on his/her inclass training, becomes the resource for family co-operation activities. These include training in coping skills, decision-making, resistance to peer persuasion, increased self-esteem and alcohol/drug information. Grade level materials go home with the child, who returns a requested parent evaluation. Data for over one thousand families show the positive impact of the activities.
Concepts and data model for a co-operative neurovascular database.
Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J
2001-08-01
Problems of clinical management of neurovascular diseases are very complex. This is caused by the chronic character of the diseases, a long history of symptoms and diverse treatments. If patients are to benefit from treatment, then treatment decisions have to rely on reliable and accurate knowledge of the natural history of the disease and the various treatments. Recent developments in statistical methodology and experience from electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol to collect data on neurovascular diseases with technical as well as logistical aspects of implementing a database for neurovascular diseases are described. The database is designed as a co-operative tool of audit and research available to co-operating centres. When a database is linked to a systematic patient follow-up, it can be used to study prognosis. Careful analysis of patient outcome is valuable for decision-making.
Becerra, F E; Fan, J; Migdall, A
2013-01-01
Generalized quantum measurements implemented to allow for measurement outcomes termed inconclusive can perform perfect discrimination of non-orthogonal states, a task which is impossible using only measurements with definitive outcomes. Here we demonstrate such generalized quantum measurements for unambiguous discrimination of four non-orthogonal coherent states and obtain their quantum mechanical description, the positive-operator valued measure. For practical realizations of this positive-operator valued measure, where noise and realistic imperfections prevent perfect unambiguous discrimination, we show that our experimental implementation outperforms any ideal standard-quantum-limited measurement performing the same non-ideal unambiguous state discrimination task for coherent states with low mean photon numbers.
Delegation control of multiple unmanned systems
NASA Astrophysics Data System (ADS)
Flaherty, Susan R.; Shively, Robert J.
2010-04-01
Maturing technologies and complex payloads coupled with a future objective to reduce the logistics burden of current unmanned aerial systems (UAS) operations require a change to the 2-crew employment paradigm. Increased automation and operator supervisory control of unmanned systems have been advocated to meet the objective of reducing the crew requirements, while managing future technologies. Specifically, a delegation control employment strategy has resulted in reduced workload and higher situation awareness for single operators controlling multiple unmanned systems in empirical studies1,2. Delegation control is characterized by the ability for an operator to call a single "play" that initiates prescribed default actions for each vehicle and associated sensor related to a common mission goal. Based upon the effectiveness of delegation control in simulation, the U.S. Army Aeroflightdynamics Directorate (AFDD) developed a Delegation Control (DelCon) operator interface with voice recognition implementation for play selection, real-time play modification, and play status with automation transparency to enable single operator control of multiple unmanned systems in flight. AFDD successfully demonstrated delegation control in a Troops-in-Contact mission scenario at Ft. Ord in 2009. This summary showcases the effort as a beneficial advance in single operator control of multiple UAS.
Naeem, Muhammad Awais; Armutlulu, Andac; Imtiaz, Qasim; Donat, Felix; Schäublin, Robin; Kierzkowska, Agnieszka; Müller, Christoph R
2018-06-19
Calcium looping, a CO 2 capture technique, may offer a mid-term if not near-term solution to mitigate climate change, triggered by the yet increasing anthropogenic CO 2 emissions. A key requirement for the economic operation of calcium looping is the availability of highly effective CaO-based CO 2 sorbents. Here we report a facile synthesis route that yields hollow, MgO-stabilized, CaO microspheres featuring highly porous multishelled morphologies. As a thermal stabilizer, MgO minimized the sintering-induced decay of the sorbents' CO 2 capacity and ensured a stable CO 2 uptake over multiple operation cycles. Detailed electron microscopy-based analyses confirm a compositional homogeneity which is identified, together with the characteristics of its porous structure, as an essential feature to yield a high-performance sorbent. After 30 cycles of repeated CO 2 capture and sorbent regeneration, the best performing material requires as little as 11 wt.% MgO for structural stabilization and exceeds the CO 2 uptake of the limestone-derived reference material by ~500%.
Hinnant, J Benjamin; Nelson, Jackie A; O'Brien, Marion; Keane, Susan P; Calkins, Susan D
2013-01-01
We examined mother-child co-operative behaviour, children's emotion regulation and executive function, as well as combinations of these factors, as predictors of moral reasoning in 89 10-year-old children. Dyadic co-operation was coded from videotaped observations of laboratory puzzle and speech tasks. Emotion regulation was derived from maternal report, and executive functioning was assessed with the Tower of London task. Moral reasoning was coded during mother-child conversations about morally ambiguous, peer-conflict situations. Two significant interactions indicated that children from more co-operative dyads who also had higher executive function skills had higher moral reasoning scores than other children, and children lower in both emotion regulation and executive function had lower moral reasoning scores than other children. The results contribute to the literature on the multiple and interactive levels of influence on moral reasoning in childhood.
The Co-Construction of Cooperative Learning in Physical Education with Elementary Classroom Teachers
ERIC Educational Resources Information Center
Dyson, Ben P.; Colby, Rachel; Barratt, Mark
2016-01-01
The purpose of this study was to investigate generalist classroom elementary teachers' implementation of the Cooperative Learning (CL) pedagogical model into their physical education classes. The study used multiple sources of data drawing on qualitative data collection and data analysis research traditions (Miles, Huberman, & Saldana, 2014).…
USDA-ARS?s Scientific Manuscript database
A contact closure system has been constructed and implemented that utilizes two contact closure sender boards that communicate wirelessly to four contact closure receiver boards to distribute start signals from two or three liquid chromatographs to fourteen instruments, pumps, detectors, or other co...
Navigating Multiple ePortfolios: Lessons Learned from a Capstone Seminar
ERIC Educational Resources Information Center
Richards-Schuster, Katie; Galura, Joseph
2017-01-01
ePortfolios are a growing trend in higher education, implemented by an increasing number of curricular and co-curricular programs. Given the de-centralized nature of many colleges and universities, it is inevitable that faculty requiring ePortfolios, especially as capstone experiences, will engage with students who have completed one or more…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alonso, Jesus
Intelligent Optical Systems, Inc. has developed distributed intrinsic fiber optic sensors to directly quantify the concentration of dissolved or gas-phase CO 2 for leak detection or plume migration in carbon capture and sequestration (CCS). The capability of the sensor for highly sensitive detection of CO 2 in the pressure and temperature range of 15 to 2,000 psi and 25°C to 175°C was demonstrated, as was the capability of operating in highly corrosive and contaminated environments such as those often found in CO 2 injection sites. The novel sensor system was for the first time demonstrated deployed in a deep well,more » detecting multiple CO 2 releases, in real time, at varying depths. Early CO 2 release detection, by means of a sensor cable integrating multiple sensor segments, was demonstrated, as was the capability of quantifying the leak. The novel fiber optic sensor system exhibits capabilities not achieved by any other monitoring technology. This project represents a breakthrough in monitoring capabilities for CCS applications.« less
The relational database model and multiple multicenter clinical trials.
Blumenstein, B A
1989-12-01
The Southwest Oncology Group (SWOG) chose to use a relational database management system (RDBMS) for the management of data from multiple clinical trials because of the underlying relational model's inherent flexibility and the natural way multiple entity types (patients, studies, and participants) can be accommodated. The tradeoffs to using the relational model as compared to using the hierarchical model include added computing cycles due to deferred data linkages and added procedural complexity due to the necessity of implementing protections against referential integrity violations. The SWOG uses its RDBMS as a platform on which to build data operations software. This data operations software, which is written in a compiled computer language, allows multiple users to simultaneously update the database and is interactive with respect to the detection of conditions requiring action and the presentation of options for dealing with those conditions. The relational model facilitates the development and maintenance of data operations software.
NASA Astrophysics Data System (ADS)
Happonen, Ari; Stepanov, Alexander; Hirvimäki, Marika; Manninen, Matti; Dennisuk, William; Piili, Heidi; Salminen, Antti
This study is based on observed outcomes of motivation sources and collaboration elements from a living lab style co-operation project. In this project, researchers of engineering science and an individual artist co-operated closely. The goal was to create an artwork made from corrugated board by utilizing laser cutting technology. In the context of this study, the scientist and the artist participated in the whole process and the research was done in living lab style arrangement. The research process integrated multiple experts from different scientific fields and experts from practical contexts to develop a new art design and art forming process with utilization of laser cutting technology. The purpose of this study was to find out and discuss about the key elements for high motivation to work together and then reveal the best practice findings in this co-operative development process. Elements were studied from three different points of view: artists view, collaboration motivation view and practical cutting point of view. The elements were analysed by utilizing an active documentation collection methodology, during the whole process, and by using story-telling methodology. The documents were used to reflect facts and feelings from the co-operation, the work process and the challenges encountered within collaboration. This article contributes to research methodology and best practice context by revealing the key elements, which build the motivation compelling (as personal inner motivation) the participant to work out of office hours as well as on weekends. Furthermore, as the artist-engineer co-operation is not frequently reported in scientific literature, this study reveals valuable information for practitioners and co-operation researchers.
NASA Astrophysics Data System (ADS)
Canright, David; Osvik, Dag Arne
We explore ways to reduce the number of bit operations required to implement AES. One way involves optimizing the composite field approach for entire rounds of AES. Another way is integrating the Galois multiplications of MixColumns with the linear transformations of the S-box. Combined with careful optimizations, these reduce the number of bit operations to encrypt one block by 9.0%, compared to earlier work that used the composite field only in the S-box. For decryption, the improvement is 13.5%. This work may be useful both as a starting point for a bit-sliced software implementation, where reducing operations increases speed, and also for hardware with limited resources.
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu; Dua, Arti
2016-08-01
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kinetics resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes.
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu; Dua, Arti
2016-08-28
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kinetics resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kineticsmore » resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.« less
Principles of Temporal Processing Across the Cortical Hierarchy.
Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J
2018-05-02
The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure, while robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Knowledge co-production and boundary work to promote implementation of conservation plans.
Nel, Jeanne L; Roux, Dirk J; Driver, Amanda; Hill, Liesl; Maherry, Ashton C; Snaddon, Kate; Petersen, Chantel R; Smith-Adao, Lindie B; Van Deventer, Heidi; Reyers, Belinda
2016-02-01
Knowledge co-production and boundary work offer planners a new frame for critically designing a social process that fosters collaborative implementation of resulting plans. Knowledge co-production involves stakeholders from diverse knowledge systems working iteratively toward common vision and action. Boundary work is a means of creating permeable knowledge boundaries that satisfy the needs of multiple social groups while guarding the functional integrity of contributing knowledge systems. Resulting products are boundary objects of mutual interest that maintain coherence across all knowledge boundaries. We examined how knowledge co-production and boundary work can bridge the gap between planning and implementation and promote cross-sectoral cooperation. We applied these concepts to well-established stages in regional conservation planning within a national scale conservation planning project aimed at identifying areas for conserving rivers and wetlands of South Africa and developing an institutional environment for promoting their conservation. Knowledge co-production occurred iteratively over 4 years in interactive stake-holder workshops that included co-development of national freshwater conservation goals and spatial data on freshwater biodiversity and local conservation feasibility; translation of goals into quantitative inputs that were used in Marxan to select draft priority conservation areas; review of draft priority areas; and packaging of resulting map products into an atlas and implementation manual to promote application of the priority area maps in 37 different decision-making contexts. Knowledge co-production stimulated dialogue and negotiation and built capacity for multi-scale implementation beyond the project. The resulting maps and information integrated diverse knowledge types of over 450 stakeholders and represented >1000 years of collective experience. The maps provided a consistent national source of information on priority conservation areas for rivers and wetlands and have been applied in 25 of the 37 use contexts since their launch just over 3 years ago. When framed as a knowledge co-production process supported by boundary work, regional conservation plans can be developed into valuable boundary objects that offer a tangible tool for multi-agency cooperation around conservation. Our work provides practical guidance for promoting uptake of conservation science and contributes to an evidence base on how conservation efforts can be improved. © 2015 Society for Conservation Biology.
The deep space network, volume 8
NASA Technical Reports Server (NTRS)
1972-01-01
Progress is reported on DSN supporting research and technology, advanced development and engineering, implementation, and operations which pertain to mission-independent or multiple-mission development as well as to support of flight projects.
The intellectual core of enterprise information systems: a co-citation analysis
NASA Astrophysics Data System (ADS)
Shiau, Wen-Lung
2016-10-01
Enterprise information systems (EISs) have evolved in the past 20 years, attracting the attention of international practitioners and scholars. Although literature reviews and analyses have been conducted to examine the multiple dimensions of EISs, no co-citation analysis has been conducted to examine the knowledge structures involved in EIS studies; thus, the current study fills this research gap. This study investigated the intellectual structures of EISs. All data source documents (1083 articles and 24,090 citations) were obtained from the Institute for Scientific Information Web of Knowledge database. A co-citation analysis was used to analyse EIS data. By using factor analysis, we identified eight critical factors: (a) factors affecting the implementation and success of information systems (ISs); (b) the successful implementation of enterprise resource planning (ERP); (c) IS evaluation and success, (d) system science studies; (e) factors influencing ERP success; (f) case research and theoretical models; (g) user acceptance of information technology; and (h) IS frameworks. Multidimensional scaling and cluster analysis were used to visually map the resultant EIS knowledge. It is difficult to implement an EIS in an enterprise and each organisation exhibits specific considerations. The current findings indicate that managers must focus on ameliorating inferior project performance levels, enabling a transition from 'vicious' to 'virtuous' projects. Successful EIS implementation yields substantial organisational advantages.
IPeak: An open source tool to combine results from multiple MS/MS search engines.
Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun
2015-09-01
Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, which is implemented in JAVA and can work on all three major operating system platforms: Windows, Linux/Unix and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output, and also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ardanuy, Philip; Bergen, Bill; Huang, Allen; Kratz, Gene; Puschell, Jeff; Schueler, Carl; Walker, Joe
2006-08-01
The US operates a diverse, evolving constellation of research and operational environmental satellites, principally in polar and geosynchronous orbits. Our current and enhanced future domestic remote sensing capability is complemented by the significant capabilities of our current and potential future international partners. In this analysis, we define "success" through the data customers' "eyes": participating in the sufficient and continuously improving satisfaction of their mission responsibilities. To successfully fuse together observations from multiple simultaneous platforms and sensors into a common, self-consistent, operational environment requires that there exist a unified calibration and validation approach. Here, we consider develop a concept for an integrating framework for absolute accuracy; long-term stability; self-consistency among sensors, platforms, techniques, and observing systems; and validation and characterization of performance. Across all systems, this is a non-trivial problem. Simultaneous Nadir Overpasses, or SNO's, provide a proven intercomparison technique: simultaneous, collocated, co-angular measurements. Many systems have off-nadir elements, or effects, that must be calibrated. For these systems, the nadir technique constrains the process. We define the term "SOON," for simultaneous overpass off nadir. We present a target architecture and sensitivity analysis for the affordable, sustainable implementation of a global SOON calibration/validation network that can deliver the much-needed comprehensive, common, self-consistent operational picture in near-real time, at an affordable cost.
Kirk, Andrew G; Plant, David V; Szymanski, Ted H; Vranesic, Zvonko G; Tooley, Frank A P; Rolston, David R; Ayliffe, Michael H; Lacroix, Frederic K; Robertson, Brian; Bernier, Eric; Brosseau, Daniel F
2003-05-10
Design and implementation of a free-space optical backplane for multiprocessor applications is presented. The system is designed to interconnect four multiprocessor nodes that communicate by using multiplexed 32-bit packets. Each multiprocessor node is electrically connected to an optoelectronic VLSI chip which implements the hyperplane interconnection architecture. The chips each contain 256 optical transmitters (implemented as dual-rail multiple quantum-well modulators) and 256 optical receivers. A rigid free-space microoptical interconnection system that interconnects the transceiver chips in a 512-channel unidirectional ring is implemented. Full design, implementation, and operational details are provided.
NASA Astrophysics Data System (ADS)
Kirk, Andrew G.; Plant, David V.; Szymanski, Ted H.; Vranesic, Zvonko G.; Tooley, Frank A. P.; Rolston, David R.; Ayliffe, Michael H.; Lacroix, Frederic K.; Robertson, Brian; Bernier, Eric; Brosseau, Daniel F.
2003-05-01
Design and implementation of a free-space optical backplane for multiprocessor applications is presented. The system is designed to interconnect four multiprocessor nodes that communicate by using multiplexed 32-bit packets. Each multiprocessor node is electrically connected to an optoelectronic VLSI chip which implements the hyperplane interconnection architecture. The chips each contain 256 optical transmitters (implemented as dual-rail multiple quantum-well modulators) and 256 optical receivers. A rigid free-space microoptical interconnection system that interconnects the transceiver chips in a 512-channel unidirectional ring is implemented. Full design, implementation, and operational details are provided.
Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios
NASA Astrophysics Data System (ADS)
Ozden, Mehmet Tahir
2015-12-01
An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture that has a suitable structure for digital signal processing (DSP) chip and field programable gate array (FPGA) implementations, which are important for software defined radio realizations, is achieved. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In connection with adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of equalizer and desired response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for two-channel case by means of mean squared error (MSE) and probability of error evaluations, which are conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro
Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations.more » Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.« less
Kislov, Roman; Walshe, Kieran; Harvey, Gill
2012-10-15
Effective implementation of change in healthcare organisations involves multiple professional and organisational groups and is often impeded by professional and organisational boundaries that present relatively impermeable barriers to sharing knowledge and spreading work practices. Informed by the theory of communities of practice (CoPs), this study explored the effects of intra-organisational and inter-organisational boundaries on the implementation of service improvement within and across primary healthcare settings and on the development of multiprofessional and multi-organisational CoPs during this process. The study was conducted within the Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for Greater Manchester-a collaborative partnership between the University of Manchester and local National Health Service organisations aiming to undertake applied health research and enhance its implementation in clinical practice. It deployed a qualitative embedded case study design, encompassing semistructured interviews, direct observation and documentary analysis, conducted in 2010-2011. The sample included practice doctors, nurses, managers and members of the CLAHRC implementation team. The study showed that in spite of epistemic and status differences, professional boundaries between general practitioners, practice nurses and practice managers co-located in the same practice over a relatively long period of time could be successfully bridged, leading to the formation of multiprofessional CoPs. While knowledge circulated relatively easily within these CoPs, barriers to knowledge sharing emerged at the boundary separating them from other groups existing in the same primary care setting. The strongest boundaries, however, lay between individual general practices, with inter-organisational knowledge sharing and collaboration between them remaining unequally developed across different areas due to historical factors, competition and strong organisational identification. Manipulated emergence of multi-organisational CoPs in the context of primary care may thus be problematic. In cases when manipulated emergence of new CoPs is problematic, boundary issues could be addressed by adopting a developmental perspective on CoPs, which provides an alternative to the analytical and instrumental perspectives previously described in the CoP literature. This perspective implies a pragmatic, situational approach to mapping existing CoPs and their characteristics and potentially modifying them in the process of service improvement through the combination of internal and external facilitation.
Community engagement research and dual diagnosis anonymous.
Roush, Sean; Monica, Corbett; Pavlovich, Danny; Drake, Robert E
2015-01-01
Community engagement research is widely discussed but rarely implemented. This article describes the implementation of a community engagement research project on Dual Diagnosis Anonymous, a rapidly spreading peer support program in Oregon for people with co-occurring mental illness and substance use disorders. After three years of discussions, overcoming barriers, and involving several institutions, this grassroots research project has been implemented and is expanding. Active participants in Dual Diagnosis Anonymous inspired and instructed policy makers, professionals, and students. Community engagement research requires frontline participants, community members, and professional collaborators to overcome multiple barriers with persistence and steadfastness. Building trust, collaboration, and structures for community engagement research takes time and a community effort.
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors and APS devices in particular by achieving more than one integration times for each pixel thereof. An APS system with more than one column-parallel signal chains for readout are described for maintaining a high frame rate in readout. Each active pixel is sampled for multiple times during a single frame readout, thus resulting in multiple integration times. The operation methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.
ERIC Educational Resources Information Center
Scott, Lee-Allison
2003-01-01
The first wireless technology program for preschoolers was implemented in January at the Primrose School at Bentwater in Atlanta, Georgia, a new corporate school operated by Primrose School Franchising Co. The new school serves as a testing and training facility for groundbreaking educational approaches, including emerging innovations in…
NASA Technical Reports Server (NTRS)
Klarer, Paul
1993-01-01
An approach for a robotic control system which implements so called 'behavioral' control within a realtime multitasking architecture is proposed. The proposed system would attempt to ameliorate some of the problems noted by some researchers when implementing subsumptive or behavioral control systems, particularly with regard to multiple processor systems and realtime operations. The architecture is designed to allow synchronous operations between various behavior modules by taking advantage of a realtime multitasking system's intertask communications channels, and by implementing each behavior module and each interconnection node as a stand-alone task. The potential advantages of this approach over those previously described in the field are discussed. An implementation of the architecture is planned for a prototype Robotic All Terrain Lunar Exploration Rover (RATLER) currently under development and is briefly described.
Francis-Coad, Jacqueline; Etherton-Beer, Christopher; Bulsara, Caroline; Nobre, Debbie; Hill, Anne-Marie
The aims of this study were to evaluate establishing and operating a web-based community of practice (CoP) to lead falls prevention in a residential aged care (RAC) setting. A mixed methods evaluation was conducted in two phases using a survey and transcripts from interactive electronic sources. Nurses and allied health staff (n = 20) with an interest in falls prevention representing 13 sites of an RAC organization participated. In Phase 1, the CoP was developed, and the establishment of its structure and composition was evaluated using determinants of success reported in the literature. In Phase 2, all participants interacted using the web, but frequency of engagement by any participant was low. Participatory barriers, including competing demands from other tasks and low levels of knowledge about information communication technology (ICT) applications, were identified by CoP members. A web-based CoP can be established and operated across multiple RAC sites if RAC management support dedicated time for web-based participation and staff are given web-based training. Copyright © 2016 Elsevier Inc. All rights reserved.
A computer program for the design of optimum catalytic monoliths for CO2 lasers
NASA Technical Reports Server (NTRS)
Guinn, K.; Goldblum, S.; Noskowski, E.; Herz, R.
1990-01-01
Pulsed CO2 lasers have many applications in aeronautics, space research, weather monitoring and other areas. Full exploitation of the potential of these lasers is hampered by the dissociation of CO2 that occurs during laser operation. The development of closed-cycle CO2 lasers requires active CO-O2 recombination (CO oxidation) catalysts and design methods for implementation of catalysts inside lasers. The performance criteria and constraints involved in the design of catalyst configurations for use in a closed-cycle laser are discussed, and several design studies performed with a computerized design program that was written are presented. Trade-offs between catalyst activity and dimensions, flow channel dimensions, pressure drop, O2 conversion and other variables are discussed.
A survey of the state of the art and focused research in range systems, task 2
NASA Technical Reports Server (NTRS)
Yao, K.
1986-01-01
Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Such optimized subsystems result in full precision multiplications in the TDL. In order to reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions which are equivalent to powers of two multiplications. Since, in general, the obvious operation of rounding the infinite precision TDL coefficients to the nearest powers of two usually yield quite poor system performance, the optimum powers of two coefficient solution was considered. Detailed explanations on the use of branch-and-bound algorithms for finding the optimum powers of two solutions are given. Specific demonstration of this methodology to the design of a linear data equalizer and its implementation in assembly language on a 8080 microprocessor with a 12 bit A/D converter are reported. This simple microprocessor implementation with optimized TDL coefficients achieves a system performance comparable to the optimum linear equalization with full precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is dully applicable to many other microprocessor controlled information processing systems.
'Design of CO-O2 recombination catalysts for closed-cycle CO2 lasers'
NASA Technical Reports Server (NTRS)
Guinn, K.; Goldblum, S.; Noskowski, E.; Herz, R.
1989-01-01
Pulsed CO2 lasers have many applications in aeronautics, space research, weather monitoring and other areas. Full exploitation of the potential of these lasers is hampered by the dissociation of CO2 that occurs during laser operation. The development of closed-cycle CO2 lasers requires active CO-O2 recombination (CO oxidation) catalysts and design methods for implementation of catalysts inside lasers. This paper will discuss the performance criteria and constraints involved in the design of monolith catalyst configurations for use in a closed-cycle laser and will present a design study performed with a computerized design program that had been written. Trade-offs between catalyst activity and dimensions, flow channel dimensions, pressure drop, O2 conversion and other variables will be discussed.
Innovative Seismoeletromagnetic Research at the front of the Hellenic Arc
NASA Astrophysics Data System (ADS)
Makris, John P.; Chiappini, Massimo; Nardi, Adriano; Carluccio, Roberto; Rigakis, Hercules; Hloupis, George; Fragkiadakis, Kostantinos; Pentaris, Fragkiskos; Saltas, Vassilios; Vallianatos, Filippos
2013-04-01
Taking into account the complex nature and rarity of strong seismic events, as well as the form multiplicity and timing variety of possible preseismic signatures, the predominant view of the scientific community still seems nowadays to lean against earthquake prediction, especially the short-term one. On the other hand, seismoelectromagnetic (SEM) research appears to be a promising approach to earthquake prediction research. In this context, the project TeCH-SEM [Technologies Coalescence for Holistic Seismoelectromagnetic Research (Lithosphere-Atmosphere-Ionosphere Coupling)] aims to establish an integrated approach to SEM investigation, by developing and implementing novel-innovative technologies for the study of pre-seismic electric, magnetic and electromagnetic signatures in a broadband spectrum (ULF-ELF-VLF-LF-HF). In this framework, at the natural laboratory of the seismically active south- and south-western part of the Hellenic Arc (broader region of Crete) is being developed a permanent network of ULF-ELF seismoelectromagnetic stations featuring novel design that provides real-time telemetry, extended autonomy, light-weight and small-size but robust and powerful datalogging and self-diagnostics for reliable, long-term operation. This network is complemented by the simultaneous deployment of an innovative ELF-VLF seismoelectromagnetic telemetric network that will attempt to detect, in real conditions, VLF electromagnetic transients that have been repeatedly observed in the laboratory to be emitted from rock samples with various lithologies subjected to fracture under uniaxial compression. Both networks, it is anticipated to remain in operation for many years. Acknowledgements This research is implemented in the framework of the project entitled "Technologies Coalescence for Holistic Seismoelectromagnetic Research (Lithosphere-Atmosphere-Ionosphere Coupling)" of the Archimedes III Call through the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of an microprocessor is a long, tedious, and error-prone task consisting of typically three design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for instruction-set architectures (LISA) allows to model a microprocessor not only from instruction-set but also from architecture description including pipelining behavior that allows a design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform a.k.a. CoWare Processor Designer we present in this paper three microprocessor designs that implement a 8/8 wavelet transform processor that is typically used in today's FBI fingerprint compression scheme. We have designed a 3 stage pipelined 16 bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concept like constant instruction word size, deep pipelines and many general purpose registers, it turns out that DSP operations consume essential processing time in a RISC processor. In a second step we have used design principles from programmable digital signal processor (PDSP) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operation were the key to achieve higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP has compared with traditional RISC or PDSP designs.
NASA Technical Reports Server (NTRS)
Matty, Christopher M.
2010-01-01
Crewed space vehicles have a common requirement to remove the carbon dioxide (CO2) created by the metabolic processes of the crew. The space shuttle [Space Transportation System (STS)] and International Space Station (ISS) each have systems in place that allow control and removal of CO2 from the habitable cabin environment. During periods in which the space shuttle is docked to the ISS, known as "joint docked operations," the space shuttle and ISS share a common atmosphere environment. During this period, an elevated amount of CO2 is produced through the combined metabolic activity of the STS and ISS crews. This elevated CO2 production, together with the large effective atmosphere created by collective volumes of the docked vehicles, creates a unique set of requirements for CO2 removal. This paper will describe individual CO2 control plans implemented by STS and ISS engineering teams, as well as the integrated plans used when both vehicles are docked. The paper will also discuss some of the issues and anomalies experienced by both engineering teams.
NASA Technical Reports Server (NTRS)
Matty, Christopher M.; Hayley, Elizabeth P.
2009-01-01
Manned space vehicles have a common requirement to remove the Carbon Dioxide (CO2) created by the metabolic processes of the crew. The Space Shuttle and International Space Station (ISS) each have systems in place to allow control and removal of CO2 from the habitable cabin environment. During periods where the Space Shuttle is docked to ISS, known as joint docked operations, the Space Shuttle and ISS share a common atmosphere environment. During this period there is an elevated production of CO2 caused by the combined metabolic activity of the Space Shuttle and ISS crew. This elevated CO2 production, combined with the large effective atmosphere created by the collective volumes of the docked vehicles, creates a unique set of requirements for CO2 removal. This paper will describe the individual CO2 control plans implemented by the Space Shuttle and ISS engineering teams, as well as the integrated plans used when both vehicles are docked. In addition, the paper will discuss some of the issues and anomalies experienced by both engineering teams.
NASA Technical Reports Server (NTRS)
Orcutt, John M.; Barbre, Robert E., Jr.; Brenton, James C.; Decker, Ryan K.
2017-01-01
Tropospheric winds are an important driver of the design and operation of space launch vehicles. Multiple types of weather balloons and Doppler Radar Wind Profiler (DRWP) systems exist at NASA's Kennedy Space Center (KSC), co-located on the United States Air Force's (USAF) Eastern Range (ER) at the Cape Canaveral Air Force Station (CCAFS), that are capable of measuring atmospheric winds. Meteorological data gathered by these instruments are being used in the design of NASA's Space Launch System (SLS) and other space launch vehicles, and will be used during the day-of-launch (DOL) of SLS to aid in loads and trajectory analyses. For the purpose of SLS day-of-launch needs, the balloons have the altitude coverage needed, but take over an hour to reach the maximum altitude and can drift far from the vehicle's path. The DRWPs have the spatial and temporal resolutions needed, but do not provide complete altitude coverage. Therefore, the Natural Environments Branch (EV44) at Marshall Space Flight Center (MSFC) developed the Profile Envision and Splice Tool (PRESTO) to combine balloon profiles and profiles from multiple DRWPs, filter the spliced profile to a common wavelength, and allow the operator to generate output files as well as to visualize the inputs and the spliced profile for SLS DOL operations. PRESTO was developed in Python taking advantage of NumPy and SciPy for the splicing procedure, matplotlib for the visualization, and Tkinter for the execution of the graphical user interface (GUI). This paper describes in detail the Python coding implementation for the splicing, filtering, and visualization methodology used in PRESTO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L
2013-01-01
Many scientific simulations, using the Message Passing Interface (MPI) programming model, are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions to perform mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for various communication mechanisms in the system 2) providing the ability to configure the depth ofmore » hierarchy to match the system architecture, and 3) providing the ability to independently progress each of this hierarchy. Using this design, we implement MPI Allreduce and MPI Reduce operations (and its nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, which is a framework for implementing hierarchical collective operations to implement these reductions. The experimental results show that the Cheetah reduction operations outperform the production-grade MPI implementations such as Open MPI default, Cray MPI, and MVAPICH2, demonstrating its efficiency, flexibility and portability. On Infini- Band systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieves a speedup of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations also show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x, compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms the Cray MPI by 145%. The evaluation with an application kernel, Conjugate Gradient solver, shows that the Cheetah reductions speeds up total time to solution by 195%, demonstrating the potential benefits for scientific simulations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinclair, Karin C
This fact sheet covers the work that is being done via the International Energy Agency Task 34 (WREN). The fact sheet highlights the objective, strategy, primary activities, members, and contacts for this task.
Jourdin, Ludovic; Freguia, Stefano; Flexer, Victoria; Keller, Jurg
2016-02-16
The enhancement of microbial electrosynthesis (MES) of acetate from CO2 to performance levels that could potentially support practical implementations of the technology must go through the optimization of key design and operating conditions. We report that higher proton availability drastically increases the acetate production rate, with pH 5.2 found to be optimal, which will likely suppress methanogenic activity without inhibitor addition. Applied cathode potential as low as -1.1 V versus SHE still achieved 99% of electron recovery in the form of acetate at a current density of around -200 A m(-2). These current densities are leading to an exceptional acetate production rate of up to 1330 g m(-2) day(-1) at pH 6.7. Using highly open macroporous reticulated vitreous carbon electrodes with macropore sizes of about 0.6 mm in diameter was found to be optimal for achieving a good balance between total surface area available for biofilm formation and effective mass transfer between the bulk liquid and the electrode and biofilm surface. Furthermore, we also successfully demonstrated the use of a synthetic biogas mixture as carbon dioxide source, yielding similarly high MES performance as pure CO2. This would allow this process to be used effectively for both biogas quality improvement and conversion of the available CO2 to acetate.
Implementing a Multiple Criteria Model Base in Co-Op with a Graphical User Interface Generator
1993-09-23
PROMETHEE ................................ 44 A. THE ALGORITHM S ................................... 44 1. Basic Algorithm of PROMETHEE I and... PROMETHEE II ..... 45 a. Use of the Algorithm in PROMETHEE I ............. 49 b. Use of the Algorithm in PROMETHEE II ............. 50 V 2. Algorithm of... PROMETHEE V ......................... 50 B. SCREEN DESIGNS OF PROMETHEE ...................... 51 1. PROMETHEE I and PROMETHEE II ................... 52 a
A Review of Software Maintenance Technology.
1980-02-01
LABS HONEYWELL, REF BURROUGHS, OTHERS BOOLE £ LANGUAGE "PE BABBAGE , OPERATIONAL INDEPENDENT 4.2.17 INC. IBM H M RELIABILITY MOST LARGE U. MEASUREMENT...80-13 ML llluuluunuuuuu SmeeI..... f. Maintenance Experience (1) Multiple Implementation Charles Holmes (Source 2) described two attempts at McDonnell...proprietary software monitor package distributed by Boole and Babbage , Inc., Sunnyvale, California. it has been implemented on IBM computers and is language
Integrated mobile robot control
NASA Technical Reports Server (NTRS)
Amidi, Omead; Thorpe, Charles
1991-01-01
This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller was shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.
Integrated mobile robot control
NASA Astrophysics Data System (ADS)
Amidi, Omead; Thorpe, Chuck E.
1991-03-01
This paper describes the strucwre implementation and operation of a real-time mobile robot controller which integrates capabilities such as: position estimation path specification and hacking human interfaces fast communication and multiple client support The benefits of such high-level capabilities in a low-level controller was shown by its implementation for the Naviab autonomous vehicle. In addition performance results from positioning and tracking systems are reported and analyzed.
NASA Astrophysics Data System (ADS)
Sturgess, G. J.; Syed, S. A.
1982-06-01
A numerical simulation is made of the flow in the Wright Aeronautical Propulsion Laboratory diffusion flame research combustor operating with a strong central jet of carbon dioxide in a weak and removed co-axial jet of air. The simulation is based on a finite difference solution of the time-average, steady-state, elliptic form of the Reynolds equations. Closure for these equations is provided by a two-equation turbulence model. Comparisons between measurements and predictions are made for centerline axial velocities and radial profiles of CO2 concentration. Earlier findings for a single specie, constant density, single jet flow that a large expansion ratio confined jet behaves initially as if it were unconfined, are confirmed for the multiple-specie, variable density, multiple-jet system. The lack of universality in the turbulence model constants and the turbulent Schmidt/Prandtl number is discussed.
Paul S Wills, PhD; Pfeiffer, Timothy; Baptiste, Richard; Watten, Barnaby J.
2016-01-01
Control of alkalinity, dissolved carbon dioxide (dCO2), and pH are critical in marine recirculating aquaculture systems (RAS) in order to maintain health and maximize growth. A small-scale prototype aragonite sand filled fluidized bed reactor was tested under varying conditions of alkalinity and dCO2 to develop and model the response of dCO2 across the reactor. A large-scale reactor was then incorporated into an operating marine recirculating aquaculture system to observe the reactor as the system moved toward equilibrium. The relationship between alkalinity dCO2, and pH across the reactor are described by multiple regression equations. The change in dCO2 across the small-scale reactor indicated a strong likelihood that an equilibrium alkalinity would be maintained by using a fluidized bed aragonite reactor. The large-scale reactor verified this observation and established equilibrium at an alkalinity of approximately 135 mg/L as CaCO3, dCO2 of 9 mg/L, and a pH of 7.0 within 4 days that was stable during a 14 day test period. The fluidized bed aragonite reactor has the potential to simplify alkalinity and pH control, and aid in dCO2 control in RAS design and operation. Aragonite sand, purchased in bulk, is less expensive than sodium bicarbonate and could reduce overall operating production costs.
Park, Jae Hong; Peters, Thomas M.; Altmaier, Ralph; Jones, Samuel M.; Gassman, Richard; Anthony, T. Renée
2017-01-01
We have developed a time-dependent simulation model to estimate in-room concentrations of multiple contaminants [ammonia (NH3), carbon dioxide (CO2), carbon monoxide (CO) and dust] as a function of increased ventilation with filtered recirculation for swine farrowing facilities. Energy and mass balance equations were used to simulate the indoor air quality (IAQ) and operational cost for a variety of ventilation conditions over a 3-month winter period for a facility located in the Midwest U.S., using simplified and real-time production parameters, comparing results to field data. A revised model was improved by minimizing the sum of squared errors (SSE) between modeled and measured NH3 and CO2. After optimizing NH3 and CO2, other IAQ results from the simulation were compared to field measurements using linear regression. For NH3, the coefficient of determination (R2) for simulation results and field measurements improved from 0.02 with the original model to 0.37 with the new model. For CO2, the R2 for simulation results and field measurements was 0.49 with the new model. When the makeup air was matched to hallway air CO2 concentrations (1,500 ppm), simulation results showed the smallest SSE. With the new model, the R2 for other contaminants were 0.34 for inhalable dust, 0.36 for respirable dust, and 0.26 for CO. Operation of the air cleaner decreased inhalable dust by 35% and respirable dust concentrations by 33%, while having no effect on NH3, CO2, in agreement with field data, and increasing operational cost by $860 (58%) for the three-month period. PMID:28775911
Briefing paper : toward a best practice model for managed lanes in Texas.
DOT National Transportation Integrated Search
2013-09-01
Over the past two decades, agencies : have increasingly implemented managed : lanes (MLs) to mitigate growing urban traffic : congestion in the United States. Multiple operating : projects : representing a combination : of HOV-to-HOT conversions a...
NASA Technical Reports Server (NTRS)
1975-01-01
Work accomplished on the Deep Space Network (DSN) was described, including the following topics: supporting research and technology, advanced development and engineering, system implementation, and DSN operations pertaining to mission-independent or multiple-mission development as well as to support of flight projects.
1988-10-03
full achievable region is achievable if there is only a bounded degree of asynchronism. E. Arikan , in a Ph.D. thesis [Ari85], extended sequential...real co-operation is required to reduce the number of transmissions to O(log log N). 14 REFERENCES [Ari85] E. Arikan , "Sequential Decoding for Multiple
Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.
Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F
1980-01-01
Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
Gschwind, Michael K [Chappaqua, NY
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
Brunette, Mary F; Asher, Dianne; Whitley, Rob; Lutz, Wilma J; Wieder, Barbara L; Jones, Amanda M; McHugo, Gregory J
2008-09-01
Approximately half of the people who have serious mental illnesses experience a co-occurring substance use disorder at some point in their lifetime. Integrated dual disorders treatment, a program to treat persons with co-occurring disorders, improves outcomes but is not widely available in public mental health settings. This report describes the extent to which this intervention was implemented by 11 community mental health centers participating in a large study of practice implementation. Facilitators and barriers to implementation are described. Trained implementation monitors conducted regular site visits over two years. During visits, monitors interviewed key informants, conducted ethnographic observations of implementation efforts, and assessed fidelity to the practice model. These data were coded and used as a basis for detailed site reports summarizing implementation processes. The authors reviewed the reports and distilled the three top facilitators and barriers for each site. The most prominent cross-site facilitators and barriers were identified. Two sites reached high fidelity, six sites reached moderate fidelity, and three sites remained at low fidelity over the two years. Prominent facilitators and barriers to implementation with moderate to high fidelity were administrative leadership, consultation and training, supervisor mastery and supervision, chronic staff turnover, and finances. Common facilitators and barriers to implementation of integrated dual disorders treatment emerged across sites. The results confirmed the importance of the use of the consultant-trainer in the model of implementation, as well as the need for intensive activities at multiple levels to facilitate implementation. Further research on service implementation is needed, including but not limited to clarifying strategies to overcome barriers.
Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Terry R
2012-01-01
This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.
Microalgae biorefinery: High value products perspectives.
Chew, Kit Wayne; Yap, Jing Ying; Show, Pau Loke; Suan, Ng Hui; Juan, Joon Ching; Ling, Tau Chuan; Lee, Duu-Jong; Chang, Jo-Shu
2017-04-01
Microalgae have received much interest as a biofuel feedstock in response to the uprising energy crisis, climate change and depletion of natural sources. Development of microalgal biofuels from microalgae does not satisfy the economic feasibility of overwhelming capital investments and operations. Hence, high-value co-products have been produced through the extraction of a fraction of algae to improve the economics of a microalgae biorefinery. Examples of these high-value products are pigments, proteins, lipids, carbohydrates, vitamins and anti-oxidants, with applications in cosmetics, nutritional and pharmaceuticals industries. To promote the sustainability of this process, an innovative microalgae biorefinery structure is implemented through the production of multiple products in the form of high value products and biofuel. This review presents the current challenges in the extraction of high value products from microalgae and its integration in the biorefinery. The economic potential assessment of microalgae biorefinery was evaluated to highlight the feasibility of the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Designing software for operational decision support through coloured Petri nets
NASA Astrophysics Data System (ADS)
Maggi, F. M.; Westergaard, M.
2017-05-01
Operational support provides, during the execution of a business process, replies to questions such as 'how do I end the execution of the process in the cheapest way?' and 'is my execution compliant with some expected behaviour?' These questions may be asked several times during a single execution and, to answer them, dedicated software components (the so-called operational support providers) need to be invoked. Therefore, an infrastructure is needed to handle multiple providers, maintain data between queries about the same execution and discard information when it is no longer needed. In this paper, we use coloured Petri nets (CPNs) to model and analyse software implementing such an infrastructure. This analysis is needed to clarify the requirements before implementation and to guarantee that the resulting software is correct. To this aim, we present techniques to represent and analyse state spaces with 250 million states on a normal PC. We show how the specified requirements have been implemented as a plug-in of the process mining tool ProM and how the operational support in ProM can be used in combination with an existing operational support provider.
Urban governance and the systems approaches to health-environment co-benefits in cities.
Oliveira, Jose A Puppim de; Doll, Christopher N H; Siri, José; Dreyfus, Magali; Farzaneh, Hooman; Capon, Anthony
2015-11-01
The term "co-benefits" refers to positive outcomes accruing from a policy beyond the intended outcome, often or usually in other sectors. In the urban context, policies implemented in particular sectors (such as transport, energy or waste) often generate multiple co-benefits in other areas. Such benefits may be related to the reduction of local or global environmental impacts and also extend into the area of public health. A key to identifying and realising co-benefits is the adoption of systems approaches to understand inter-sectoral linkages and, in particular, the translation of this understanding to improved sector-specific and city governance. This paper reviews a range of policies which can yield health and climate co-benefits across different urban sectors and illustrates, through a series of cases, how taking a systems approach can lead to innovations in urban governance which aid the development of healthy and sustainable cities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Binbin; Liu, Lihong; Cui, Ganglong
2015-11-21
In this work, the recently introduced quantum trajectory mean-field (QTMF) approach is implemented and employed to explore photodissociation dynamics of diazirinone (N{sub 2}CO), which are based on the high-level ab initio calculation. For comparison, the photodissociation process has been simulated as well with the fewest-switches surface hopping (FSSH) and the ab initio multiple spawning (AIMS) methods. Overall, the dynamical behavior predicted by the three methods is consistent. The N{sub 2}CO photodissociation at λ > 335 nm is an ultrafast process and the two C—N bonds are broken in a stepwise way, giving birth to CO and N{sub 2} as themore » final products in the ground state. Meanwhile, some noticeable differences were found in the QTMF, FSSH, and AIMS simulated time constants for fission of the C—N bonds, excited-state lifetime, and nonadiabatic transition ratios in different intersection regions. These have been discussed in detail. The present study provides a clear evidence that direct ab initio QTMF approach is one of the reliable tools for simulating nonadiabatic dynamics processes.« less
Efficient electrochemical CO2 conversion powered by renewable energy.
Kauffman, Douglas R; Thakkar, Jay; Siva, Rajan; Matranga, Christopher; Ohodnicki, Paul R; Zeng, Chenjie; Jin, Rongchao
2015-07-22
The catalytic conversion of CO2 into industrially relevant chemicals is one strategy for mitigating greenhouse gas emissions. Along these lines, electrochemical CO2 conversion technologies are attractive because they can operate with high reaction rates at ambient conditions. However, electrochemical systems require electricity, and CO2 conversion processes must integrate with carbon-free, renewable-energy sources to be viable on larger scales. We utilize Au25 nanoclusters as renewably powered CO2 conversion electrocatalysts with CO2 → CO reaction rates between 400 and 800 L of CO2 per gram of catalytic metal per hour and product selectivities between 80 and 95%. These performance metrics correspond to conversion rates approaching 0.8-1.6 kg of CO2 per gram of catalytic metal per hour. We also present data showing CO2 conversion rates and product selectivity strongly depend on catalyst loading. Optimized systems demonstrate stable operation and reaction turnover numbers (TONs) approaching 6 × 10(6) molCO2 molcatalyst(-1) during a multiday (36 h total hours) CO2 electrolysis experiment containing multiple start/stop cycles. TONs between 1 × 10(6) and 4 × 10(6) molCO2 molcatalyst(-1) were obtained when our system was powered by consumer-grade renewable-energy sources. Daytime photovoltaic-powered CO2 conversion was demonstrated for 12 h and we mimicked low-light or nighttime operation for 24 h with a solar-rechargeable battery. This proof-of-principle study provides some of the initial performance data necessary for assessing the scalability and technical viability of electrochemical CO2 conversion technologies. Specifically, we show the following: (1) all electrochemical CO2 conversion systems will produce a net increase in CO2 emissions if they do not integrate with renewable-energy sources, (2) catalyst loading vs activity trends can be used to tune process rates and product distributions, and (3) state-of-the-art renewable-energy technologies are sufficient to power larger-scale, tonne per day CO2 conversion systems.
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...
1995-01-01
In this article, we present a program generation strategy of Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier Transforms and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storagemore » of size O(7 n ) for multiplying 2 n × 2 n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4 n ). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.« less
Mobile-Device-Supported Strategy for Chinese Reading Comprehension
ERIC Educational Resources Information Center
Chang, Kuo-En; Lan, Yu-Ju; Chang, Chien-Mei; Sung, Yao-Ting
2010-01-01
The work described in this paper explores the feasibility of using of a wireless handheld system (WHS) that supports the individual and co-operative reading activities of students and helps teachers implement reading strategy instruction in Chinese language classes. The experimental findings demonstrate that the WHS benefits students applying…
Morpho-Structural Characterization of WC20Co Deposited Layers
NASA Astrophysics Data System (ADS)
Tugui, C. A.; Vizureanu, P.
2017-06-01
Hydroelectric power plants use the power of water to produce electricity. In this paper we propose a solution that will increase the efficiency of turbine operation by implementing new innovative technologies to increase the working characteristics by depositing hard thin films of tungsten carbide. For this purpose hard tough deposits with WC20Co and Jet Plasma Jet on X3CrNiMo13-4 stainless steel were used for the realization of the Francis turbine with vertical shaft.
2011-03-01
operations erent types of lenge categor le pattern of of scenarios, ersonnel are nalysis s based upon . See Section t question th iffer on their...environmen change may Multiple con achievable a Underspecif Independent (they may h tential scen e complexity verall areas t ere both con ry operations rather...scenario de iption of the ghanistan operations co t as a liaison hanistan Nat of Security (N here was a m ut the level o in security. I security in
An efficient implementation of Forward-Backward Least-Mean-Square Adaptive Line Enhancers
NASA Technical Reports Server (NTRS)
Yeh, H.-G.; Nguyen, T. M.
1995-01-01
An efficient implementation of the forward-backward least-mean-square (FBLMS) adaptive line enhancer is presented in this article. Without changing the characteristics of the FBLMS adaptive line enhancer, the proposed implementation technique reduces multiplications by 25% and additions by 12.5% in two successive time samples in comparison with those operations of direct implementation in both prediction and weight control. The proposed FBLMS architecture and algorithm can be applied to digital receivers for enhancing signal-to-noise ratio to allow fast carrier acquisition and tracking in both stationary and nonstationary environments.
Verification and Planning Based on Coinductive Logic Programming
NASA Technical Reports Server (NTRS)
Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal
2008-01-01
Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point's semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fix point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fix point based computations) and is termed co- SLD resolution. In co-SLD resolution, a predicate goal p( t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and Co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and Co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model- checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e, if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
Wu, Chun-Chang; Chuang, Wen-Yu; Wu, Ching-Da; Su, Yu-Cheng; Huang, Yung-Yang; Huang, Yang-Jing; Peng, Sheng-Yu; Yu, Shih-An; Lin, Chih-Ting; Lu, Shey-Shi
2017-01-01
A self-sustained multi-sensor platform for indoor environmental monitoring is proposed in this paper. To reduce the cost and power consumption of the sensing platform, in the developed platform, organic materials of PEDOT:PSS and PEDOT:PSS/EB-PANI are used as the sensing films for humidity and CO2 detection, respectively. Different from traditional gas sensors, these organic sensing films can operate at room temperature without heating processes or infrared transceivers so that the power consumption of the developed humidity and the CO2 sensors can be as low as 10 μW and 5 μW, respectively. To cooperate with these low-power sensors, a Complementary Metal-Oxide-Semiconductor (CMOS) system-on-chip (SoC) is designed to amplify and to read out multiple sensor signals with low power consumption. The developed SoC includes an analog-front-end interface circuit (AFE), an analog-to-digital convertor (ADC), a digital controller and a power management unit (PMU). Scheduled by the digital controller, the sensing circuits are power gated with a small duty-cycle to reduce the average power consumption to 3.2 μW. The designed PMU converts the power scavenged from a dye sensitized solar cell (DSSC) module into required supply voltages for SoC circuits operation under typical indoor illuminance conditions. To our knowledge, this is the first multiple environmental parameters (Temperature/CO2/Humidity) sensing platform that demonstrates a true self-powering functionality for long-term operations. PMID:28353680
Assisting Design Given Multiple Performance Criteria
1988-08-01
with uninstantiated operators is created then each operator’s implementation is selected. g - Keywords: computer-aided design, artificial...IEEE Trans- actions on Software Engineering, SE-7(1), 1981. [BG86] Forrest D. Brewer and Daniel D. Gajski . An expert-system paradigm for de- sign. In...Teukolsky, api William T. Vet- terling. Numerical Recipes. Cambridge University Press, Cambridge, England, 1987. [RFS83] G . G . Rassweiler, M. D
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolosz, Ben, E-mail: kolosz27@gmail.com; Grant-Muller, Susan, E-mail: S.M.Grant-Muller@its.leeds.ac.uk
The paper reports research involving three cost–benefit analyses performed on different ITS schemes (Active Traffic Management, Intelligent Speed Adaptation and the Automated Highway System) on one of the UK's busiest highways — the M42. The environmental scope of the assets involved is widened to take into account the possibility of new technology linked by ICT and located within multiple spatial regions. The areas focused on in the study were data centre energy emissions, the embedded emissions of the road-side infrastructure, vehicle tailpipe emissions, additional hardware required by the vehicles (if applicable) and safety, and all aspects of sustainability. Dual discountingmore » is applied which aims to provide a separate discount rate for environmental elements. For ATM, despite the energy costs of the data centre, the initial implementation costs and mitigation costs of its embedded emissions, a high cost–benefit ratio of 5.89 is achieved, although the scheme becomes less effective later on its lifecycle due to rising costs of energy. ISA and AHS generate a negative result, mainly due to the cost of getting the vehicle on the road. In order to negate these costs, the pricing of the vehicle should be scaled depending upon the technology that is outfitted. Retrofitting on vehicles without the technology should be paid for by the driver. ATM will offset greenhouse gas emissions by 99 kt of CO{sub 2} equivalency over a 25 year lifespan. This reduction has taken into account the expected improvement in vehicle technology. AHS is anticipated to save 280 kt of CO{sub 2} equivalency over 15 years of operational usage. However, this offset is largely dependent on assumptions such as the level of market penetration. - Highlights: • Three cost–benefit analyses are applied to inter-urban intelligent transport. • For ATM, a high cost–benefit ratio of 5.89 is achieved. • ATM offsets greenhouse gas emissions by 99 kt of CO{sub 2} equivalency over 25 years. • ISA and AHS generate a negative result due to vehicle implementation costs. • AHS is anticipated to save 280 kt of CO{sub 2} equivalency over 15 years.« less
Tomasula, P M; Yee, W C F; McAloon, A J; Nutter, D W; Bonnaillie, L M
2013-05-01
Energy-savings measures have been implemented in fluid milk plants to lower energy costs and the energy-related carbon dioxide (CO2) emissions. Although these measures have resulted in reductions in steam, electricity, compressed air, and refrigeration use of up to 30%, a benchmarking framework is necessary to examine the implementation of process-specific measures that would lower energy use, costs, and CO2 emissions even further. In this study, using information provided by the dairy industry and equipment vendors, a customizable model of the fluid milk process was developed for use in process design software to benchmark the electrical and fuel energy consumption and CO2 emissions of current processes. It may also be used to test the feasibility of new processing concepts to lower energy and CO2 emissions with calculation of new capital and operating costs. The accuracy of the model in predicting total energy usage of the entire fluid milk process and the pasteurization step was validated using available literature and industry energy data. Computer simulation of small (40.0 million L/yr), medium (113.6 million L/yr), and large (227.1 million L/yr) processing plants predicted the carbon footprint of milk, defined as grams of CO2 equivalents (CO2e) per kilogram of packaged milk, to within 5% of the value of 96 g of CO 2e/kg of packaged milk obtained in an industry-conducted life cycle assessment and also showed, in agreement with the same study, that plant size had no effect on the carbon footprint of milk but that larger plants were more cost effective in producing milk. Analysis of the pasteurization step showed that increasing the percentage regeneration of the pasteurizer from 90 to 96% would lower its thermal energy use by almost 60% and that implementation of partial homogenization would lower electrical energy use and CO2e emissions of homogenization by 82 and 5.4%, respectively. It was also demonstrated that implementation of steps to lower non-process-related electrical energy in the plant would be more effective in lowering energy use and CO2e emissions than fuel-related energy reductions. The model also predicts process-related water usage, but this portion of the model was not validated due to a lack of data. The simulator model can serve as a benchmarking framework for current plant operations and a tool to test cost-effective process upgrades or evaluate new technologies that improve the energy efficiency and lower the carbon footprint of milk processing plants. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
2012-01-01
Background Effective implementation of change in healthcare organisations involves multiple professional and organisational groups and is often impeded by professional and organisational boundaries that present relatively impermeable barriers to sharing knowledge and spreading work practices. Informed by the theory of communities of practice (CoPs), this study explored the effects of intra-organisational and inter-organisational boundaries on the implementation of service improvement within and across primary healthcare settings and on the development of multiprofessional and multi-organisational CoPs during this process. Methods The study was conducted within the Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for Greater Manchester—a collaborative partnership between the University of Manchester and local National Health Service organisations aiming to undertake applied health research and enhance its implementation in clinical practice. It deployed a qualitative embedded case study design, encompassing semistructured interviews, direct observation and documentary analysis, conducted in 2010–2011. The sample included practice doctors, nurses, managers and members of the CLAHRC implementation team. Findings The study showed that in spite of epistemic and status differences, professional boundaries between general practitioners, practice nurses and practice managers co-located in the same practice over a relatively long period of time could be successfully bridged, leading to the formation of multiprofessional CoPs. While knowledge circulated relatively easily within these CoPs, barriers to knowledge sharing emerged at the boundary separating them from other groups existing in the same primary care setting. The strongest boundaries, however, lay between individual general practices, with inter-organisational knowledge sharing and collaboration between them remaining unequally developed across different areas due to historical factors, competition and strong organisational identification. Manipulated emergence of multi-organisational CoPs in the context of primary care may thus be problematic. Conclusions In cases when manipulated emergence of new CoPs is problematic, boundary issues could be addressed by adopting a developmental perspective on CoPs, which provides an alternative to the analytical and instrumental perspectives previously described in the CoP literature. This perspective implies a pragmatic, situational approach to mapping existing CoPs and their characteristics and potentially modifying them in the process of service improvement through the combination of internal and external facilitation. PMID:23068016
What Are the Professional, Political, and Ethical Challenges of Co-Creating Health Care Systems?
Singh, Guddi; Owens, John; Cribb, Alan
2017-11-01
Co-creation is seen by many as a means of meeting the multiple challenges facing contemporary health care systems by involving institutions, professionals, patients, and stakeholders in new roles, relationships, and collaborative practices. While co-creation has the potential to positively transform health care systems, it generates a number of political and ethical challenges that should not be overlooked. We suggest that those involved in envisioning and implementing co-creation initiatives pay close attention to significant questions of equity, power, and justice and to the fundamental challenge of securing a common vision of the aims of and agendas for health care systems. While such initiatives present significant opportunities for improvement, they need to be viewed in light of their accompanying professional, political, and ethical challenges. © 2017 American Medical Association. All Rights Reserved.
Use of bacterial co-cultures for the efficient production of chemicals.
Jones, J Andrew; Wang, Xin
2017-12-02
The microbial production of chemicals has traditionally relied on a single engineered microbe to enable the complete bioconversion of substrate to final product. Recently, a growing fraction of research has transitioned towards employing a modular co-culture engineering strategy using multiple microbes growing together to facilitate a divide-and-conquer approach for chemical biosynthesis. Here, we review key success stories that leverage the unique advantages of co-culture engineering, while also addressing the critical concerns that will limit the wide-spread implementation of this technology. Future studies that address the need to monitor and control the population dynamics of each strain module, while maintaining robust flux routes towards a wide range of desired products will lead the efforts to realize the true potential of co-culture engineering. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wu, Felicia; Munkvold, Gary P
2008-06-11
The rapidly expanding U.S. ethanol industry is generating a growing supply of co-products, mostly in the form of dried distillers' grain and solubles (DDGS) or wet distillers' grains (WDG). In the United States, 90% of the co-products of maize-based ethanol are fed to livestock. An unintended consequence is that animals are likely to be fed higher levels of mycotoxins, which are concentrated up to three times in DDGS compared to grain. The model developed in this study estimates current losses to the swine industry from weight gain reduction due to fumonisins in added DDGS at $9 million ($2-18 million) annually. If there is complete market penetration of DDGS in swine feed with 20% DDGS inclusion in swine feed and fumonisins are not controlled, losses may increase to $147 million ($29-293 million) annually. These values represent only those losses attributable to one mycotoxin on one adverse outcome on one species. The total loss due to mycotoxins in DDGS could be significantly higher due to additive or multiplicative effects of multiple mycotoxins on animal health. If mycotoxin surveillance is implemented by ethanol producers, losses are shifted among multiple stakeholders. Solutions to this problem include methods to reduce mycotoxin contamination in both pre- and postharvest maize.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Kevin C.
The work summarized in this report is the first step towards a project that will re-train and create jobs for personnel in the coal industry and continue regional economic development to benefit regions impacted by previous downturns. The larger project is aimed at capturing ~300 tons/day (272 metric tonnes/day) CO 2 at a 90% capture rate from existing coal- fired boilers at the Abbott Power Plant on the campus of University of Illinois (UI). It will employ the Linde-BASF novel amine-based advanced CO 2 capture technology, which has already shown the potential to be cost-effective, energy efficient and compact atmore » the 0.5-1.5 MWe pilot scales. The overall objective of the project is to design and install a scaled-up system of nominal 15 MWe size, integrate it with the Abbott Power Plant flue gas, steam and other utility systems, and demonstrate the viability of continuous operation under realistic conditions with high efficiency and capacity. The project will also begin to build a workforce that understands how to operate and maintain the capture plants by including students from regional community colleges and universities in the operation and evaluation of the capture system. This project will also lay the groundwork for follow-on projects that pilot utilization of the captured CO 2 from coal-fired power plants. The net impact will be to demonstrate a replicable means to (1) use a standardized procedure to evaluate power plants for their ability to be retrofitted with a pilot capture unit; (2) design and construct reliable capture systems based on the Linde-BASF technology; (3) operate and maintain these systems; (4) implement training programs with local community colleges and universities to establish a workforce to operate and maintain the systems; and (5) prepare to evaluate at the large pilot scale level various methods to utilize the resulting captured CO 2. Towards the larger project goal, the UI-led team, together with Linde, has completed a preliminary design for the carbon capture pilot plant with basic engineering and cost estimates, established permitting needs, identified approaches to address Environmental, Health, and Safety concerns related to pilot plant installation and operation, developed approaches for long-term use of the captured carbon, and established strategies for workforce development and job creation that will re-train coal operators to operate carbon capture plants. This report describes Phase I accomplishments and demonstrates that the project team is well-prepared for full implementation of Phase 2, to design, build, and operate the carbon capture pilot plant.« less
Brine production strategy modeling for active and integrated management of water resources in CCS
NASA Astrophysics Data System (ADS)
Court, B.; Celia, M. A.; Nordbotten, J. M.; Buscheck, T. A.; Elliot, T. J.; Bandilla, K.; Dobossy, M.
2010-12-01
Our society is at present highly dependent on coal, which will continue to play a major role in baseload electricity production in the coming decades. Most projected climate change mitigation strategies require CO2 Capture and Sequestration (CCS) as a vital element to stabilize CO2 atmospheric emissions. In these strategies, CCS will have to expand in the next two decades by several orders of magnitude compared to current worldwide implementation. At present the interactions among freshwater extraction, CO2 injection, and brine management are being considered too narrowly across CCS operations, and in the case of freshwater almost completely overlooked. Following the authors’ recently published overview of these challenges, an active and integrated management of water resources throughout CCS operations was proposed to avoid overlooking critical challenges that may become major obstacles to CCS implementation. Water resources management is vital for several reasons including that a coal-fired power plant retrofitted for CCS requires twice as much cooling water as the original plant. However this increased demand may be accommodated by brine extraction and treatment, which would concurrently function as large-scale pressure management and a potential source of freshwater. Synergistic advantages of such proactive integration that were identified led the authors to concluded that: Active management of CCS operations through an integrated approach -including brine production, treatment, use for cooling, and partial reinjection- can address challenges simultaneously with several synergistic advantages; and, that freshwater and brine must be linked to CO2 and pressure as key decision making parameters throughout CCS operations while recognizing scalability and potential pore space competition challenges. This work presents a detailed modeling investigation of a potential integration opportunity resulting from brine production. Technical results will focus solely on the conjunctive use of saline aquifers for CO2 sequestration and water supply for power plants. The impact of CO2 injection-brine withdrawal coupling on (i) the CO2 injection plume, (ii) the pressure field, and (iii) CO2 and brine leakage risk will be quantified using a range of simulation codes from Schlumberger’s full numerical ECLIPSE model to a simplified analytical model, in an effort to complement useful work initiated at Lawrence Livermore National Laboratory. In particular the impact of different relative permeability and capillary pressure curves on these three components will be presented and put in context of current modeling risk analysis approach in the CCS scientific community.
Framework for Development and Distribution of Hardware Acceleration
NASA Astrophysics Data System (ADS)
Thomas, David B.; Luk, Wayne W.
2002-07-01
This paper describes IGOL, a framework for developing reconfigurable data processing applications. While IGOL was originally designed to target imaging and graphics systems, its structure is sufficiently general to support a broad range of applications. IGOL adopts a four-layer architecture: application layer, operation layer, appliance layer and configuration layer. This architecture is intended to separate and co-ordinate both the development and execution of hardware and software components. Hardware developers can use IGOL as an instance testbed for verification and benchmarking, as well as for distribution. Software application developers can use IGOL to discover hardware accelerated data processors, and to access them in a transparent, non-hardware specific manner. IGOL provides extensive support for the RC1000-PP board via the Handel-C language, and a wide selection of image processing filters have been developed. IGOL also supplies plug-ins to enable such filters to be incorporated in popular applications such as Premiere, Winamp, VirtualDub and DirectShow. Moreover, IGOL allows the automatic use of multiple cards to accelerate an application, demonstrated using DirectShow. To enable transparent acceleration without sacrificing performance, a three-tiered COM (Component Object Model) API has been designed and implemented. This API provides a well-defined and extensible interface which facilitates the development of hardware data processors that can accelerate multiple applications.
Zhao, Xiangshan; Gan, Lixia; Pan, Haiyun; Kan, Donghui; Majeski, Michael; Adam, Stephen A; Unterman, Terry G
2004-01-01
FOXO1, a Forkhead transcription factor, is an important target of insulin and growth factor action. Phosphorylation of Thr-24, Ser-256 and Ser-319 promotes nuclear exclusion of FOXO1, yet the mechanisms regulating nuclear/cytoplasmic shuttling of FOXO1 are poorly understood. Previous studies have identified an NLS (nuclear localization signal) in the C-terminal basic region of the DBD (DNA-binding domain), and a leucine-rich, leptomycin-B sensitive NES (nuclear export signal) located further downstream. Here, we find that other elements in the DBD also contribute to nuclear localization, and that multiple mechanisms contribute to nuclear exclusion of FOXO1. Phosphorylation of Ser-319 and a cluster of nearby residues (Ser-322, Ser-325 and Ser-329) functions co-operatively with the nearby NES to promote nuclear exclusion. The N-terminal region of FOXO1 (amino acids 1-149) also is sufficient to promote nuclear exclusion, and does so through multiple mechanisms. Amino acids 1-50 are sufficient to promote nuclear exclusion of green fluorescent protein fusion proteins, and the phosphorylation of Thr-24 is required for this effect. A leucine-rich, leptomycin B-sensitive export signal is also present nearby. Phosphorylated FOXO1 binds 14-3-3 proteins, and co-precipitation studies with tagged proteins indicate that 14-3-3 binding involves co-operative interactions with both Thr-24 and Ser-256. Ser-256 is located in the C-terminal region of the DBD, where 14-3-3 proteins may interfere both with DNA-binding and with nuclear-localization functions. Together, these studies demonstrate that multiple elements contribute to nuclear/cytoplasmic shuttling of FOXO1, and that phosphorylation and 14-3-3 binding regulate the cellular distribution and function of FOXO1 through multiple mechanisms. The presence of these redundant mechanisms supports the concept that the regulation of FOXO1 function plays a critical role in insulin and growth factor action. PMID:14664696
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, A.
1996-12-31
For globally sustainable development to be achieved, three concerns are central: productive economic growth, social justice and ecological sustainability. Development co-operation supports the realisation of these three goals in partner countries by helping to alleviate poverty, promote economic growth through private-sector development and protect vital natural resources. The aim of globally sustainable development can only be achieved if industrial countries too implement necessary reforms and structural adjustments at every level. Co-operation efforts with partners must therefore be complemented by coherent policies at home. This is a matter of credibility, but also of developmental far-sightedness. Internal reforms in the industrial countriesmore » secure financial leeway for their providing foreign assistance in the longer term. Environmental and resource protection as a focal point of Germany`s development co-operation with the PRC aims to preserve vital natural resources, shape economic development in their partner countries in an ecologically sound manner and put China in a position to participate in global endeavours to protect the environment. Climate protection measures figure prominently in this area. This is justified given China`s share of global CO{sub 2} emissions and the potential for energy-saving measures and measures to increase power intensity. This potential is derived primarily from the possibility of using energy-efficient technologies, increasing the relatively low energy prices and making use of renewable sources of energy.« less
Multi-pixel high-resolution three-dimensional imaging radar
NASA Technical Reports Server (NTRS)
Cooper, Ken B. (Inventor); Dengler, Robert J. (Inventor); Siegel, Peter H. (Inventor); Chattopadhyay, Goutam (Inventor); Ward, John S. (Inventor); Juan, Nuria Llombart (Inventor); Bryllert, Tomas E. (Inventor); Mehdi, Imran (Inventor); Tarsala, Jan A. (Inventor)
2012-01-01
A three-dimensional imaging radar operating at high frequency e.g., 670 GHz radar using low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform, is disclosed that operates with a multiplexed beam to obtain range information simultaneously on multiple pixels of a target. A source transmit beam may be divided by a hybrid coupler into multiple transmit beams multiplexed together and directed to be reflected off a target and return as a single receive beam which is demultiplexed and processed to reveal range information of separate pixels of the target associated with each transmit beam simultaneously. The multiple transmit beams may be developed with appropriate optics to be temporally and spatially differentiated before being directed to the target. Temporal differentiation corresponds to a different intermediate frequencies separating the range information of the multiple pixels. Collinear transmit beams having differentiated polarizations may also be implemented.
Castillo, Encarnación; López-Ramos, Juan A.; Morales, Diego P.
2018-01-01
Security is a critical challenge for the effective expansion of all new emerging applications in the Internet of Things paradigm. Therefore, it is necessary to define and implement different mechanisms for guaranteeing security and privacy of data interchanged within the multiple wireless sensor networks being part of the Internet of Things. However, in this context, low power and low area are required, limiting the resources available for security and thus hindering the implementation of adequate security protocols. Group keys can save resources and communications bandwidth, but should be combined with public key cryptography to be really secure. In this paper, a compact and unified co-processor for enabling Elliptic Curve Cryptography along to Advanced Encryption Standard with low area requirements and Group-Key support is presented. The designed co-processor allows securing wireless sensor networks with independence of the communications protocols used. With an area occupancy of only 2101 LUTs over Spartan 6 devices from Xilinx, it requires 15% less area while achieving near 490% better performance when compared to cryptoprocessors with similar features in the literature. PMID:29337921
Parrilla, Luis; Castillo, Encarnación; López-Ramos, Juan A; Álvarez-Bermejo, José A; García, Antonio; Morales, Diego P
2018-01-16
Security is a critical challenge for the effective expansion of all new emerging applications in the Internet of Things paradigm. Therefore, it is necessary to define and implement different mechanisms for guaranteeing security and privacy of data interchanged within the multiple wireless sensor networks being part of the Internet of Things. However, in this context, low power and low area are required, limiting the resources available for security and thus hindering the implementation of adequate security protocols. Group keys can save resources and communications bandwidth, but should be combined with public key cryptography to be really secure. In this paper, a compact and unified co-processor for enabling Elliptic Curve Cryptography along to Advanced Encryption Standard with low area requirements and Group-Key support is presented. The designed co-processor allows securing wireless sensor networks with independence of the communications protocols used. With an area occupancy of only 2101 LUTs over Spartan 6 devices from Xilinx, it requires 15% less area while achieving near 490% better performance when compared to cryptoprocessors with similar features in the literature.
Operational research as implementation science: definitions, challenges and research priorities.
Monks, Thomas
2016-06-06
Operational research (OR) is the discipline of using models, either quantitative or qualitative, to aid decision-making in complex implementation problems. The methods of OR have been used in healthcare since the 1950s in diverse areas such as emergency medicine and the interface between acute and community care; hospital performance; scheduling and management of patient home visits; scheduling of patient appointments; and many other complex implementation problems of an operational or logistical nature. To date, there has been limited debate about the role that operational research should take within implementation science. I detail three such roles for OR all grounded in upfront system thinking: structuring implementation problems, prospective evaluation of improvement interventions, and strategic reconfiguration. Case studies from mental health, emergency medicine, and stroke care are used to illustrate each role. I then describe the challenges for applied OR within implementation science at the organisational, interventional, and disciplinary levels. Two key challenges include the difficulty faced in achieving a position of mutual understanding between implementation scientists and research users and a stark lack of evaluation of OR interventions. To address these challenges, I propose a research agenda to evaluate applied OR through the lens of implementation science, the liberation of OR from the specialist research and consultancy environment, and co-design of models with service users. Operational research is a mature discipline that has developed a significant volume of methodology to improve health services. OR offers implementation scientists the opportunity to do more upfront system thinking before committing resources or taking risks. OR has three roles within implementation science: structuring an implementation problem, prospective evaluation of implementation problems, and a tool for strategic reconfiguration of health services. Challenges facing OR as implementation science include limited evidence and evaluation of impact, limited service user involvement, a lack of managerial awareness, effective communication between research users and OR modellers, and availability of healthcare data. To progress the science, a focus is needed in three key areas: evaluation of OR interventions, embedding the knowledge of OR in health services, and educating OR modellers about the aims and benefits of service user involvement.
Wide bandwidth phase-locked loop circuit
NASA Technical Reports Server (NTRS)
Koudelka, Robert David (Inventor)
2005-01-01
A PLL circuit uses a multiple frequency range PLL in order to phase lock input signals having a wide range of frequencies. The PLL includes a VCO capable of operating in multiple different frequency ranges and a divider bank independently configurable to divide the output of the VCO. A frequency detector detects a frequency of the input signal and a frequency selector selects an appropriate frequency range for the PLL. The frequency selector automatically switches the PLL to a different frequency range as needed in response to a change in the input signal frequency. Frequency range hysteresis is implemented to avoid operating the PLL near a frequency range boundary.
Teaching Multiplication with Regrouping to Students with Learning Disabilities
ERIC Educational Resources Information Center
Flores, Margaret M.; Hinton, Vanessa M.; Schweck, Kelly B.
2014-01-01
The Common Core Standards require demonstration of conceptual knowledge of numbers, operations, and relations between mathematical concepts. Supplemental instruction should explicitly guide students with specific learning disabilities (SLD) in these skills. In this article, we illustrate implementation of the concrete-representational-abstract…
A Lunar Surface Operations Simulator
NASA Technical Reports Server (NTRS)
Nayar, H.; Balaram, J.; Cameron, J.; Jain, A.; Lim, C.; Mukherjee, R.; Peters, S.; Pomerantz, M.; Reder, L.; Shakkottai, P.;
2008-01-01
The Lunar Surface Operations Simulator (LSOS) is being developed to support planning and design of space missions to return astronauts to the moon. Vehicles, habitats, dynamic and physical processes and related environment systems are modeled and simulated in LSOS to assist in the visualization and design optimization of systems for lunar surface operations. A parametric analysis tool and a data browser were also implemented to provide an intuitive interface to run multiple simulations and review their results. The simulator and parametric analysis capability are described in this paper.
Design and Implementation of the PALM-3000 Real-Time Control System
NASA Technical Reports Server (NTRS)
Truong, Tuan N.; Bouchez, Antonin H.; Burruss, Rick S.; Dekany, Richard G.; Guiwits, Stephen R.; Roberts, Jennifer E.; Shelton, Jean C.; Troy, Mitchell
2012-01-01
This paper reflects, from a computational perspective, on the experience gathered in designing and implementing realtime control of the PALM-3000 adaptive optics system currently in operation at the Palomar Observatory. We review the algorithms that serve as functional requirements driving the architecture developed, and describe key design issues and solutions that contributed to the system's low compute-latency. Additionally, we describe an implementation of dense matrix-vector-multiplication for wavefront reconstruction that exceeds 95% of the maximum sustained achievable bandwidth on NVIDIA Geforce 8800GTX GPU.
Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM
2009-09-01
A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pluses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be for example an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors so that the counter can be configured as a hand-held device for counting neutron events.
CephFS: a new generation storage platform for Australian high energy physics
NASA Astrophysics Data System (ADS)
Borges, G.; Crosby, S.; Boland, L.
2017-10-01
This paper presents an implementation of a Ceph file system (CephFS) use case at the ARC Center of Excellence for Particle Physics at the Terascale (CoEPP). CoEPP’s CephFS provides a posix-like file system on top of a Ceph RADOS object store, deployed on commodity hardware and without single points of failure. By delivering a unique file system namespace at different CoEPP centres spread across Australia, local HEP researchers can store, process and share data independently of their geographical locations. CephFS is also used as the back-end file system for a WLCG ATLAS user area at the Australian Tier-2. Dedicated SRM and XROOTD services, deployed on top of CoEPP’s CephFS, integrates it in ATLAS data distributed operations. This setup, while allowing Australian HEP researchers to trigger data movement via ATLAS grid tools, also enables local posix-like read access providing greater control to scientists of their data flows. In this article we will present details on CoEPP’s Ceph/CephFS implementation and report performance I/O metrics collected during the testing/tuning phase of the system.
Effective implementation of wavelet Galerkin method
NASA Astrophysics Data System (ADS)
Finěk, Václav; Šimunková, Martina
2012-11-01
It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we will introduce our implementation of wavelet Galerkin method for Poisson equation -Δu = f on hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identified nonzero elements of stiffness matrix corresponding to the above problem and we perform matrix-vector multiplication only with these nonzero elements.
A modified homogeneous relaxation model for CO2 two-phase flow in vapour ejector
NASA Astrophysics Data System (ADS)
Haida, M.; Palacz, M.; Smolka, J.; Nowak, A. J.; Hafner, A.; Banasiak, K.
2016-09-01
In this study, the homogenous relaxation model (HRM) for CO2 flow in a two-phase ejector was modified in order to increase the accuracy of the numerical simulations The two- phase flow model was implemented on the effective computational tool called ejectorPL for fully automated and systematic computations of various ejector shapes and operating conditions. The modification of the HRM was performed by a change of the relaxation time and the constants included in the relaxation time equation based on the experimental result under the operating conditions typical for the supermarket refrigeration system. The modified HRM was compared to the HEM results, which were performed based on the comparison of motive nozzle and suction nozzle mass flow rates.
NASA Astrophysics Data System (ADS)
Tatomir, Alexandru Bogdan A. C.; Flemisch, Bernd; Class, Holger; Helmig, Rainer; Sauter, Martin
2017-04-01
Geological storage of CO2 represents one viable solution to reduce greenhouse gas emission in the atmosphere. Potential leakage of CO2 storage can occur through networks of interconnected fractures. The geometrical complexity of these networks is often very high involving fractures occurring at various scales and having hierarchical structures. Such multiphase flow systems are usually hard to solve with a discrete fracture modelling (DFM) approach. Therefore, continuum fracture models assuming average properties are usually preferred. The multiple interacting continua (MINC) model is an extension of the classic double porosity model (Warren and Root, 1963) which accounts for the non-linear behaviour of the matrix-fracture interactions. For CO2 storage applications the transient representation of the inter-porosity two phase flow plays an important role. This study tests the accuracy and computational efficiency of the MINC method complemented with the multiple sub-region (MSR) upscaling procedure versus the DFM. The two phase flow MINC simulator is implemented in the free-open source numerical toolbox DuMux (www.dumux.org). The MSR (Gong et al., 2009) determines the inter-porosity terms by solving simplified local single-phase flow problems. The DFM is considered as the reference solution. The numerical examples consider a quasi-1D reservoir with a quadratic fracture system , a five-spot radial symmetric reservoir, and a completely random generated fracture system. Keywords: MINC, upscaling, two-phase flow, fractured porous media, discrete fracture model, continuum fracture model
Value Encounters - Modeling and Analyzing Co-creation of Value
NASA Astrophysics Data System (ADS)
Weigand, Hans
Recent marketing and management literature has introduced the concept of co-creation of value. Current value modeling approaches such as e3-value focus on the exchange of value rather than co-creation. In this paper, an extension to e3-value is proposed in the form of a “value encounter”. Value encounters are defined as interaction spaces where a group of actors meet and derive value by each one bringing in some of its own resources. They can be analyzed from multiple strategic perspectives, including knowledge management, social network management and operational management. Value encounter modeling can be instrumental in the context of service analysis and design.
NASA Astrophysics Data System (ADS)
Goltz, J. D.
2016-12-01
Although variants of both earthquake early warning and short-term operational earthquake forecasting systems have been implemented or are now being implemented in some regions and nations, they have been slow to gain acceptance within the disciplines that produced them as well as among those for whom they were intended to assist. To accelerate the development and implementation of these technologies will require the cooperation and collaboration of multiple disciplines, some inside and others outside of academia. Seismologists, social scientists, emergency managers, elected officials and key opinion leaders from the media and public must be the participants in this process. Representatives of these groups come from both inside and outside of academia and represent very different organizational cultures, backgrounds and expectations for these systems, sometimes leading to serious disagreements and impediments to further development and implementation. This presentation will focus on examples of the emergence of earthquake early warning and operational earthquake forecasting systems in California, Japan and other regions and document the challenges confronted in the ongoing effort to improve seismic safety.
NASA Technical Reports Server (NTRS)
Elshorbany, Yasin F.; Duncan, Bryan N.; Strode, Sarah A.; Wang, James S.; Kouatchou, Jules
2016-01-01
We present the Efficient CH4-CO-OH (ECCOH) chemistry module that allows for the simulation of the methane, carbon monoxide, and hydroxyl radical (CH4-CO- OH) system, within a chemistry climate model, carbon cycle model, or Earth system model. The computational efficiency of the module allows many multi-decadal sensitivity simulations of the CH4-CO-OH system, which primarily determines the global atmospheric oxidizing capacity. This capability is important for capturing the nonlinear feedbacks of the CH4-CO-OH system and understanding the perturbations to methane, CO, and OH, and the concomitant impacts on climate. We implemented the ECCOH chemistry module in the NASA GEOS-5 atmospheric global circulation model (AGCM), performed multiple sensitivity simulations of the CH4-CO-OH system over 2 decades, and evaluated the model output with surface and satellite data sets of methane and CO. The favorable comparison of output from the ECCOH chemistry module (as configured in the GEOS- 5 AGCM) with observations demonstrates the fidelity of the module for use in scientific research.
Using CO2 Prophet to estimate recovery factors for carbon dioxide enhanced oil recovery
Attanasi, Emil D.
2017-07-17
IntroductionThe Oil and Gas Journal’s enhanced oil recovery (EOR) survey for 2014 (Koottungal, 2014) showed that gas injection is the most frequently applied method of EOR in the United States and that carbon dioxide (CO2 ) is the most commonly used injection fluid for miscible operations. The CO2-EOR process typically follows primary and secondary (waterflood) phases of oil reservoir development. The common objective of implementing a CO2-EOR program is to produce oil that remains after the economic limit of waterflood recovery is reached. Under conditions of miscibility or multicontact miscibility, the injected CO2 partitions between the gas and liquid CO2 phases, swells the oil, and reduces the viscosity of the residual oil so that the lighter fractions of the oil vaporize and mix with the CO2 gas phase (Teletzke and others, 2005). Miscibility occurs when the reservoir pressure is at least at the minimum miscibility pressure (MMP). The MMP depends, in turn, on oil composition, impurities of the CO2 injection stream, and reservoir temperature. At pressures below the MMP, component partitioning, oil swelling, and viscosity reduction occur, but the efficiency is increasingly reduced as the pressure falls farther below the MMP. CO2-EOR processes are applied at the reservoir level, where a reservoir is defined as an underground formation containing an individual and separate pool of producible hydrocarbons that is confined by impermeable rock or water barriers and is characterized by a single natural pressure system. A field may consist of a single reservoir or multiple reservoirs that are not in communication but which may be associated with or related to a single structural or stratigraphic feature (U.S. Energy Information Administration [EIA], 2000). The purpose of modeling the CO2-EOR process is discussed along with the potential CO2-EOR predictive models. The data demands of models and the scope of the assessments require tradeoffs between reservoir-specific data that can be assembled and simplifying assumptions that allow assignment of default values for some reservoir parameters. These issues are discussed in the context of the CO2 Prophet EOR model, and their resolution is demonstrated with the computation of recovery-factor estimates for CO2-EOR of 143 reservoirs in the Powder River Basin Province in southeastern Montana and northeastern Wyoming.
FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.
Zierke, Stephanie; Bakos, Jason D
2010-04-12
Likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
ERIC Educational Resources Information Center
Meritt, Julia; Gibson, David; Christensen, Rhonda; Knezek, Gerald
2013-01-01
Two alternative technologies forming the basis of computer-mediated teacher preparation systems are compared and contrasted regarding implementation, operation, and assessment considerations. The role-playing system in Second Life is shown to have the unique characteristic of developing a co-constructed pedagogical identity, while the flight…
13 CFR 134.618 - How are awards paid?
Code of Federal Regulations, 2010 CFR
2010-01-01
... CASES BEFORE THE OFFICE OF HEARINGS AND APPEALS Implementation of the Equal Access to Justice Act § 134... of Financial Operations, SBA, P.O. Box 205, Denver, CO 80201-0205. SBA will pay you the amount awarded within 60 days of receipt of your request unless it is notified that you or another party has...
Pollution prevention/waste minimization is a win-win-win situation for government, industry, and the public, which offers more than just protection of the environment for all. ndustry gains from reduced capital and operating costs, reduced liabilities,cleaner and safer working co...
Co-Operation is Not Enough: Teacher Educators as Curriculum Developers in Times of Change
ERIC Educational Resources Information Center
Hoydalsvik, Torhild Erika Lillemark
2017-01-01
The purpose of this exploratory two site case study is to examine how teacher educators, student teachers and programme leaders experience their 'curriculum developer role' in times of change, against the background of a new national guideline for preschool teacher education being implemented in Norway. The multidisciplinary team approach…
Low-Power, Chip-Scale, Carbon Dioxide Gas Sensors for Spacesuit Monitoring
NASA Technical Reports Server (NTRS)
Rani, Asha; Shi, Chen; Thomson, Brian; Debnath, Ratan; Wen, Boamei; Motayed, Abhishek; Chullen, Cinda
2018-01-01
N5 Sensors, Inc. through a Small Business Technology Transfer (STTR) contract award has been developing ultra-small, low-power carbon dioxide (CO2) gas sensors, suited for monitoring CO2 levels inside NASA spacesuits. Due to the unique environmental conditions within the spacesuits, such as high humidity, large temperature swings, and operating pressure swings, measurement of key gases relevant to astronaut's safety and health such as(CO2), is quite challenging. Conventional non-dispersive infrared absorption based CO2 sensors present challenges inside the spacesuits due to size, weight, and power constraints, along with the ability to sense CO2 in a high humidity environment. Unique chip-scale, nanoengineered chemiresistive gas-sensing architecture has been developed for this application, which can be operated in a typical space-suite environmental conditions. Unique design combining the selective adsorption properties of the nanophotocatalytic clusters of metal-oxides and metals, provides selective detection of CO2 in high relative humidity conditions. All electronic design provides a compact and low-power solution, which can be implemented for multipoint detection of CO2 inside the spacesuits. This paper will describe the sensor architecture, development of new photocatalytic material for better sensor response, and advanced structure for better sensitivity and shorter response times.
Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data
2009-05-01
operational warning and reporting (information) systems that combine automated data acquisition, analysis , source reconstruction, display and distribution of...report and to incorporate this operational ca- pability into the integrative multiscale urban modeling system implemented in the com- putational...Journal of Fluid Mechanics, 180, 529–556. [27] Flesch, T., Wilson, J. D., and Yee, E. (1995), Backward- time Lagrangian stochastic dispersion models
NASA Astrophysics Data System (ADS)
Cohen, K. K.; Klara, S. M.; Srivastava, R. D.
2004-12-01
The U.S. Department of Energy's (U.S. DOE's) Carbon Sequestration Program is developing state-of-the-science technologies for measurement, mitigation, and verification (MM&V) in field operations of geologic sequestration. MM&V of geologic carbon sequestration operations will play an integral role in the pre-injection, injection, and post-injection phases of carbon capture and storage projects to reduce anthropogenic greenhouse gas emissions. Effective MM&V is critical to the success of CO2 storage projects and will be used by operators, regulators, and stakeholders to ensure safe and permanent storage of CO2. In the U.S. DOE's Program, Carbon sequestration MM&V has numerous instrumental roles: Measurement of a site's characteristics and capability for sequestration; Monitoring of the site to ensure the storage integrity; Verification that the CO2 is safely stored; and Protection of ecosystems. Other drivers for MM&V technology development include cost-effectiveness, measurement precision, and frequency of measurements required. As sequestration operations are implemented in the future, it is anticipated that measurements over long time periods and at different scales will be required; this will present a significant challenge. MM&V sequestration technologies generally utilize one of the following approaches: below ground measurements; surface/near-surface measurements; aerial and satellite imagery; and modeling/simulations. Advanced subsurface geophysical technologies will play a primary role for MM&V. It is likely that successful MM&V programs will incorporate multiple technologies including but not limited to: reservoir modeling and simulations; geophysical techniques (a wide variety of seismic methods, microgravity, electrical, and electromagnetic techniques); subsurface fluid movement monitoring methods such as injection of tracers, borehole and wellhead pressure sensors, and tiltmeters; surface/near surface methods such as soil gas monitoring and infrared sensors and; aerial and satellite imagery. This abstract will describe results, similarities, and contrasts for funded studies from the U.S. DOE's Carbon Sequestration Program including examples from the Sleipner North Sea Project, the Canadian Weyburn Field/Dakota Gasification Plant Project, the Frio Formation Texas Project, and Yolo County Bioreactor Landfill Project. The abstract will also address the following: How are the terms ``measurement,'' ``mitigation''and ``verification'' defined in the Program? What is the U.S. DOE's Carbon Sequestration Program Roadmap and what are the Roadmap goals for MM&V? What is the current status of MM&V technologies?
Single-photon three-qubit quantum logic using spatial light modulators.
Kagalwala, Kumel H; Di Giuseppe, Giovanni; Abouraddy, Ayman F; Saleh, Bahaa E A
2017-09-29
The information-carrying capacity of a single photon can be vastly expanded by exploiting its multiple degrees of freedom: spatial, temporal, and polarization. Although multiple qubits can be encoded per photon, to date only two-qubit single-photon quantum operations have been realized. Here, we report an experimental demonstration of three-qubit single-photon, linear, deterministic quantum gates that exploit photon polarization and the two-dimensional spatial-parity-symmetry of the transverse single-photon field. These gates are implemented using a polarization-sensitive spatial light modulator that provides a robust, non-interferometric, versatile platform for implementing controlled unitary gates. Polarization here represents the control qubit for either separable or entangling unitary operations on the two spatial-parity target qubits. Such gates help generate maximally entangled three-qubit Greenberger-Horne-Zeilinger and W states, which is confirmed by tomographical reconstruction of single-photon density matrices. This strategy provides access to a wide range of three-qubit states and operations for use in few-qubit quantum information processing protocols.Photons are essential for quantum information processing, but to date only two-qubit single-photon operations have been realized. Here the authors demonstrate experimentally a three-qubit single-photon linear deterministic quantum gate by exploiting polarization along with spatial-parity symmetry.
Reducing Risk in CO2 Sequestration: A Framework for Integrated Monitoring of Basin Scale Injection
NASA Astrophysics Data System (ADS)
Seto, C. J.; Haidari, A. S.; McRae, G. J.
2009-12-01
Geological sequestration of CO2 is an option for stabilization of atmospheric CO2 concentrations. Technical ability to safely store CO2 in the subsurface has been demonstrated through pilot projects and a long history of enhanced oil recovery and acid gas disposal operations. To address climate change, current injection operations must be scaled up by a factor of 100, raising issues of safety and security. Monitoring and verification is an essential component in ensuring safe operations and managing risk. Monitoring provides assurance that CO2 is securely stored in the subsurface, and the mechanisms governing transport and storage are well understood. It also provides an early warning mechanism for identification of anomalies in performance, and a means for intervention and remediation through the ability to locate the CO2. Through theoretical studies, bench scale experiments and pilot tests, a number of technologies have demonstrated their ability to monitor CO2 in the surface and subsurface. Because the focus of these studies has been to demonstrate feasibility, individual techniques have not been integrated to provide a more robust method for monitoring. Considering the large volumes required for injection, size of the potential footprint, length of time a project must be monitored and uncertainty, operational considerations of cost and risk must balance safety and security. Integration of multiple monitoring techniques will reduce uncertainty in monitoring injected CO2, thereby reducing risk. We present a framework for risk management of large scale injection through model based monitoring network design. This framework is applied to monitoring CO2 in a synthetic reservoir where there is uncertainty in the underlying permeability field controlling fluid migration. Deformation and seismic data are used to track plume migration. A modified Ensemble Kalman filter approach is used to estimate flow properties by jointly assimilating flow and geomechanical observations. Issues of risk, cost and uncertainty are considered.
Vermeul, Vince R.; Strickland, Chris E.; Thorne, Paul D.; ...
2014-12-31
The FutureGen 2.0 Project will design and build a first-of-its-kind, near-zero emissions coal-fueled power plant with carbon capture and storage (CCS). To assess storage site performance and meet the regulatory requirements of the Class VI Underground Injection Control (UIC) Program for CO2 Geologic Sequestration, the FutureGen 2.0 project will implement a suite of monitoring technologies designed to 1) evaluate CO2 mass balance and 2) detect any unforeseen loss in CO2 containment. The monitoring program will include direct monitoring of the injection stream and reservoir, and early-leak-detection monitoring directly above the primary confining zone. It will also implement an adaptive monitoringmore » strategy whereby monitoring results are continually evaluated and the monitoring network is modified as required, including the option to drill additional wells in out-years. Wells will be monitored for changes in CO2 concentration and formation pressure, and other geochemical/isotopic signatures that provide indication of CO2 or brine leakage. Indirect geophysical monitoring technologies that were selected for implementation include passive seismic, integrated surface deformation, time-lapse gravity, and pulsed neutron capture logging. Near-surface monitoring approaches that have been initiated include surficial aquifer and surface- water monitoring, soil-gas monitoring, atmospheric monitoring, and hyperspectral data acquisition for assessment of vegetation conditions. Initially, only the collection of baseline data sets is planned; the need for additional near- surface monitoring will be continually evaluated throughout the design and operational phases of the project, and selected approaches may be reinstituted if conditions warrant. Given the current conceptual understanding of the subsurface environment, early and appreciable impacts to near-surface environments are not expected.« less
Automatic Rejection Of Multimode Laser Pulses
NASA Technical Reports Server (NTRS)
Tratt, David M.; Menzies, Robert T.; Esproles, Carlos
1991-01-01
Characteristic modulation detected, enabling rejection of multimode signals. Monitoring circuit senses multiple longitudinal mode oscillation of transversely excited, atmospheric-pressure (TEA) CO2 laser. Facility developed for inclusion into coherent detection laser radar (LIDAR) system. However, circuit described of use in any experiment where desireable to record data only when laser operates in single longitudinal mode.
Child Welfare, Education, Inequality, and Social Policy in Comparative Perspective
ERIC Educational Resources Information Center
Fusarelli, Lance D.
2015-01-01
Using international data on child well-being and educational attainment, this article compares child well-being in the United States to member countries in the Organisation for Economic Co-operation and Development (OECD). Multiple measures of child well-being are analyzed, such as material well-being (including poverty, unemployment, and income…
Coal-Derived Warm Syngas Purification and CO 2 Capture-Assisted Methane Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dagle, Robert A.; King, David L.; Li, Xiaohong S.
2014-10-01
Gasifier-derived syngas from coal has many applications in the area of catalytic transformation to fuels and chemicals. Raw syngas must be treated to remove a number of impurities that would otherwise poison the synthesis catalysts. Inorganic impurities include alkali salts, chloride, sulfur compounds, heavy metals, ammonia, and various P, As, Sb, and Se- containing compounds. Systems comprising multiple sorbent and catalytic beds have been developed for the removal of impurities from gasified coal using a warm cleanup approach. This approach has the potential to be more economic than the currently available acid gas removal (AGR) approaches and improves upon currentlymore » available processes that do not provide the level of impurity removal that is required for catalytic synthesis application. Gasification also lends itself much more readily to the capture of CO 2, important in the regulation and control of greenhouse gas emissions. CO 2 capture material was developed and in this study was demonstrated to assist in methane production from the purified syngas. Simultaneous CO 2 sorption enhances the CO methanation reaction through relaxation of thermodynamic constraint, thus providing economic benefit rather than simply consisting of an add-on cost for carbon capture and release. Molten and pre-molten LiNaKCO 3 can promote MgO and MgO-based double salts to capture CO 2 with high cycling capacity. A stable cycling CO 2 capacity up to 13 mmol/g was demonstrated. This capture material was specifically developed in this study to operate in the same temperature range and therefore integrate effectively with warm gas cleanup and methane synthesis. By combining syngas methanation, water-gas-shift, and CO 2 sorption in a single reactor, single pass yield to methane of 99% was demonstrated at 10 bar and 330°C when using a 20 wt% Ni/MgAl 2O 4 catalyst and a molten-phase promoted MgO-based sorbent. Under model feed conditions both the sorbent and catalyst exhibited favorable stability after multiple test cycles. The cleanup for warm gas cleanup of inorganics was broken down into three major steps: chloride removal, sulfur removal, and the removal for a multitude of trace metal contaminants. Na 2CO 3 was found to optimally remove chlorides at an operating temperature of 450ºC. For sulfur removal two regenerable ZnO beds are used for bulk H 2S removal at 450ºC (<5 ppm S) and a non-regenerable ZnO bed for H 2S polishing at 300ºC (<40 ppb S). It was also found that sulfur from COS could be adsorbed (to levels below our detection limit of 40 ppb) in the presence of water that leads to no detectable slip of H 2S. Finally, a sorbent material comprising of Cu and Ni was found to be effective in removing trace metal impurities such as AsH 3 and PH 3 when operating at 300ºC. Proof-of-concept of the integrated cleanup process was demonstrated with gasifier-generated syngas produced at the Western Research Institute using Wyoming Decker Coal. When operating with a ~1 SLPM feed, multiple inorganic contaminant removal sorbents and a tar-reforming bed was able to remove the vast majority of contaminants from the raw syngas. A tar-reforming catalyst was employed due to the production of tars generated from the gasifier used in this particular study. It is envisioned that in a real application a commercial scale gasifier operating at a higher temperature would produce lesser amount of tar. Continuous operation of a poison-sensitive copper-based WGS catalyst located downstream from the cleanup steps resulted in successful demonstration.« less
Dealing with school violence: how hospitals met this new challenge to emergency preparedness.
Appelbaum, A
This article discusses how hospitals in Jonesboro, AR, and Denver, CO, met the challenge of dealing with school shooting rampages that resulted in multiple casualties and received widespread media coverage. Hospitals need to be well-prepared to implement their emergency disaster plans and handle the physical and emotional trauma of such incidents which, according to a well-known criminologist, may become more frequent.
ERIC Educational Resources Information Center
Constantine, Angelina; Rózowa, Paula; Szostkowski, Alaina; Ellis, Joshua; Roehrig, Gillian
2017-01-01
In the age of STEM education, teachers consistently struggle to understand the nature of technology and how to integrate it. This multiple-case study uses the TPACK framework to explore the beliefs and practices of three elementary science and engineering teachers from an urban school district with a recently implemented 1:1 iPad policy. All three…
ERIC Educational Resources Information Center
Milman, Natalie B.; Carlson-Bancroft, Angela; Vanden Boogart, Amy
2014-01-01
This mixed methods case study examined the implementation of a 1:1 iPad initiative in a suburban, co-educational, independent, preK-4th grade elementary school in the United States. This article focuses on how teachers used iPads to differentiate instruction and across multiple content areas. Findings show the processes by which teachers employed…
NASA Astrophysics Data System (ADS)
Dobson, B.; Pianosi, F.; Wagener, T.
2016-12-01
Extensive scientific literature exists on the study of how operation decisions in water resource systems can be made more effectively through the use of optimization methods. However, to the best of the authors' knowledge, there is little in the literature on the implementation of these optimization methods by practitioners. We have performed a survey among UK reservoir operators to assess the current state of method implementation in practice. We also ask questions to assess the potential for implementation of operation optimization. This will help academics to target industry in their current research, identify any misconceptions in industry about the area and open new branches of research for which there is an unsatisfied demand. The UK is a good case study because the regulatory framework is changing to impose "no build" solutions for supply issues, as well as planning across entire water resource systems rather than individual components. Additionally there is a high appetite for efficiency due to the water industry's privatization and most operators are part of companies that control multiple water resources, increasing the potential for cooperation and coordination.
Extent of telehealth use in rural and urban hospitals.
Ward, Marcia M; Ullrich, Fred; Mueller, Keith
2014-01-01
Key Findings. Data from 4,727 hospitals in the 2013 HIMSS Analytics database yielded these findings: (1) Two-thirds (66.0% of rural defined as nonmetropolitan and 68.0% of urban) had no telehealth services or were only in the process of implementing a telehealth application. One-third (34.0%rural and 32.0% urban) had at least one telehealth application currently in use. (2) Among hospitals with "live and operational" telehealth services, 61.4% indicated only a single department/program with an operational telehealth service, and 38.6% indicated two or more departments/programs with operational telehealth services. Rural hospitals were significantly less likely to have multiple services (35.2%) than were urban hospitals (42.1%) (3) Hospitals that were more likely to have implemented at least one telehealth service were academic medical centers, not-for-profit institutions, hospitals belonging to integrated delivery systems, and larger institutions (in terms of FTEs but not licensed beds). Rural and urban hospitals did not differ significantly in overall telehealth implementation rates. (4) Urban and rural hospitals did differ in the department where telehealth was implemented. Urban hospitals were more likely than rural hospitals to have operational telehealth implementations in cardiology/stroke/heart attack programs (7.4% vs. 6.2%), neurology (4.4% vs. 2.1%), and obstetrics/gynecology/NICU/pediatrics (3.8% vs. 2.5%). In contrast, rural hospitals were more likely than urban hospital to have operational telehealth implementations in radiology departments (17.7% vs. 13.9%) and in emergency/trauma care (8.8% vs. 6.3%).
Garmer, K; Dahlman, S; Sperling, L
1995-12-01
This study deals with the design, trials and evaluation of a co-education programme at the Volvo Uddevalla plant in Sweden. Involving operators, manufacturing engineers and managers, the programme served as a support for the creation of a participatory ergonomics process, intended for continuous use at the plant. It consisted of a basic ergonomics knowledge package, and a dialogue model defining the roles and relations of actors involved. As a practical part of the programme, trial development projects were also carried out by the participants. The main and long term objective of the project was to start the participants cooperating in a continuous change and development process on the shop-floor. The outcome of the co-education programme was evaluated immediately after the first two regular courses, and, as a longterm follow-up, after seven subsequent courses shortly after the closing of the Uddevalla plant. The co-education programme was shown to be successful. Later on, the expertize of both operators and manufacturing engineers became obvious to everyone at the plant, and the cooperation between operators and manufacturing engineers increased steadily. The main conclusion drawn was that the co-education programme is a good starting point for a process of participation and industrial change work. However, in order to get a permanent impact, the whole organization must nurse and nourish the further development, and implementation of the process.
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM T by using themore » compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.« less
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM T by using themore » compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.« less
Asikanius, Elina; Rufibach, Kaspar; Bahlo, Jasmin; Bieska, Gabriele; Burger, Hans Ulrich
2016-11-01
To optimize resources, randomized clinical trials with multiple arms can be an attractive option to simultaneously test various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three-arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil in patients with chronic lympocytic lymphoma and coexisting conditions is superior to chlorambucil alone based on a time-to-event endpoint. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives to run a three-arm clinical trial with a time-to-event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time it takes until the first analysis and thus potential approval of a new drug, number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Memristor-based cellular nonlinear/neural network: design, analysis, and applications.
Duan, Shukai; Hu, Xiaofang; Dong, Zhekang; Wang, Lidan; Mazumder, Pinaki
2015-06-01
Cellular nonlinear/neural network (CNN) has been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically predicted in the late seventies, but it garnered nascent research interest due to the recent much-acclaimed discovery of nanocrossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The proposed memristor-based CNN design operations for implementing several image processing functions are illustrated through simulation and contrasted with conventional CNNs. Monte-Carlo simulation has been used to demonstrate the behavior of the proposed CNN due to the variations in memristor synaptic weights.
Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows
NASA Astrophysics Data System (ADS)
Jittamai, Phongchai
This dissertation defines the operational problems of, and develops solution methodologies for, a distribution of multiple products into oil pipeline subject to delivery time-windows constraints. A multiple-product oil pipeline is a pipeline system composing of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into oil pipeline subject to delivery time-windows is modeled as multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems and also providing solution methodology to compute input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than optimal values and approximately 40% of the tested problems were solved optimally by the algorithms.
Yu, Yang; Rajagopal, Ram
2015-02-17
Two dispatch protocols have been adopted by electricity markets to deal with the uncertainty of wind power but the effects of the selection between the dispatch protocols have not been comprehensively analyzed. We establish a framework to compare the impacts of adopting different dispatch protocols on the efficacy of using wind power and implementing a carbon tax to reduce emissions. We suggest that a market has high potential to achieve greater emission reduction by adopting the stochastic dispatch protocol instead of the static protocol when the wind energy in the market is highly uncertain or the market has enough adjustable generators, such as gas-fired combustion generators. Furthermore, the carbon-tax policy is more cost-efficient for reducing CO2 emission when the market operates according to the stochastic protocol rather than the static protocol. An empirical study, which is calibrated according to the data from the Electric Reliability Council of Texas market, confirms that using wind energy in the Texas market results in a 12% CO2 emission reduction when the market uses the stochastic dispatch protocol instead of the 8% emission reduction associated with the static protocol. In addition, if a 6$/ton carbon tax is implemented in the Texas market operated according to the stochastic protocol, the CO2 emission is similar to the emission level from the same market with a 16$/ton carbon tax operated according to the static protocol. Correspondingly, the 16$/ton carbon tax associated with the static protocol costs 42.6% more than the 6$/ton carbon tax associated with the stochastic protocol.
van Miltenburg, Nynke; Buskens, Vincent
2017-01-01
We study the effects of different punishment institutions on cooperation in a six-person prisoner’s dilemma game in which actors observe others’ cooperation with some noise (i.e. imperfect public monitoring). Previous research has shown that peer punishment can sustain cooperation, if a certain proportion of group members punish defectors at a cost to themselves. However, in the presence of noise, co-operators will sometimes be mistaken for defectors and punished, and defectors will sometimes be mistaken for co-operators and escape punishment. Both types of mistakes are detrimental for cooperation because cooperation is discouraged and defection is encouraged. By means of a laboratory experiment, we study whether this adverse effect of noise can be mitigated by consensual punishment. The more other group members have to agree on punishing a defector, the less likely will a co-operator be punished by mistake. We compare a punishment institution in which each subject decides individually whether to punish another, with institutions in which punishments are only implemented if subjects reach sufficient consensus that a particular group member should be punished. In conditions without noise, we find that cooperation and subjects’ payoffs are higher if more consensus is required before a punishment is implemented. In conditions with noise, cooperation is lower if more consensus is required. Moreover, with noise, subjects’ payoffs are lower under all punishment institutions than in the control condition without punishment opportunities. Our results narrow down the conditions under which punishment institutions can promote cooperation if such cooperation is noisy. PMID:29176900
van Miltenburg, Nynke; Przepiorka, Wojtek; Buskens, Vincent
2017-01-01
We study the effects of different punishment institutions on cooperation in a six-person prisoner's dilemma game in which actors observe others' cooperation with some noise (i.e. imperfect public monitoring). Previous research has shown that peer punishment can sustain cooperation, if a certain proportion of group members punish defectors at a cost to themselves. However, in the presence of noise, co-operators will sometimes be mistaken for defectors and punished, and defectors will sometimes be mistaken for co-operators and escape punishment. Both types of mistakes are detrimental for cooperation because cooperation is discouraged and defection is encouraged. By means of a laboratory experiment, we study whether this adverse effect of noise can be mitigated by consensual punishment. The more other group members have to agree on punishing a defector, the less likely will a co-operator be punished by mistake. We compare a punishment institution in which each subject decides individually whether to punish another, with institutions in which punishments are only implemented if subjects reach sufficient consensus that a particular group member should be punished. In conditions without noise, we find that cooperation and subjects' payoffs are higher if more consensus is required before a punishment is implemented. In conditions with noise, cooperation is lower if more consensus is required. Moreover, with noise, subjects' payoffs are lower under all punishment institutions than in the control condition without punishment opportunities. Our results narrow down the conditions under which punishment institutions can promote cooperation if such cooperation is noisy.
A physical layer perspective on access network sharing
NASA Astrophysics Data System (ADS)
Pfeiffer, Thomas
2015-12-01
Unlike in copper or wireless networks, there is no sharing of resources in fiber access networks yet, other than bit stream access or cable sharing, in which the fibers of a cable are let to one or multiple operators. Sharing optical resources on a single fiber among multiple operators or different services has not yet been applied. While this would allow for a better exploitation of installed infrastructures, there are operational issues which still need to be resolved, before this sharing model can be implemented in networks. Operating multiple optical systems and services over a common fiber plant, autonomously and independently from each other, can result in mutual distortions on the physical layer. These distortions will degrade the performance of the involved systems, unless precautions are taken in the infrastructure hardware to eliminate or to reduce them to an acceptable level. Moreover, the infrastructure needs to be designed such as to support different system technologies and to ensure a guaranteed quality of the end-to-end connections. In this paper, suitable means are proposed to be introduced in fiber access infrastructures that will allow for shared utilization of the fibers while safeguarding the operational needs and business interests of the involved parties.
Real-time validation of receiver state information in optical space-time block code systems.
Alamia, John; Kurzweg, Timothy
2014-06-15
Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
Participant verification: prevention of co-enrolment in clinical trials in South Africa.
Harichund, C; Haripersad, K; Ramjee, R
2013-05-15
As KwaZulu-Natal Province is the epicentre of the HIV epidemic in both South Africa (SA) and globally, it is an ideal location to conduct HIV prevention and therapeutic trials. Numerous prevention trials are currently being conducted here; the potential for participant co-enrolment may compromise the validity of these studies and is therefore of great concern. To report the development and feasibility of a digital, fingerprint-based participant identification method to prevent co-enrolment at multiple clinical trial sites. The Medical Research Council (MRC) HIV Prevention Research Unit (HPRU) developed the Biometric Co-enrolment Prevention System (BCEPS), which uses fingerprint-based biometric technology to identify participants. A trial website was used to determine the robustness and usability of the system. After successful testing, the BCEPS was piloted in July 2010 across 7 HPRU clinical research sites. The BCEPS was pre-loaded with study names and clinical trial sites, with new participant information loaded at first visit to a trial site. We successfully implemented the BCEPS at the 7 HPRU sites. Using the BCEPS, we performed real-time 'flagging' of women who were already enrolled in another study as they entered a trial at an HPRU site and, where necessary, excluded them from participation on site. This system has promise in reducing co-enrolment in clinical trials and represents a valuable tool for future implementation by all groups conducting trials. The MRC is currently co-ordinating this effort with clinical trial sites nationally.
Karim, Quarraisha Abdool; Kharsany, Ayesha B M; Naidoo, Kasavan; Yende, Nonhlanhla; Gengiah, Tanuja; Omar, Zaheen; Arulappan, Natasha; Mlisana, Koleka P; Luthuli, Londiwe R; Karim, Salim S Abdool
2011-05-01
In settings where multiple HIV prevention trials are conducted in close proximity, trial participants may attempt to enroll in more than one trial simultaneously. Co-enrollment impacts on participant's safety and validity of trial results. We describe our experience, remedial action taken, inter-organizational collaboration and lessons learnt following the identification of co-enrolled participants. Between February and April 2008, we identified 185 of the 398 enrolled participants as ineligible. In violation of the study protocol exclusion criteria, there was simultaneous enrollment in another HIV prevention trial (ineligible co-enrolled, n=135), and enrollment of women who had participated in a microbicide trial within the past 12 months (ineligible not co-enrolled, n=50). Following a complete audit of all enrolled participants, ineligible participants were discontinued via study exit visits from trial follow-up. Custom-designed education program on co-enrollment impacting on participants' safety and validity of the trial results was implemented. Shared electronic database between research units was established to enable verification of each volunteer's trial participation and to prevent future co-enrollments. Interviews with ineligible enrolled women revealed that high-quality care, financial incentives, altruistic motives, preference for sex with gel, wanting to increase their likelihood of receiving active gel, perceived low risk of discovery and peer pressure are the reasons for their enrollment in the CAPRISA 004 trial. Instituting education programs based on the reasons reported by women for seeking enrollment in more than one trial and using a shared central database system to identify co-enrollments have effectively prevented further co-enrollments. Copyright © 2011 Elsevier Inc. All rights reserved.
Planning Requirements for Small School Facilities.
ERIC Educational Resources Information Center
Davis, J. Clark; McQueen, Robert
The unique requirements of small school facilities, designed to handle multiple curricular functions within the same operational space, necessitate the creation of educational specifications tying the curriculum to that portion of the facility in which each curriculum component will be implemented. Thus, in planning the facility the major concern…
NASA Technical Reports Server (NTRS)
Elshorbany, Yasin F.; Duncan, Bryan N.; Strode, Sarah A.; Wang, James S.; Kouatchou, Jules
2015-01-01
We present the Efficient CH4-CO-OH Module (ECCOH) that allows for the simulation of the methane, carbon monoxide and hydroxyl radical (CH4-CO-OH cycle, within a chemistry climate model, carbon cycle model, or earth system model. The computational efficiency of the module allows many multi-decadal, sensitivity simulations of the CH4-CO-OH cycle, which primarily determines the global tropospheric oxidizing capacity. This capability is important for capturing the nonlinear feedbacks of the CH4-CO-OH system and understanding the perturbations to relatively long-lived methane and the concomitant impacts on climate. We implemented the ECCOH module into the NASA GEOS-5 Atmospheric Global Circulation Model (AGCM), performed multiple sensitivity simulations of the CH4-CO-OH system over two decades, and evaluated the model output with surface and satellite datasets of methane and CO. The favorable comparison of output from the ECCOH module (as configured in the GEOS-5 AGCM) with observations demonstrates the fidelity of the module for use in scientific research.
The quarantine bay 4RC CO/sub 2/ WAG pilot project: A postflood evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsie, J.C.; Moore, J.S.
1988-08-01
This paper reviews the design, implementation, and performance of a miscible CO/sub 2/ water-alternating-gas (WAG) project in a U.S. gulf coast reservoir. The field-test obtained since the inception of the project in Oct. 1981 are presented, and solutions to such operational problems as downhole corrosion are discussed. Remarkable project response and recovery demonstrated that the CO/sub 2/ WAG process is technically viable for mobilizing considerable amounts of residual oil from watered-out Miocene reservoirs. A 16.9% recovery or original oil in place (OOIP) was obtained with a CO/sub 2/ slug size of 18.9% original HCPV. The CO/sub 2/ requirement was 2.57more » Mcf/bbl (458 m/sup 3//m/sup 3/) of oil recovered.« less
The Quarantine Bay 4RC CO/sub 2/-WAG pilot project: A post-flood evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsie, J.C.; Moore, J.S.
1986-01-01
This paper reviews the design, implementation, and performance of a miscible CO/sub 2/ - WAG (water-alternating-gas) project in a Gulf Coast reservoir. The field test data obtained since the inception of the project in October 1981 are presented, and solutions to operational problems such as downhole corrosion are discussed. Remarkable project response and recovery demonstrated that the CO/sub 2/-WAG process is technically viable for mobilizing considerable amounts of residual oil from watered-out Miocene reservoirs. A 14.7% recovery of original-oil-in-place (OOIP) was obtained using a CO/sub 2/ slug size of 18.9% original hydrocarbon pore volume (HCPV). The CO/sub 2/ requirement wasmore » 2.95 Mcf/bbl (531 m/sup 3//m/sup 3/) of oil recovered.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klarer, P.
1994-03-01
The design of a multitasking behavioral control system for the Robotic All Terrain Lunar Exploration Rover (RATLER) is described. The control system design attempts to ameliorate some of the problems noted by some researchers when implementing subsumption or behavioral control systems, particularly with regard to multiple processor systems and real-time operations. The architecture is designed to allow both synchronous and asynchronous operations between various behavior modules by taking advantage of intertask communications channels, and by implementing each behavior module and each interconnection node as a stand-alone task. The potential advantages of this approach over those previously described in the fieldmore » are discussed. An implementation of the architecture is planned for a prototype Robotic All Terrain Lunar Exploration Rover (RATLER) currently under development, and is briefly described.« less
A Electro-Optical Image Algebra Processing System for Automatic Target Recognition
NASA Astrophysics Data System (ADS)
Coffield, Patrick Cyrus
The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively paralleled, single instruction multiple data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point use operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications. No other known device or design has made this claim of processing speed and general implementation of a heterogeneous image algebra.
Coupled thermal–hydrological–mechanical modeling of CO 2 -enhanced coalbed methane recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Tianran; Rutqvist, Jonny; Oldenburg, Curtis M.
CO 2 -enhanced coalbed methane recovery, also known as CO 2 -ECBM, is a potential win-win approach for enhanced methane production while simultaneously sequestering injected anthropogenic CO 2 to decrease CO 2 emissions into the atmosphere. Here, CO 2 -ECBM is simulated using a coupled thermal–hydrological–mechanical (THM) numerical model that considers multiphase (gas and water) flow and solubility, multicomponent (CO 2 and CH 4 ) diffusion and adsorption, heat transfer and coal deformation. The coupled model is based on the TOUGH-FLAC simulator, which is applied here for the first time to model CO 2 -ECBM. The capacity of the simulatormore » for modeling methane production is verified by a code-to-code comparison with the general-purpose finite-element solver COMSOL. Then, the TOUGH-FLAC simulator is applied in an isothermal simulation to study the variations in permeability evolution during a CO 2 -ECBM operation while considering four different stress-dependent permeability models that have been implemented into the simulator. Finally, the TOUGH-FLAC simulator is applied in non-isothermal simulations to model THM responses during a CO 2 -ECBM operation.Our simulations show that the permeability evolution, mechanical stress, and deformation are all affected by changes in pressure, temperature and adsorption swelling, with adsorption swelling having the largest effect. The calculated stress changes do not induce any mechanical failure in the coal seam, except near the injection well in one case of a very unfavorable stress field.« less
Coupled thermal–hydrological–mechanical modeling of CO 2 -enhanced coalbed methane recovery
Ma, Tianran; Rutqvist, Jonny; Oldenburg, Curtis M.; ...
2017-05-22
CO 2 -enhanced coalbed methane recovery, also known as CO 2 -ECBM, is a potential win-win approach for enhanced methane production while simultaneously sequestering injected anthropogenic CO 2 to decrease CO 2 emissions into the atmosphere. Here, CO 2 -ECBM is simulated using a coupled thermal–hydrological–mechanical (THM) numerical model that considers multiphase (gas and water) flow and solubility, multicomponent (CO 2 and CH 4 ) diffusion and adsorption, heat transfer and coal deformation. The coupled model is based on the TOUGH-FLAC simulator, which is applied here for the first time to model CO 2 -ECBM. The capacity of the simulatormore » for modeling methane production is verified by a code-to-code comparison with the general-purpose finite-element solver COMSOL. Then, the TOUGH-FLAC simulator is applied in an isothermal simulation to study the variations in permeability evolution during a CO 2 -ECBM operation while considering four different stress-dependent permeability models that have been implemented into the simulator. Finally, the TOUGH-FLAC simulator is applied in non-isothermal simulations to model THM responses during a CO 2 -ECBM operation.Our simulations show that the permeability evolution, mechanical stress, and deformation are all affected by changes in pressure, temperature and adsorption swelling, with adsorption swelling having the largest effect. The calculated stress changes do not induce any mechanical failure in the coal seam, except near the injection well in one case of a very unfavorable stress field.« less
Development and Measurements of a Mid-Infrared Multi-Gas Sensor System for CO, CO2 and CH4 Detection
Dong, Ming; Zheng, Chuantao; Miao, Shuzhuo; Zhang, Yu; Du, Qiaoling; Wang, Yiding
2017-01-01
A multi-gas sensor system was developed that uses a single broadband light source and multiple carbon monoxide (CO), carbon dioxide (CO2) and methane (CH4) pyroelectric detectors by use of the time division multiplexing (TDM) technique. A stepper motor-based rotating system and a single-reflection spherical optical mirror were designed and adopted to realize and enhance multi-gas detection. Detailed measurements under static detection mode (without rotation) and dynamic mode (with rotation) were performed to study the performance of the sensor system for the three gas species. Effects of the motor rotating period on sensor performances were also investigated and a rotation speed of 0.4π rad/s was required to obtain a stable sensing performance, corresponding to a detection period of ~10 s to realize one round of detection. Based on an Allan deviation analysis, the 1σ detection limits under static operation are 2.96, 4.54 and 2.84 parts per million in volume (ppmv) for CO, CO2 and CH4, respectively and the 1σ detection limits under dynamic operations are 8.83, 8.69 and 10.29 ppmv for the three gas species, respectively. The reported sensor has potential applications in various fields requiring CO, CO2 and CH4 detection such as in coal mines. PMID:28953260
NASA Astrophysics Data System (ADS)
Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo
This paper deals with a digital control scheme of multiple paralleled high frequency switching current amplifier with four-quadrant chopper for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track high precise current pattern in Gradient Coils (GC), the proposal current amplifier cancels the switching current ripples in GC with each other and designed optimum switching gate pulse patterns without influences of the large filter current ripple amplitude. The optimal control implementation and the linear control theory in GC current amplifiers have affinity to each other with excellent characteristics. The digital control system can be realized easily through the digital control implementation, DSPs or microprocessors. Multiple-parallel operational microprocessors realize two or higher paralleled GC current pattern tracking amplifier with optimal control design and excellent results are given for improving the image quality of MRI systems.
Economical Implementation of a Filter Engine in an FPGA
NASA Technical Reports Server (NTRS)
Kowalski, James E.
2009-01-01
A logic design has been conceived for a field-programmable gate array (FPGA) that would implement a complex system of multiple digital state-space filters. The main innovative aspect of this design lies in providing for reuse of parts of the FPGA hardware to perform different parts of the filter computations at different times, in such a manner as to enable the timely performance of all required computations in the face of limitations on available FPGA hardware resources. The implementation of the digital state-space filter involves matrix vector multiplications, which, in the absence of the present innovation, would ordinarily necessitate some multiplexing of vector elements and/or routing of data flows along multiple paths. The design concept calls for implementing vector registers as shift registers to simplify operand access to multipliers and accumulators, obviating both multiplexing and routing of data along multiple paths. Each vector register would be reused for different parts of a calculation. Outputs would always be drawn from the same register, and inputs would always be loaded into the same register. A simple state machine would control each filter. The output of a given filter would be passed to the next filter, accompanied by a "valid" signal, which would start the state machine of the next filter. Multiple filter modules would share a multiplication/accumulation arithmetic unit. The filter computations would be timed by use of a clock having a frequency high enough, relative to the input and output data rate, to provide enough cycles for matrix and vector arithmetic operations. This design concept could prove beneficial in numerous applications in which digital filters are used and/or vectors are multiplied by coefficient matrices. Examples of such applications include general signal processing, filtering of signals in control systems, processing of geophysical measurements, and medical imaging. For these and other applications, it could be advantageous to combine compact FPGA digital filter implementations with other application-specific logic implementations on single integrated-circuit chips. An FPGA could readily be tailored to implement a variety of filters because the filter coefficients would be loaded into memory at startup.
Efficient electrochemical CO 2 conversion powered by renewable energy
Kauffman, Douglas R.; Thakkar, Jay; Siva, Rajan; ...
2015-06-29
Here, the catalytic conversion of CO 2 into industrially relevant chemicals is one strategy for mitigating greenhouse gas emissions. Along these lines, electrochemical CO 2 conversion technologies are attractive because they can operate with high reaction rates at ambient conditions. However, electrochemical systems require electricity, and CO 2 conversion processes must integrate with carbon-free, renewable-energy sources to be viable on larger scales. We utilize Au 25 nanoclusters as renewably powered CO 2 conversion electrocatalysts with CO 2 → CO reaction rates between 400 and 800 L of CO 2 per gram of catalytic metal per hour and product selectivities betweenmore » 80 and 95%. These performance metrics correspond to conversion rates approaching 0.8–1.6 kg of CO 2 per gram of catalytic metal per hour. We also present data showing CO 2 conversion rates and product selectivity strongly depend on catalyst loading. Optimized systems demonstrate stable operation and reaction turnover numbers (TONs) approaching 6 × 10 6 mol CO 2 molcatalyst–1 during a multiday (36 hours total hours) CO 2electrolysis experiment containing multiple start/stop cycles. TONs between 1 × 10 6 and 4 × 10 6 molCO 2 molcatalyst–1 were obtained when our system was powered by consumer-grade renewable-energy sources. Daytime photovoltaic-powered CO 2 conversion was demonstrated for 12 h and we mimicked low-light or nighttime operation for 24 h with a solar-rechargeable battery. This proof-of-principle study provides some of the initial performance data necessary for assessing the scalability and technical viability of electrochemical CO 2 conversion technologies. Specifically, we show the following: (1) all electrochemical CO 2 conversion systems will produce a net increase in CO 2 emissions if they do not integrate with renewable-energy sources, (2) catalyst loading vs activity trends can be used to tune process rates and product distributions, and (3) state-of-the-art renewable-energy technologies are sufficient to power larger-scale, tonne per day CO 2 conversion systems.« less
Shek, Daniel T L; Ma, Cecilia M S
2012-01-17
The present study was conducted to explore the implementation quality of the Secondary 3 Program of the Tier 1 Program of Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in the third year of the Full Implementation Phase. Classroom observations of 182 units in 129 schools were conducted. Results showed that the overall level of program adherence was 73.9%. Thirteen aspects concerning program delivery were significantly correlated. Multiple regression analyses revealed that overall implementation quality was significantly predicted by student participation and involvement, strategies to enhance student motivation, use of positive and supportive feedback, degree of achievement of the objectives, and lesson preparation. Success of implementation was significantly predicted by student participation and involvement, classroom control, use of positive and supportive feedback, opportunity for reflection, degree of achievement of the objectives and time management. The present findings generally suggest that the implementation quality of Project P.A.T.H.S. was high.
Panni, M K; Shah, S J; Chavarro, C; Rawl, M; Wojnarwsky, P K; Panni, J K
2013-10-01
There are multiple components leading to improved operating room efficiency. We undertook a project focusing on first case starts; accounting for each delay component on a global basis. Our hypothesis was there would be a reduction in first start delays after we implemented strategies to address the issues identified through this accounting process. An orange sheet checklist was implemented, with specific items that needed to be clear prior to roll back to the operating room (OR), and an OR facilitator was employed to intervene whenever there were any missing items needed for a specific patient. We present the data from this quality improvement project over an 18-month period. Initially, 10.07 (± 0.73) delayed first starts occurred per day but declined steadily over time to a low of 4.95 (± 0.38) per day after 6 months (-49.2 %, P < 0.001). By the end of the project, the most common reasons for delay still included late surgical attending (19%), schedule changes (14%) as well as 'other reasons' (13%), but with an overall reduction per day of each. Total anaesthesia delay initially totalled 11% of the first start delays, but was negligible (< 1%) at the project's completion. While we have a challenging operating room environment based on our patient population, multiple trainees in both the surgery and anaesthesiology teams: an orange sheet - pre-operative checklist in addition to a dedicated pre-operative facilitator; allowed us to make a substantial improvement in our first start on time starts. © 2013 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Segmentation And Quantification Of Black Holes In Multiple Sclerosis
Datta, Sushmita; Sajja, Balasrinivasa Rao; He, Renjie; Wolinsky, Jerry S.; Gupta, Rakesh K.; Narayana, Ponnada A.
2006-01-01
A technique that involves minimal operator intervention was developed and implemented for identification and quantification of black holes on T1- weighted magnetic resonance images (T1 images) in multiple sclerosis (MS). Black holes were segmented on T1 images based on grayscale morphological operations. False classification of black holes was minimized by masking the segmented images with images obtained from the orthogonalization of T2-weighted and T1 images. Enhancing lesion voxels on postcontrast images were automatically identified and eliminated from being included in the black hole volume. Fuzzy connectivity was used for the delineation of black holes. The performance of this algorithm was quantitatively evaluated on 14 MS patients. PMID:16126416
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Low-Power Architecture for an Optical Life Gas Analyzer
NASA Technical Reports Server (NTRS)
Pilgrim, Jeffrey; Vakhtin, Andrei
2012-01-01
Analog and digital electronic control architecture has been combined with an operating methodology for an optical trace gas sensor platform that allows very low power consumption while providing four independent gas measurements in essentially real time, as well as a user interface and digital data storage and output. The implemented design eliminates the cross-talk between the measurement channels while maximizing the sensitivity, selectivity, and dynamic range for each measured gas. The combination provides for battery operation on a simple camcorder battery for as long as eight hours. The custom, compact, rugged, self-contained design specifically targets applications of optical major constituent and trace gas detection for multiple gases using multiple lasers and photodetectors in an integrated package.
Simultaneous high efficiency capture of CO.sub.2 and H.sub.2S from pressurized gas
Gal, Eli; Krishnan, Gopala N.; Jayaweera, Indira S.
2016-10-11
Low-cost and energy-efficient CO.sub.2 and H.sub.2S capture is provided obtaining greater than 99.9% capture efficiency from pressurized gas. The acid species are captured in an ammonia solution, which is then regenerated by stripping the absorbed species. The solution can capture as much as 330 grams of CO.sub.2 and H.sub.2S per 1000 gram of water and when regenerated it produces pure pressurized acid gas containing more than 99.7% CO.sub.2 and H2S. The absorption of the acid species is accomplished in two absorbers in-series, each having multiple stages. More than 95% of the acid species are captured in the first absorber and the balance is captured in the second absorber to below 10 ppm concentration in the outlet gas. The two absorbers operate at temperatures ranging from 20-70 degrees Celsius. The two absorbers and the main stripper of the alkaline solution operate at similar pressures ranging from 5-200 bara.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-02
... Ada County Air Quality Maintenance Area will maintain air quality standards for carbon monoxide (CO... Avenue, Suite 900, Seattle, WA 98101. Hand Delivery/Courier: U.S. EPA Region 10, 1200 Sixth Avenue, Suite... deliveries are only accepted during normal hours of operation, and special arrangements should be made for...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-25
... financial institution operating outside of the United States that is of primary money laundering concern... international money laundering and the financing of terrorism. Regulations implementing the BSA appear at 31 CFR..., class of transaction, or type of account is of ``primary money laundering concern,'' to require domestic...
Regeneration of pilot-scale ion exchange columns for hexavalent chromium removal.
Korak, Julie A; Huggins, Richard; Arias-Paic, Miguel
2017-07-01
Due to stricter regulations, some drinking water utilities must implement additional treatment processes to meet potable water standards for hexavalent chromium (Cr(VI)), such as the California limit of 10 μg/L. Strong base anion exchange is effective for Cr(VI) removal, but efficient resin regeneration and waste minimization are important for operational, economic and environmental considerations. This study compared multiple regeneration methods on pilot-scale columns on the basis of regeneration efficiency, waste production and salt usage. A conventional 1-Stage regeneration using 2 N sodium chloride (NaCl) was compared to 1) a 2-Stage process with 0.2 N NaCl followed by 2 N NaCl and 2) a mixed regenerant solution with 2 N NaCl and 0.2 N sodium bicarbonate. All methods eluted similar cumulative amounts of chromium with 2 N NaCl. The 2-Stage process eluted an additional 20-30% of chromium in the 0.2 N fraction, but total resin capacity is unaffected if this fraction is recycled to the ion exchange headworks. The 2-Stage approach selectively eluted bicarbonate and sulfate with 0.2 N NaCl before regeneration using 2 N NaCl. Regeneration approach impacted the elution efficiency of both uranium and vanadium. Regeneration without co-eluting sulfate and bicarbonate led to incomplete uranium elution and potential formation of insoluble uranium hydroxides that could lead to long-term resin fouling, decreased capacity and render the resin a low-level radioactive solid waste. Partial vanadium elution occurred during regeneration due to co-eluting sulfate suppressing vanadium release. Waste production and salt usage were comparable for the 1- and 2-Stage regeneration processes with similar operational setpoints with respect to chromium or nitrate elution. Published by Elsevier Ltd.
Algorithm implementation on the Navier-Stokes computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krist, S.E.; Zang, T.A.
1987-03-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
Algorithm implementation on the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Zang, Thomas A.
1987-01-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
Potential for cobalt recovery from lateritic ores in Europe
NASA Astrophysics Data System (ADS)
Herrington, R.
2012-04-01
Cobalt is one of the 'critical metals' identified under the EU Raw Materials Initiative. Annually the global mine production of cobalt is around 55,000 tonnes,with Europe's industries consuming around 30% of that figure. Currently Europe produces around 27 tonnes of cobalt from mines in Finland although new capacity is planned. Co-bearing nickel laterite ores being mined in Greece, Macedonia and Kosovo where the cobalt is currently not being recovered (ores have typical analyses of 0.055% Co and >1% Ni,). These ores are currently treated directly in pyrometallurgical plants to recover the contained nickel and this process means there is no separate cobalt product produced. Hydrometallurgical treatment of mineralogically suitable laterite ores can recover the cobalt; for example Cuba recovers 3,500 tonnes of cobalt from its laterite mining operations, which are of a similar scale to the current European operations. Implementation of hydrometallurgical techniques is in its infancy in Europe with one deposit in Turkey planning to use atmospheric heap leaching to recover nickel and copper from oxide-dominated ores. More widespread implementation of these methods to mineralogically suitable ore types could unlock the highly significant undeveloped resources (with metal contents >0.04% Co and >1% Ni), which have been defined throughout the Balkans eastwards into Turkey. At a conservative estimate, this region has the potential to supply up to 30% of the EU cobalt requirements.
Optimal PGU operation strategy in CHP systems
NASA Astrophysics Data System (ADS)
Yun, Kyungtae
Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emission achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice such as to minimize the computational requirements from the hardware to be installed. This dissertation focuses on the following aspects pertaining the design of a practical CHP operational algorithm designed to minimize the operational costs: (a) real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP Systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy to implement, effective, and reliable hourly building load prediction algorithm.
Design of 90×8 ROIC with pixel level digital TDI implementation for scanning type LWIR FPAs
NASA Astrophysics Data System (ADS)
Ceylan, Omer; Kayahan, Huseyin; Yazici, Melik; Gurbuz, Yasar
2013-06-01
Design of a 90×8 CMOS readout integrated circuit (ROIC) based on pixel level digital time delay integration (TDI) for scanning type LWIR focal plane arrays (FPAs) is presented. TDI is implemented on 8 pixels which improves the SNR of the system with a factor of √8. Oversampling rate of 3 improves the spatial resolution of the system. TDI operation is realized with a novel under-pixel analog-to-digital converter, which improves the noise performance of ROIC with a lower quantization noise. Since analog signal is converted to digital domain in-pixel, non-uniformities and inaccuracies due to analog signal routing over large chip area is eliminated. Contributions of each pixel for proper TDI operation are added in summation counters, no op-amps are used for summation, hence power consumption of ROIC is lower than its analog counterparts. Due to lack of multiple capacitors or summation amplifiers, ROIC occupies smaller chip area compared to its analog counterparts. ROIC is also superior to its digital counterparts due to novel digital TDI implementation in terms of power consumption, noise and chip area. ROIC supports bi-directional scan, multiple gain settings, bypass operation, automatic gain adjustment, pixel select/deselect, and is programmable through serial or parallel interface. Input referred noise of ROIC is less than 750 rms electrons, while power consumption is less than 20mW. ROIC is designed to perform both in room and cryogenic temperatures.
Graph State-Based Quantum Group Authentication Scheme
NASA Astrophysics Data System (ADS)
Liao, Longxia; Peng, Xiaoqi; Shi, Jinjing; Guo, Ying
2017-02-01
Motivated by the elegant structure of the graph state, we design an ingenious quantum group authentication scheme, which is implemented by operating appropriate operations on the graph state and can solve the problem of multi-user authentication. Three entities, the group authentication server (GAS) as a verifier, multiple users as provers and the trusted third party Trent are included. GAS and Trent assist the multiple users in completing the authentication process, i.e., GAS is responsible for registering all the users while Trent prepares graph states. All the users, who request for authentication, encode their authentication keys on to the graph state by performing Pauli operators. It demonstrates that a novel authentication scheme can be achieved with the flexible use of graph state, which can synchronously authenticate a large number of users, meanwhile the provable security can be guaranteed definitely.
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with analysis of Spectral Amplitude Coding Optical Code Division Multiple Access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. It is perceived that number of users and type of codes used for optical system directly decide the performance of system. MAI can be restricted by efficient designing of optical codes and implementing them with unique architecture to accommodate more number of users. Hence, it is a necessity to design a technique like spectral direct detection (SDD) technique with modified double weight code, which can provide better cardinality and good correlation property.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rugh, John P; Kekelia, Bidzina; Kreutzer, Cory J
The U.S. uses 7.6 billion gallons of fuel per year for vehicle air conditioning (A/C), equivalent to 5.7 percent of the total national light-duty vehicle (LDV) fuel use. This equates to 30 gallons/year per vehicle, or 23.5 grams (g) of carbon dioxide (CO2) per mile, for an average U.S. vehicle. A/C is a significant contribution to national fuel use; therefore, technologies that reduce A/C loads may reduce operational costs, A/C fuel use, and CO2 emissions. Since A/C is not operated during standard EPA fuel economy testing protocols, EPA provides off-cycle credits to encourage OEMs to implement advanced A/C technologies thatmore » reduce fuel use in the real world. NREL researchers assessed thermal/solar off-cycle credits available in the U.S. Environmental Protection Agency's (EPA's) Final Rule for Model Year 2017 and Later Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel Economy. Credits include glazings, solar reflective paint, and passive and active cabin ventilation. Implementing solar control glass reduced CO2 emissions by 2.0 g/mi, and solar reflective paint resulted in a reduction of 0.8 g/mi. Active and passive ventilation strategies only reduced emissions by 0.1 and 0.2 g/mi, respectively. The national-level analysis process is powerful and general; it can be used to determine the impact of a wide range of new vehicle thermal technologies on fuel use, EV range, and CO2 emissions.« less
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Alkalai, Leon
1996-01-01
Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.
NASA Astrophysics Data System (ADS)
Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter
2014-06-01
In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.
Decision Network for Blue Green Solutions to Influence Policy Impact Assessments
NASA Astrophysics Data System (ADS)
Mijic, A.; Theodoropoulos, G.; El Hattab, M. H.; Brown, K.
2017-12-01
Sustainable Urban Drainage Systems (SuDS) deliver ecosystems services that can potentially yield multiple benefits to the urban environment. These benefits can be achieved through optimising SUDS' integration with the local environment and water resources, creating so-called Blue Green Solutions (BGS). The BGS paradigm, however, presents several challenges, in particular quantifying the benefits and creating the scientific evidence-base that can persuade high-level decision-makers and stakeholders to implement BGS at large scale. This work presents the development of the easily implemented and tailored-made approach that allows a robust assessment of the BGS co-benefits, and can influence the types of information that are included in policy impact assessments. The Analytic Network Process approach is used to synthesise the available evidence on the co-benefits of the BGS. The approach enables mapping the interactions between individual BGS selection criteria, and creates a platform to assess the synergetic benefits that arise from components interactions. By working with Government departments and other public and private sector stakeholders, this work has produced a simple decision criteria-based network that will enable the co-benefits and trade-offs of BGS to be quantified and integrated into UK policy appraisals.
Mortsiefer, Achim; Rotthoff, Thomas; Schmelzer, Regine; Immecke, J; Ortmanns, B; in der Schmitten, J; Altiner, A; Karger, André
2012-01-01
Implementation of a longitudinal curriculum for training in advanced communications skills represents an unmet need in most German medical faculties, especially in the 4rth and 5th years of medical studies. The CoMeD project (communication in medical education Düsseldorf) attempted to establish an interdisciplinary program to teach and to assess communicative competence in the 4th academic year. In this paper, we describe the development of the project and report results of its evaluation by medical students. Teaching objectives and lesson formats were developed in a multistage process. A teaching program for simulated patients (SP) was built up and continuous lecturer trainings were estabilshed. Several clinical disciplines co-operated for the purpose of integrating the communication training into the pre-existing clinical teaching curriculum. The CoMeD project was evaluated using feedback-forms after each course. Until now, six training units for especially challenging communication tasks like "dealing with aggression" or "breaking bad news" were implemented, each unit connected with a preliminary tutorial or e-learning course. An OSCE (objective structured clinical examination) with 4 stations was introduced. The students' evaluation of the six CoMeD training units showed the top or second-best rating in more than 80% of the answers. Introducing an interdisciplinary communication training and a corresponding OSCE into the 4th year medical curriculum is feasible. Embedding communication teaching in a clinical context and involvement of clinicians as lecturers seem to be important factors for ensuring practical relevance and achieving high acceptance by medical students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFreniere, L. M.; Environmental Science Division
The results of the 2006 investigation of contaminant sources at Navarre, Kansas, clearly demonstrate the following: {sm_bullet} Sources of carbon tetrachloride contamination were found on the Navarre Co-op property. These sources are the locations of the highest concentrations of carbon tetrachloride found in soil and groundwater at Navarre. The ongoing groundwater contamination at Navarre originates from these sources. {sm_bullet} The sources on the Co-op property are in locations where the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) never conducted grain storage operations. {sm_bullet} No definitive sources of carbon tetrachloride were identified on the portion of the currentmore » Co-op property formerly used by the CCC/USDA. {sm_bullet} The source areas on the Co-op property are consistent with the locations of the most intense Co-op operations, both historically and at present. The Co-op historically stored carbon tetrachloride for retail sale and used it as a grain fumigant in these locations. {sm_bullet} The distribution patterns of other contaminants (tetrachloroethene and nitrate) originating from sources on the Co-op property mimic the carbon tetrachloride plume. These other contaminants are not associated with CCC/USDA operations. {sm_bullet} The distribution of carbon tetrachloride at the Co-op source areas, particularly the absence of contamination in soils at depths less than 20 ft below ground level, is consistent with vertical migration into the subsurface through a conduit (well Co-op 2), with subsequent lateral migration through the subsurface. {sm_bullet} The groundwater flow direction, which is toward the west-northwest, is not consistent with migration of carbon tetrachloride in groundwater from the former CCC/USDA property to the source areas on the Co-op property. {sm_bullet} The absence of soil and groundwater contamination along surface drainage pathways on the former CCC/USDA property is not consistent with migration of carbon tetrachloride in surface water runoff from the former CCC/USDA property to the source areas on the Co-op property. {sm_bullet} The contamination detected in soil and groundwater samples collected along the northern boundary of the former CCC/USDA facility can be attributed to migration from the Co-op sources or to operations of the Co-op on the property after CCC/USDA operations ended. {sm_bullet} The southern boundary of the Co-op property has expanded over time, so that the Co-op has operated for a lengthy period in all areas previously leased by the CCC/USDA (Figure S.1). The Co-op began expanding onto the former CCC/USDA property in 1969 and has operated on that property longer than the CCC/USDA did. The use of carbon tetrachloride as a grain fumigant was standard industry practice until 1985, when the compound was banned by the U.S. Environmental Protection Agency. {sm_bullet} Petroleum-related contamination was detected on the southern part of the former CCC/USDA property. This contamination is associated with aboveground storage tanks that are owned and operated by the Co-op. The major findings of the 2006 investigations are summarized in greater detail below. The 2006 investigation was implemented by the Environmental Science Division of Argonne National Laboratory on behalf of the CCC/USDA.« less
Moving towards a new vision: implementation of a public health policy intervention.
Valaitis, Ruta; MacDonald, Marjorie; Kothari, Anita; O'Mara, Linda; Regan, Sandra; Garcia, John; Murray, Nancy; Manson, Heather; Peroff-Johnston, Nancy; Bursey, Gayle; Boyko, Jennifer
2016-05-17
Public health systems in Canada have undergone significant policy renewal over the last decade in response to threats to the public's health, such as severe acute respiratory syndrome. There is limited research on how public health policies have been implemented or what has influenced their implementation. This paper explores policy implementation in two exemplar public health programs -chronic disease prevention and sexually-transmitted infection prevention - in Ontario, Canada. It examines public health service providers', managers' and senior managements' perspectives on the process of implementation of the Ontario Public Health Standards 2008 and factors influencing implementation. Public health staff from six health units representing rural, remote, large and small urban settings were included. We conducted 21 focus groups and 18 interviews between 2010 (manager and staff focus groups) and 2011 (senior management interviews) involving 133 participants. Research assistants coded transcripts and researchers reviewed these; the research team discussed and resolved discrepancies. To facilitate a breadth of perspectives, several team members helped interpret the findings. An integrated knowledge translation approach was used, reflected by the inclusion of academics as well as decision-makers on the team and as co-authors. Front line service providers often were unaware of the new policies but managers and senior management incorporated them in operational and program planning. Some participants were involved in policy development or provided feedback prior to their launch. Implementation was influenced by many factors that aligned with Greenhalgh and colleagues' empirically-based Diffusion of Innovations in Service Organizations Framework. Factors and related components that were most clearly linked to the OPHS policy implementation were: attributes of the innovation itself; adoption by individuals; diffusion and dissemination; the outer context - interorganizational networks and collaboration; the inner setting - implementation processes and routinization; and, linkage at the design and implementation stage. Multiple factors influenced public health policy implementation. Results provide empirical support for components of Greenhalgh et al's framework and suggest two additional components - the role of external organizational collaborations and partnerships as well as planning processes in influencing implementation. These are important to consider by government and public health organizations when promoting new or revised public health policies as they evolve over time. A successful policy implementation process in Ontario has helped to move public health towards the new vision.
Theory, Design, and Algorithms for Optimal Control of wireless Networks
2010-06-09
The implementation of network-centric warfare technologies is an abiding, critical interest of Air Force Science and Technology efforts for the Warfighter. Wireless communications, strategic signaling are areas of critical Air Force Mission need. Autonomous networks of multiple, heterogeneous Throughput enhancement and robust connectivity in communications and sensor networks are critical factors in net-centric USAF operations. This research directly supports the Air Force vision of information dominance and the development of anywhere, anytime operational readiness.
NASA Astrophysics Data System (ADS)
Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.
2018-05-01
Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi co-processor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7 ×) and Xeon Phi coprocessor (4.7-4.9 ×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step-sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower Sandy Bridge or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into more ODE solver methods that are both SIMD-friendly and computationally efficient.
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
NASA Astrophysics Data System (ADS)
Ge, Sijie; Wang, Sujing; Xu, Qiang; Ho, Thomas
2018-03-01
Turnaround operations (start-up and shutdown) are critical operations in olefin plants, which emit large quantities of VOCs, NOx and CO. The emission has great potentials to impact the ozone level in ozone nonattainment areas. This study demonstrates a novel practice to minimize the ozone impact through coordinated scheduling of turnaround operations from multiple olefin plants located in Houston, Texas, an ozone nonattainment area. The study considered two olefin plants scheduled to conduct turnaround operations: one start-up and one shutdown, simultaneously on the same day within a five-hour window. Through dynamic simulations of the turnaround operations using ASPEN Plus Dynamics and air quality simulations using CAMx, the study predicts the ozone impact from the combined effect of the two turnaround operations under different starting-time scenarios. The simulations predict that the ozone impact from planned turnaround operations ranges from a maximum of 11.4 ppb to a minimum of 1.4 ppb. Hence, a reduction of up to 10.0 ppb can be achieved on a single day based on the selected two simulation days. This study demonstrates a cost-effective and environmentally benign ozone control practice for relevant stakeholders, including environmental agencies, regional plant operators, and local communities.
Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine.
Hu, Miao; Graves, Catherine E; Li, Can; Li, Yunning; Ge, Ning; Montgomery, Eric; Davila, Noraica; Jiang, Hao; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei; Strachan, John Paul
2018-03-01
Using memristor crossbar arrays to accelerate computations is a promising approach to efficiently implement algorithms in deep neural networks. Early demonstrations, however, are limited to simulations or small-scale problems primarily due to materials and device challenges that limit the size of the memristor crossbar arrays that can be reliably programmed to stable and analog values, which is the focus of the current work. High-precision analog tuning and control of memristor cells across a 128 × 64 array is demonstrated, and the resulting vector matrix multiplication (VMM) computing precision is evaluated. Single-layer neural network inference is performed in these arrays, and the performance compared to a digital approach is assessed. Memristor computing system used here reaches a VMM accuracy equivalent of 6 bits, and an 89.9% recognition accuracy is achieved for the 10k MNIST handwritten digit test set. Forecasts show that with integrated (on chip) and scaled memristors, a computational efficiency greater than 100 trillion operations per second per Watt is possible. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.; Frehlich, Rod G.
2007-01-01
The global measurement of vertical profiles of horizontal vector winds has been highly desired for many years by NASA, NOAA and the Integrated Program Office (IPO) implementing the National Polar-orbiting Operational Environmental Satellite Systems (NPOESS). Recently the global wind mission was one of 15 missions recommended to NASA by the first ever NRC Earth Sciences Decadal Survey. Since before 1978, the most promising method to make this space-based measurement has been pulsed Doppler lidar. The favored technology and technique has evolved over the years from obtaining line-of-sight (LOS) wind profiles from a single laser shot using pulsed CO2 gas laser technology to the current plans to use both a coherent-detection and direct-detection pulsed Doppler wind lidar systems with each lidar employing multiple shot accumulation to produce an LOS wind profile. The idea of using two lidars (hybrid concept) entails coherent detection using the NASA LaRC-developed pulsed 2-micron solid state laser technology, and direct detection using pulsed Nd:YAG laser technology tripled in frequency to 355 nm wavelength.
The push for increased coal injection rates -- Blast furnace experience at AK Steel Corporation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dibert, W.A.; Duncan, J.H.; Keaton, D.E.
1994-12-31
An effort has been undertaken to increase the coal injection rate on Amanda blast furnace at AK Steel Corporation`s Ashland Works in Ashland, Kentucky to decrease fuel costs and reduce coke demand. Operating practices have been implemented to achieve a sustained coal injection rate of 140 kg/MT, increased from 100--110 kg/MT. In order to operate successfully at the 140 kg/MT injection rate; changes were implemented to the furnace charging practice, coal rate control methodology, orientation of the injection point, and the manner of distribution of coal to the multiple injection points. Additionally, changes were implemented in the coal processing facilitymore » to accommodate the higher demand of pulverized coal; grinding 29 tonnes per hour, increased from 25 tonnes per hour. Further increases in injection rate will require a supplemental supply of fuel.« less
NASA Astrophysics Data System (ADS)
Blackstock, J. M.; Covington, M. D.; Williams, S. G. W.; Myre, J. M.; Rodriguez, J.
2017-12-01
Variability in CO2 fluxes within Earth's Critical zone occurs over a wide range of timescales. Resolving this and its drivers requires high-temporal resolution monitoring of CO2 both in the soil and aquatic environments. High-cost (> 1,000 USD) gas analyzers and data loggers present cost-barriers for investigations with limited budgets, particularly if high spatial resolution is desired. To overcome high-costs, we developed an Arduino based CO2 measuring platform (i.e. gas analyzer and data logger). The platform was deployed at multiple sites within the Critical Zone overlying the Springfield Plateau aquifer in Northwest Arkansas, USA. The CO2 gas analyzer used in this study was a relatively low-cost SenseAir K30. The analyzer's optical housing was covered by a PTFE semi-permeable membrane allowing for gas exchange between the analyzer and environment. Total approximate cost of the monitoring platform was 200 USD (2% detection limit) to 300 USD (10% detection limit) depending on the K30 model used. For testing purposes, we deployed the Arduino based platform alongside a commercial monitoring platform. CO2 concentration time series were nearly identical. Notably, CO2 cycles at the surface water site, which operated from January to April 2017, displayed a systematic increase in daily CO2 amplitude. Preliminary interpretation suggests key observation of seasonally increasing stream metabolic function. Other interpretations of observed cyclical and event-based behavior are out of the scope of the study; however, the presented method describes an accurate near-hourly characterization of CO2 variability. The new platform has been shown to be operational for several months, and we infer reliable operation for much longer deployments (> 1 year) given adequate environmental protection and power supply. Considering cost-savings, this platform is an attractive option for continuous, accurate, low-power, and low-cost CO2 monitoring for remote locations, globally.
NASA Technical Reports Server (NTRS)
Schaefer, C.; Young, M.; Mason, S.; Coble, C.; Wear, M. L.; Sargsyan, A.; Garcia, K.; Patel, N.; Gibson, C.; Alexander, D.;
2017-01-01
Enhanced screening for the Visual Impairment/Intracranial Pressure (VIIP) syndrome has been implemented to better characterize the ocular and vision changes observed in some long-duration crewmembers. This includes implementation of in-flight ultrasound in 2010 and optical coherence tomography (OCT) in 2013. Potential risk factors for VIIP include cardiovascular health, diet, anatomical and genetic factors, and environmental conditions. Carbon dioxide (CO2), a potent vasodilator, is chronically elevated on the International Space Station (ISS) relative to ambient levels on Earth, and is a plausible risk factor for VIIP. In an effort to understand the possible associations between CO2 and VIIP, this study explores the relationship of ambient CO2 levels on ISS compared to inflight ultrasound and OCT measures of the eye obtained from ISS crewmembers. CO2 measurements were aggregated from Operational Data Reduction Complex and Node 3 major constituent analyzers (MCAs) on ISS or from sensors located in the European Columbus module, as available. CO2 levels in the periods between each ultrasound and OCT session are summarized using timeseries metrics, including time-weighted means and variances. Partial least squares regression analyses are used to quantify the complex relationship between specific ultrasound and OCT measures and the CO2 metrics simulataneously. These analyses will enhance our understanding of the possible associations between CO2 levels and structural changes to the eye which will in turn inform future analysis of inflight VIIP data.
Analysis of Co-Tunneling Current in Fullerene Single-Electron Transistor
NASA Astrophysics Data System (ADS)
KhademHosseini, Vahideh; Dideban, Daryoosh; Ahmadi, MohammadTaghi; Ismail, Razali
2018-05-01
Single-electron transistors (SETs) are nano devices which can be used in low-power electronic systems. They operate based on coulomb blockade effect. This phenomenon controls single-electron tunneling and it switches the current in SET. On the other hand, co-tunneling process increases leakage current, so it reduces main current and reliability of SET. Due to co-tunneling phenomenon, main characteristics of fullerene SET with multiple islands are modelled in this research. Its performance is compared with silicon SET and consequently, research result reports that fullerene SET has lower leakage current and higher reliability than silicon counterpart. Based on the presented model, lower co-tunneling current is achieved by selection of fullerene as SET island material which leads to smaller value of the leakage current. Moreover, island length and the number of islands can affect on co-tunneling and then they tune the current flow in SET.
Vocal cord paralysis post patent ductus arteriosus ligation surgery: risks and co-morbidities.
Rukholm, Gavin; Farrokhyar, Forough; Reid, Diane
2012-11-01
1. To determine the prevalence of left vocal cord paralysis (LVCP) post patent ductus arteriosus (PDA) ligation at a Tertiary Care Centre. 2. To identify risk factors associated with LVCP. 3. To identify co-morbidities associated with LVCP. 4. To determine the frequency of pre- and post-operative nasopharyngolaryngoscopic (NPL) examination in this patient population. Retrospective chart review of all infants who underwent PDA ligation surgery at a tertiary care academic hospital between July 2003 and July 2010. Data on patient age, gender, weight, method of PDA ligation, and results of NPL scoping were collected, as well as patient co-morbidities post PDA ligation. One hundred and fifteen patients underwent PDA ligation surgery. Four patients were excluded due to bilateral vocal cord paralysis. Of the remaining 111 patients, nineteen patients (17.1%) were found to have LVCP. Low birth weight was identified as a significant risk factor for LVCP (p=0.002). Gastroesophageal reflux was identified as a significant co-morbidity associated with LVCP post PDA ligation (p=0.002). Only 0.9% of patients were scoped pre-operatively, and 27.9% were scoped postoperatively. LVCP is associated with multiple morbidities. The authors strongly recommend routine post-operative scoping of all patients post PDA ligation surgery, and preoperative scoping when possible. A prospective study is warranted, in order to confirm the prevalence of LVCP as well as risk factors and associated co-morbidities. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dobler, J. T.; Braun, M.; Zaccheo, T.
2012-12-01
The Laser Atmospheric Transmitter Receiver-Network (LAnTeRN) is a new measurement concept that will enable local, regional and continental determination of key greenhouse gases, with unparalleled accuracy and precision. This new approach will offer the ability to make low bias, high precision, quasi-continuous, measurements to the accuracies required for separating anthropogenic and biogenic sources and sinks. In 2004 ITT Exelis developed an airborne demonstration unit, based on an intensity modulated continuous wave (IM-CW) lidar approach, for actively measuring atmospheric CO2 and O2. The multi-functional fiber laser lidar (MFLL) system relies on low peak power, high reliability, and efficient telecom laser components to implement this unique measurement approach. While evaluating methods for discriminating against thin clouds for the MFLL instrument, a new measurement concept was conceived. LAnTeRN has several fundamental characteristics in common with the MFLL instrument, but is a fundamentally different implementation and capability. The key difference is that LAnTeRN operates in transmission rather than in the traditional backscatter lidar configuration, which has several distinct advantages. Operating as a forward scatter, bistatic lidar system, LAnTeRN enables consideration of continuous monitoring from a geostationary orbit to multiple locations on the ground. Having the receivers on the ground significantly lowers cost and risk compared to an all space based mission, and allows the transmitter subsystem to be implemented, near term, as a hosted payload. Furthermore, the LAnTeRN measurement approach is also applicable for ground to ground measurements where high precision measurements over a long open path is required, such as facilities monitoring, or monitoring of passive volcanoes and fault lines. Using narrow linewidth laser sources allows flexibility to select the position on the absorption feature being probed. This feature allows for weighting the absorption toward lower altitudes for the space implementation or to handle large dynamic range measurements as would be required for volcano monitoring. This presentation will discuss results from a detailed instrument performance analyses, retrieval simulations, and from initial testing of a proof of concept demonstration unit being developed by Exelis. Initial analysis indicate that measurements from a transmitter in geostationary orbit to 25 ground receivers in the eastern U.S. can retrieve column integrated CO2 values to a precision of <0.2 ppm on monthly averages and <0.06 ppm on yearly averages, using conservative estimates of cloud cover and aerosol loading. The capability for continuous monitoring over a fixed geometry makes it possible to independently characterize the atmospheric column, using existing capabilities (e.g. aircore, aircraft and in-situ instrumentation), for quantification of bias. Furthermore, the ability to selectively locate the ground receivers can enable focused studies for specific applications.
NASA Astrophysics Data System (ADS)
Xu, Haiying; Yuan, Yang; Yu, Youlong; Xu, Kebin; Xu, Yuhuan
1990-08-01
This paper presents a real time holographic associative memory implemented with photorefractive KNSBN:Co crystal as the memory element and a liquid crystal electrooptic switch array as the reflective thresholding device. The experiment stores and recalls two images and shows that the system has real-time multiple-image storage and recall functions. An associative memory with a dynamic threshold level to decide the closest match of an incomplete input is proposed.
Pathania, Shivalika; Bagler, Ganesh; Ahuja, Paramvir S.
2016-01-01
Comparative co-expression analysis of multiple species using high-throughput data is an integrative approach to determine the uniformity as well as diversification in biological processes. Rauvolfia serpentina and Catharanthus roseus, both members of Apocyanacae family, are reported to have remedial properties against multiple diseases. Despite of sharing upstream of terpenoid indole alkaloid pathway, there is significant diversity in tissue-specific synthesis and accumulation of specialized metabolites in these plants. This led us to implement comparative co-expression network analysis to investigate the modules and genes responsible for differential tissue-specific expression as well as species-specific synthesis of metabolites. Toward these goals differential network analysis was implemented to identify candidate genes responsible for diversification of metabolites profile. Three genes were identified with significant difference in connectivity leading to differential regulatory behavior between these plants. These genes may be responsible for diversification of secondary metabolism, and thereby for species-specific metabolite synthesis. The network robustness of R. serpentina, determined based on topological properties, was also complemented by comparison of gene-metabolite networks of both plants, and may have evolved to have complex metabolic mechanisms as compared to C. roseus under the influence of various stimuli. This study reveals evolution of complexity in secondary metabolism of R. serpentina, and key genes that contribute toward diversification of specific metabolites. PMID:27588023
Pathania, Shivalika; Bagler, Ganesh; Ahuja, Paramvir S
2016-01-01
Comparative co-expression analysis of multiple species using high-throughput data is an integrative approach to determine the uniformity as well as diversification in biological processes. Rauvolfia serpentina and Catharanthus roseus, both members of Apocyanacae family, are reported to have remedial properties against multiple diseases. Despite of sharing upstream of terpenoid indole alkaloid pathway, there is significant diversity in tissue-specific synthesis and accumulation of specialized metabolites in these plants. This led us to implement comparative co-expression network analysis to investigate the modules and genes responsible for differential tissue-specific expression as well as species-specific synthesis of metabolites. Toward these goals differential network analysis was implemented to identify candidate genes responsible for diversification of metabolites profile. Three genes were identified with significant difference in connectivity leading to differential regulatory behavior between these plants. These genes may be responsible for diversification of secondary metabolism, and thereby for species-specific metabolite synthesis. The network robustness of R. serpentina, determined based on topological properties, was also complemented by comparison of gene-metabolite networks of both plants, and may have evolved to have complex metabolic mechanisms as compared to C. roseus under the influence of various stimuli. This study reveals evolution of complexity in secondary metabolism of R. serpentina, and key genes that contribute toward diversification of specific metabolites.
Climate change mitigation and adaptation in the land use sector: from complementarity to synergy.
Duguma, Lalisa A; Minang, Peter A; van Noordwijk, Meine
2014-09-01
Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania that serves both adaptation and mitigation functions. An in-depth look into the current practices suggests that more emphasis is laid on complementarity-i.e., mitigation projects providing adaptation co-benefits and vice versa rather than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from current compartmentalization between mitigation and adaptation to systems thinking at landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.
Systematic Risk Reduction: Chances and Risks of Geological Storage of CO2
NASA Astrophysics Data System (ADS)
Schilling, F. R.; Wuerdemann, H.
2010-12-01
A profound risk assessment should be the basis of any underground activity such as the geological storage of CO2. The risks and benefits should be weighted, whereas the risks need to be systematically reduced. Even after some decades of geological storage of CO2 (as part of a carbon capture and storage CCS), only a few projects are based on an independent risk assessment. In some cases, a risk assessment was performed after the start of storage operation. Chances: - Are there alternatives to CCS with lower risk? - Is a significant CO2 reduction possible without CCS? - If we accept that CO2 emissions are responsible for climate change having a severe economical impact, we need to substantially reduce CO2 emissions. As long as economic growth is directly related to CO2 emissions, we need to decouple the two. - CCS is one of the few options - may be a necessity, if the energy market is not only dependent on demand. Risks: Beside the risk not to develop and implement CCS, the following risks need to be addressed, ideally in a multi independent risk assessment. - Personal Interests - Acceptance - Political interests - Company interests - HSE (Health Safety Environment) - Risk for Climate and ETS - Operational Risks If a multi independent risk assessment is performed and the risks are addressed in a proper way, a significant and systematic risk reduction can be achieved. Some examples will be given, based on real case studies, such as CO2SINK at Ketzin.
Study on the Effect of a Cogeneration System Capacity on its CO2 Emissions
NASA Astrophysics Data System (ADS)
Fonseca, J. G. S., Jr.; Asano, Hitoshi; Fujii, Terushige; Hirasawa, Shigeki
With the global warming problem aggravating and subsequent implementation of the Kyoto Protocol, CO2 emissions are becoming an important factor when verifying the usability of cogeneration systems. Considering this, the purpose of this work is to study the effect of the capacity of a cogeneration system on its CO2 emissions under two kinds of operation strategies: one focused on exergetic efficiency and another on running cost. The system meets the demand pattern typical of a hospital in Japan, operating during one year with an average heat-to-power ratio of 1.3. The main equipments of the cogeneration system are: a gas turbine with waste heat boiler, a main boiler and an auxiliary steam turbine. Each of these equipments was characterized with partial load models, and the turbine efficiencies at full load changed according to the system capacity. Still, it was assumed that eventual surplus of electricity generated could be sold. The main results showed that for any of the capacities simulated, an exergetic efficiency-focused operational strategy always resulted in higher CO2 emissions reduction when compared to the running cost-focused strategy. Furthermore, the amount of reduction in emissions decreased when the system capacity decreased, reaching a value of 1.6% when the system capacity was 33% of the maximum electricity demand with a heat-to-power ratio of 4.1. When the system operated focused on running cost, the economic savings increased with the capacity and reached 42% for a system capacity of 80% of maximum electricity demand and with a heat-to-power ratio of 2.3. In such conditions however, there was an increase in emissions of 8.5%. Still for the same capacity, an exergetic efficiency operation strategy presented the best balance between cost and emissions, generating economic savings of 29% with a decrease in CO2 emissions of 7.1%. The results found showed the importance of an exergy-focused operational strategy and also indicated that lower capacities resulted in lesser gains of both CO2 emissions and running cost reduction.
Alastalo, Mika; Salminen, Leena; Lakanmaa, Riitta-Liisa; Leino-Kilpi, Helena
2017-10-01
The aim of this study was to provide a comprehensive description of multiple skills in patient observation in critical care nursing. Data from semi-structured interviews were analysed using thematic analysis. Experienced critical care nurses (n=20) from three intensive care units in two university hospitals in Finland. Patient observation skills consist of: information gaining skills, information processing skills, decision-making skills and co-operation skills. The first three skills are integrated in the patient observation process, in which gaining information is a prerequisite for processing information that precedes making decisions. Co-operation has a special role as it occurs throughout the process. This study provided a comprehensive description of patient observation skills related to the three-phased patient observation process. The findings contribute to clarifying this part of the competence. The description of patient observation skills may be applied in both clinical practice and education as it may serve as a framework for orientation, ensuring clinical skills and designing learning environments. Based on this study, patient observation skills can be recommended to be included in critical care nursing education, orientation and as a part of critical care nurses' competence evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Single-qubit unitary gates by graph scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumer, Benjamin A.; Underwood, Michael S.; Feder, David L.
2011-12-15
We consider the effects of plane-wave states scattering off finite graphs as an approach to implementing single-qubit unitary operations within the continuous-time quantum walk framework of universal quantum computation. Four semi-infinite tails are attached at arbitrary points of a given graph, representing the input and output registers of a single qubit. For a range of momentum eigenstates, we enumerate all of the graphs with up to n=9 vertices for which the scattering implements a single-qubit gate. As n increases, the number of new unitary operations increases exponentially, and for n>6 the majority correspond to rotations about axes distributed roughly uniformlymore » across the Bloch sphere. Rotations by both rational and irrational multiples of {pi} are found.« less
Linguistic analysis of verbal and non-verbal communication in the operating room.
Moore, Alison; Butt, David; Ellis-Clarke, Jodie; Cartmill, John
2010-12-01
Surgery can be a triumph of co-operation, the procedure evolving as a result of joint action between multiple participants. The communication that mediates the joint action of surgery is conveyed by verbal but particularly by non-verbal signals. Competing priorities superimposed by surgical learning must also be negotiated within this context and this paper draws on techniques of systemic functional linguistics to observe and analyse the flow of information during such a phase of surgery. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.
DOT National Transportation Integrated Search
2000-02-01
A Fuzzy Logic Ramp Metering Algorithm was implemented on 126 ramps in the greater Seattle area. Two multiple-ramp study sites were evaluted by comparing the fuzzy logic controller (FLC) to the other two ramp metering algorithms in operation at those ...
Evaluation and Decentralised Governance: Examples of Inspections in Polycentric Education Systems
ERIC Educational Resources Information Center
Ehren, M. C. M.; Janssens, F. J. G.; Brown, M.; McNamara, G.; O'Hara, J.; Shevlin, P.
2017-01-01
Across Europe schools and other service providers increasingly operate in networks to provide inclusive education or develop and implement more localized school-to-school improvement models. As some education systems move towards more decentralized decision-making where multiple actors have an active role in steering and governing schools, the…
DOT National Transportation Integrated Search
2012-03-01
This report introduces the design and implementation of a Web-based bridge information visual analytics system. This : project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result : combines a GIS ...
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified optimized and scheduled by the compiler eventually resulting in efficient machine code to implement them.
Othman, Faridah; Taghieh, Mahmood
2016-01-01
Optimal operation of water resources in multiple and multipurpose reservoirs is very complicated. This is because of the number of dams, each dam’s location (Series and parallel), conflict in objectives and the stochastic nature of the inflow of water in the system. In this paper, performance optimization of the system of Karun and Dez reservoir dams have been studied and investigated with the purposes of hydroelectric energy generation and providing water demand in 6 dams. On the Karun River, 5 dams have been built in the series arrangements, and the Dez dam has been built parallel to those 5 dams. One of the main achievements in this research is the implementation of the structure of production of hydroelectric energy as a function of matrix in MATLAB software. The results show that the role of objective function structure for generating hydroelectric energy in weighting method algorithm is more important than water supply. Nonetheless by implementing ε- constraint method algorithm, we can both increase hydroelectric power generation and supply around 85% of agricultural and industrial demands. PMID:27248152
The ethics of conducting a co-operative inquiry with vulnerable people.
Tee, Stephen R; Lathlean, Judith A
2004-09-01
Mental health services users have been calling for greater participation in clinical research. Participation in this context means research 'with' rather than 'on' groups of people. Conducting a co-operative inquiry involving the participation of vulnerable individuals as co-researchers, in particular those with a history of mental health problems, places an obligation on researchers to articulate and justify sound ethical procedures. The aim of this paper is to consider how the ethical issues encountered when conducting participative research with vulnerable people can be addressed in the implementation of a co-operative inquiry with users of mental health services. The study was based on personal reflection and a critical review of associated literature obtained from a database search using Boolean logic. The findings, presented under the headings of the four prima facie moral principles, suggest the need for researchers using participative approaches to demonstrate the humanistic attributes required for engaging and working with people over a period of time. These include building and maintaining trusting relationships, assessing competence to participate, managing interpersonal and group dynamics and making complex collaborative decisions about participants' continued participation in a study. When using a co-operative inquiry approach involving vulnerable individuals, researchers need to demonstrate clearly how a balance between autonomy and paternalism will be achieved, how risks will be anticipated and managed and how fairness will be maintained throughout all procedures. Researchers using participative approaches need to have developed a level of personal insight and self-awareness through access to supervision which focuses on sources of unintended manipulation and interpersonal dynamics that may arise at the inception of a study and throughout its course. Researchers and ethics committees have a shared responsibility to ensure that vulnerable people are appropriately engaged to maintain the advancement of user knowledge which informs nursing practice.
NASA Astrophysics Data System (ADS)
Shen, Chien-wen; Chou, Ching-Chih
2010-02-01
As business process re-engineering (BPR) is an important foundation to ensure the success of enterprise systems, this study would like to investigate the relationships among BPR implementation, BPR success factors, and business performance for logistics companies. Our empirical findings show that BPR companies outperformed non-BPR companies, not only on information processing, technology applications, organisational structure, and co-ordination, but also on all of the major logistics operations. Comparing the different perceptions of the success factors for BPR, non-BPR companies place greater emphasis on the importance of employee involvement while BPR companies are more concerned about the influence of risk management. Our findings also suggest that management attitude towards BPR success factors could affect performance with regard to technology applications and logistics operations. Logistics companies which have not yet implemented the BPR approach could refer to our findings to evaluate the advantages of such an undertaking and to take care of those BPR success factors affecting performance before conducting BPR projects.
Observing with Sibling and Twin Telescopes
NASA Astrophysics Data System (ADS)
Plank, Lucia; Lovell, Jim; McCallum, Jamie; Mayer, David
2016-12-01
With the transition to VGOS, co-located radio telescopes will be common at many sites. This can be as a sibling telescope when a VGOS antenna is built next to a legacy one, or as the concept of a twin telescope with two identical VGOS antennas. The co-location of two antennas offers new possibilities in both operation and analysis. The immediate question for observing with sibling/twin telescopes is the applied observing strategy and its realization in the scheduling software. In this contribution we report about our efforts implementing new scheduling modes for sibling and twin telescopes in the Vienna VLBI Software. For the example of the sibling telescope in Hobart, several types of sessions will be discussed: an improved tag-along mode for the 26-m antenna (Ho), a proper implementation of the twin-mode using the antenna with the shorter slewing time, and an astrometric support mode enabling the observation of weak sources with the AuScope array.
2015-01-01
Implementing parallel and multivalued logic operations at the molecular scale has the potential to improve the miniaturization and efficiency of a new generation of nanoscale computing devices. Two-dimensional photon-echo spectroscopy is capable of resolving dynamical pathways on electronic and vibrational molecular states. We experimentally demonstrate the implementation of molecular decision trees, logic operations where all possible values of inputs are processed in parallel and the outputs are read simultaneously, by probing the laser-induced dynamics of populations and coherences in a rhodamine dye mounted on a short DNA duplex. The inputs are provided by the bilinear interactions between the molecule and the laser pulses, and the output values are read from the two-dimensional molecular response at specific frequencies. Our results highlights how ultrafast dynamics between multiple molecular states induced by light–matter interactions can be used as an advantage for performing complex logic operations in parallel, operations that are faster than electrical switching. PMID:25984269
Portable real-time color night vision
NASA Astrophysics Data System (ADS)
Toet, Alexander; Hogervorst, Maarten A.
2008-03-01
We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual band realtime night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup table transform. The resulting colorised video streams can be displayed in realtime on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
Combining real-time monitoring and knowledge-based analysis in MARVEL
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M.; Quan, A. G.; Angelino, R.; Veregge, J. R.
1993-01-01
Real-time artificial intelligence is gaining increasing attention for applications in which conventional software methods are unable to meet technology needs. One such application area is the monitoring and analysis of complex systems. MARVEL, a distributed monitoring and analysis tool with multiple expert systems, was developed and successfully applied to the automation of interplanetary spacecraft operations at NASA's Jet Propulsion Laboratory. MARVEL implementation and verification approaches, the MARVEL architecture, and the specific benefits that were realized by using MARVEL in operations are described.
Power control apparatus and methods for electric vehicles
Gadh, Rajit; Chung, Ching-Yen; Chu, Chi-Cheng; Qiu, Li
2016-03-22
Electric vehicle (EV) charging apparatus and methods are described which allow the sharing of charge current between multiple vehicles connected to a single source of charging energy. In addition, this charge sharing can be performed in a grid-friendly manner by lowering current supplied to EVs when necessary in order to satisfy the needs of the grid, or building operator. The apparatus and methods can be integrated into charging stations or can be implemented with a middle-man approach in which a multiple EV charging box, which includes an EV emulator and multiple pilot signal generation circuits, is coupled to a single EV charge station.
Implementation of a Digital Signal Processing Subsystem for a Long Wavelength Array Station
NASA Technical Reports Server (NTRS)
Soriano, Melissa; Navarro, Robert; D'Addario, Larry; Sigman, Elliott; Wang, Douglas
2011-01-01
This paper describes the implementation of a Digital Signal Processing (DP) subsystem for a single Long Wavelength Array (LWA) station.12 The LWA is a radio telescope that will consist of many phased array stations. Each LWA station consists of 256 pairs of dipole-like antennas operating over the 10-88 MHz frequency range. The Digital Signal Processing subsystem digitizes up to 260 dual-polarization signals at 196 MHz from the LWA Analog Receiver, adjusts the delay and amplitude of each signal, and forms four independent beams. Coarse delay is implemented using a first-in-first-out buffer and fine delay is implemented using a finite impulse response filter. Amplitude adjustment and polarization corrections are implemented using a 2x2 matrix multiplication
Jaworski, Piotr; Nikodem, Michał
2018-01-01
In this paper, we present a system for sequential detection of multiple gases using laser-based wavelength modulation spectroscopy (WMS) method combined with a Herriot-type multi-pass cell. Concentration of hydrogen sulfide (H2S), methane (CH4), carbon dioxide (CO2), and ammonia (NH3) are retrieved using three distributed feedback laser diodes operating at 1574.5 nm (H2S and CO2), 1651 nm (CH4), and 1531 nm (NH3). Careful adjustment of system parameters allows for H2S sensing at single parts-per-million by volume (ppmv) level with strongly reduced interference from adjacent CO2 transitions even at atmospheric pressure. System characterization in laboratory conditions is presented and the results from initial tests in real-world application are demonstrated. PMID:29425175
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Richard S.; Levine, Jonathan S.; Bielicki, Jeffrey M.
CO 2 capture, utilization, and storage (CCUS) technology has yet to be widely deployed at a commercial scale despite multiple high-profile demonstration projects. We suggest that developing a large-scale, visible, and financially viable CCUS network could potentially overcome many barriers to deployment and jumpstart commercial-scale CCUS. To date, substantial effort has focused on technology development to reduce the costs of CO 2 capture from coal-fired power plants. Here, we propose that near-term investment could focus on implementing CO 2 capture on facilities that produce high-value chemicals/products. These facilities can absorb the expected impact of the marginal increase in the costmore » of production on the price of their product, due to the addition of CO 2 capture, more than coal-fired power plants. A financially viable demonstration of a large-scale CCUS network requires offsetting the costs of CO 2 capture by using the CO 2 as an input to the production of market-viable products. As a result, we demonstrate this alternative development path with the example of an integrated CCUS system where CO 2 is captured from ethylene producers and used for enhanced oil recovery in the U.S. Gulf Coast region.« less
PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan
PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors permore » realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2{sup 17} processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO{sub 2} sequestration in deep geologic formations.« less
TRICCS: A proposed teleoperator/robot integrated command and control system for space applications
NASA Technical Reports Server (NTRS)
Will, R. W.
1985-01-01
Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debug, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.
Feller, Etty
2008-01-01
Laboratories with a quality system accredited to the ISO/IEC 17025 standard have a definite advantage, compared to non-accredited laboratories, when preparing their facilities for the implementation of the principles of good laboratory practice (GLP) of the Organisation for Economic Co-operation and Development (OECD). Accredited laboratories have an established quality system covering the administrative and technical issues specified in the standard. The similarities and differences between the ISO/IEC 17025 standard and the OECD principles of GLP are compared and discussed.
NASA Astrophysics Data System (ADS)
Versteeg, R.; Leger, E.; Dafflon, B.
2016-12-01
Geologic sequestration of CO2 is one of the primary proposed approaches for reducing total atmospheric CO2 concentrations. MVAA (Monitoring, Verification, Accounting and Assessment) of CO2 sequestration is an essential part of the geologic CO2 sequestration cycle. MVAA activities need to meet multiple operational, regulatory and environmental objectives, including ensuring the protection of underground sources of drinking water. Anticipated negative consequences of CO2 leakage into groundwater, besides possible brine contamination and release of gaseous CO2, include a significant increase of dissolved CO2 into shallow groundwater systems, which will decrease groundwater pH and can potentially mobilize naturally occurring trace metals and ions that are commonly absorbed to or contained in sediments. Autonomous electrical geophysical monitoring in aquifers has the potential of allowing for rapid and automated detection of CO2 leakage. However, while the feasibility of such monitoring has been demonstrated by a number of different field experiments, automated interpretation of complex electrical resistivity data requires the development of quantitative relationships between complex electrical resistivity signatures and dissolved CO2 in the aquifer resulting from leakage Under a DOE SBIR funded effort we performed multiple tank scale experiments in which we investigated complex electrical resistivity signatures associated with dissolved CO2 plumes in saturated sediments. We also investigated the feasibility of distinguishing CO2 leakage signatures from signatures associated with other processes such as salt water movement, temperature variations and other variations in chemical or physical conditions. In addition to these experiments we also numerically modeled the tank experiments. These experiments showed that (a) we can distinguish CO2 leakage signatures from other signatures, (b) CO2 leakage signatures have a consistent characteristic, (c) laboratory experiments are in agreement with field results, and (d) we can numerically simulate the main characteristics of CO2 leakage and associated electrical geophysical signatures.
NASA Astrophysics Data System (ADS)
Obland, M. D.; Nehrir, A. R.; Liu, Z.; Chen, S.; Campbell, J. F.; Lin, B.; Kooi, S. A.; Fan, T. F.; Choi, Y.; Plant, J.; Yang, M. M.; Browell, E. V.; Harrison, F. W.; Meadows, B.; Dobler, J. T.; Zaccheo, T. S.
2015-12-01
This work describes advances in critical lidar technologies and techniques developed as part of the ASCENDS CarbonHawk Experiment Simulator (ACES) system for measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The ACES design demonstrates advancements in: (1) enhanced power-aperture product through the use and operation of multiple co-aligned laser transmitters and a multi-aperture telescope design; (2) high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) high-bandwidth, low-noise HgCdTe detector and transimpedence amplifier (TIA) subsystem capable of long-duration operation; and (4) advanced algorithms for cloud and aerosol discrimination. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. Specifically, the lidar simultaneously transmits three IM-CW laser beams from the high power EDFAs operating near 1571 nm. The outgoing laser beams are aligned to the field of view of three fiber-coupled 17.8-cm diameter telescopes, and the backscattered light collected by the same three telescopes is sent to the detector/TIA subsystem, which has a bandwidth of 4.9 MHz and operates service-free with a tactical Dewar and cryocooler. The electronic bandwidth is only slightly higher than 1 MHz, effectively limiting the noise level. Two key laser modulation approaches are being tested to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. This work provides an over view of these technologies, the modulation approaches, and results from recent test flights.
NASA Astrophysics Data System (ADS)
Obland, M. D.; Liu, Z.; Campbell, J. F.; Lin, B.; Kooi, S. A.; Carrion, W.; Hicks, J.; Fan, T. F.; Nehrir, A. R.; Browell, E. V.; Meadows, B.; Davis, K. J.
2016-12-01
This work describes advances in critical lidar technologies and techniques developed as part of the ASCENDS CarbonHawk Experiment Simulator (ACES) system for measuring atmospheric column carbon dioxide (CO2) mixing ratios in support of the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. The ACES design demonstrates advancements in: (1) enhanced power-aperture product through the use and operation of multiple co-aligned laser transmitters and a multi-aperture telescope design; (2) high-efficiency, high-power Erbium-Doped Fiber Amplifiers (EDFAs); (3) high-bandwidth, low-noise HgCdTe detector and transimpedence amplifier (TIA) subsystem capable of long-duration operation; and (4) advanced algorithms for cloud and aerosol discrimination. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. Specifically, the lidar simultaneously transmits three IM-CW laser beams from the high power EDFAs operating near 1571 nm. The outgoing laser beams are aligned to the field of view of three fiber-coupled 17.8-cm diameter telescopes, and the backscattered light collected by the same three telescopes is sent to the detector/TIA subsystem, which has a bandwidth of 4.9 MHz and operates service-free with a tactical Dewar and cryocooler. The electronic bandwidth is only slightly higher than 1 MHz, effectively limiting the noise level. Two key laser modulation approaches are being tested to significantly mitigate the effects of thin clouds on the retrieved CO2 column amounts. This work provides an over view of these technologies, the modulation approaches, and results from recent test flights during the Atmospheric Carbon and Transport - America (ACT-America) Earth Venture Suborbital flight campaign.
Large clusters of co-expressed genes in the Drosophila genome.
Boutanaev, Alexander M; Kalmykova, Alla I; Shevelyov, Yuri Y; Nurminsky, Dmitry I
2002-12-12
Clustering of co-expressed, non-homologous genes on chromosomes implies their co-regulation. In lower eukaryotes, co-expressed genes are often found in pairs. Clustering of genes that share aspects of transcriptional regulation has also been reported in higher eukaryotes. To advance our understanding of the mode of coordinated gene regulation in multicellular organisms, we performed a genome-wide analysis of the chromosomal distribution of co-expressed genes in Drosophila. We identified a total of 1,661 testes-specific genes, one-third of which are clustered on chromosomes. The number of clusters of three or more genes is much higher than expected by chance. We observed a similar trend for genes upregulated in the embryo and in the adult head, although the expression pattern of individual genes cannot be predicted on the basis of chromosomal position alone. Our data suggest that the prevalent mechanism of transcriptional co-regulation in higher eukaryotes operates with extensive chromatin domains that comprise multiple genes.
Feasibility study of algae-based Carbon Dioxide capture ...
SUMMARY: The biomass of microalgae contains approximately 50% carbon, which is commonly obtained from the atmosphere, but can also be taken from commercial sources that produce CO2, such as coal-fired power plants. A study of operational demonstration projects is being undertaken to evaluate the benefits of using algae to reduce CO2 emissions from industrial and small-scale utility power boilers. The operations are being studied for the use of CO2 from flue gas for algae growth along with the production of biofuels and other useful products to prepare a comprehensive characterization of the economic feasibility of using algae to capture CO2. Information is being generated for analyses of the potential for these technologies to advance in the market and assist in meeting environmental goals, as well as to examine their associated environmental implications. Three electric power generation plants (coal and fuel oil fired) equipped to send flue-gas emissions to algae culture at demonstration facilities are being studied. Data and process information are being collected and developed to facilitate feasibility and modeling evaluations of the CO2 to algae technology. An understanding of process requirements to apply this technology to existing industries would go far in advancing carbon capture opportunities. Documenting the successful use of this technology could help bring “low-tech”, low-cost, CO2 to algae, carbon capture to multiple size industries and
A Multiprocessor Operating System Simulator
NASA Technical Reports Server (NTRS)
Johnston, Gary M.; Campbell, Roy H.
1988-01-01
This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low price forces manufacturer to seek effective schedules to improve coordination of multiple resources and to reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc. whose manufacturing is characterized by the need to simultaneously consider machines and operators while an operator may attend several operations at the same time, and the presence of machines requiring significant setup times. The scheduling problem with these characteristics are typical for many manufacturers, very difficult to be handled, and have not been adequately addressed in the literature. In this study, both machine and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scraps. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory and small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by using dynamic programming with additional penalties embedded in state transitions. Heuristics is then developed to obtain a feasible schedule following on our previous work with new mechanism to satisfy operator capacity constraints. The method has been implemented using the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that the method generates high quality schedules in a timely fashion. Through simultaneous consideration of machines and operators, machines and operators are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties let parts with same setup requirements clustered together to avoid excessive setups.
JPL Space Telecommunications Radio System Operating Environment
NASA Technical Reports Server (NTRS)
Lux, James P.; Lang, Minh; Peters, Kenneth J.; Taylor, Gregory H.; Duncan, Courtney B.; Orozco, David S.; Stern, Ryan A.; Ahten, Earl R.; Girard, Mike
2013-01-01
A flight-qualified implementation of a Software Defined Radio (SDR) Operating Environment for the JPL-SDR built for the CoNNeCT Project has been developed. It is compliant with the NASA Space Telecommunications Radio System (STRS) Architecture Standard, and provides the software infrastructure for STRS compliant waveform applications. This software provides a standards-compliant abstracted view of the JPL-SDR hardware platform. It uses industry standard POSIX interfaces for most functions, as well as exposing the STRS API (Application Programming In terface) required by the standard. This software includes a standardized interface for IP components instantiated within a Xilinx FPGA (Field Programmable Gate Array). The software provides a standardized abstracted interface to platform resources such as data converters, file system, etc., which can be used by STRS standards conformant waveform applications. It provides a generic SDR operating environment with a much smaller resource footprint than similar products such as SCA (Software Communications Architecture) compliant implementations, or the DoD Joint Tactical Radio Systems (JTRS).
Erichsen Andersson, Annette; Frödin, Maria; Dellenborg, Lisen; Wallin, Lars; Hök, Jesper; Gillespie, Brigid M; Wikström, Ewa
2018-01-04
Hand hygiene and aseptic techniques are essential preventives in combating hospital-acquired infections. However, implementation of these strategies in the operating room remains suboptimal. There is a paucity of intervention studies providing detailed information on effective methods for change. This study aimed to evaluate the process of implementing a theory-driven knowledge translation program for improved use of hand hygiene and aseptic techniques in the operating room. The study was set in an operating department of a university hospital. The intervention was underpinned by theories on organizational learning, culture and person centeredness. Qualitative process data were collected via participant observations and analyzed using a thematic approach. Doubts that hand-hygiene practices are effective in preventing hospital acquired infections, strong boundaries and distrust between professional groups and a lack of psychological safety were identified as barriers towards change. Facilitated interprofessional dialogue and learning in "safe spaces" worked as mechanisms for motivation and engagement. Allowing for the free expression of different opinions, doubts and viewing resistance as a natural part of any change was effective in engaging all professional categories in co-creation of clinical relevant solutions to improve hand hygiene. Enabling nurses and physicians to think and talk differently about hospital acquired infections and hand hygiene requires a shift from the concept of one-way directed compliance towards change and learning as the result of a participatory and meaning-making process. The present study is a part of the Safe Hands project, and is registered with ClinicalTrials.gov (ID: NCT02983136 ). Date of registration 2016/11/28, retrospectively registered.
Miniature EVA Software Defined Radio
NASA Technical Reports Server (NTRS)
Pozhidaev, Aleksey
2012-01-01
As NASA embarks upon developing the Next-Generation Extra Vehicular Activity (EVA) Radio for deep space exploration, the demands on EVA battery life will substantially increase. The number of modes and frequency bands required will continue to grow in order to enable efficient and complex multi-mode operations including communications, navigation, and tracking applications. Whether conducting astronaut excursions, communicating to soldiers, or first responders responding to emergency hazards, NASA has developed an innovative, affordable, miniaturized, power-efficient software defined radio that offers unprecedented power-efficient flexibility. This lightweight, programmable, S-band, multi-service, frequency- agile EVA software defined radio (SDR) supports data, telemetry, voice, and both standard and high-definition video. Features include a modular design, an easily scalable architecture, and the EVA SDR allows for both stationary and mobile battery powered handheld operations. Currently, the radio is equipped with an S-band RF section. However, its scalable architecture can accommodate multiple RF sections simultaneously to cover multiple frequency bands. The EVA SDR also supports multiple network protocols. It currently implements a Hybrid Mesh Network based on the 802.11s open standard protocol. The radio targets RF channel data rates up to 20 Mbps and can be equipped with a real-time operating system (RTOS) that can be switched off for power-aware applications. The EVA SDR's modular design permits implementation of the same hardware at all Network Nodes concept. This approach assures the portability of the same software into any radio in the system. It also brings several benefits to the entire system including reducing system maintenance, system complexity, and development cost.
ERIC Educational Resources Information Center
Weifang, Min
This case study on the experience of the University of Peking, China, in inter-university cooperation describes the process of identifying appropriate partner institutions and implementing collaborative programs with them. It also highlights a number of lessons for those managing inter-university cooperation and shows how such initiatives can be…
ERIC Educational Resources Information Center
Bengtsson, Jarl
2013-01-01
In this article, the late Jarl Bengtsson briefly traces the evolution of the concept of lifelong learning within the member states of the Organisation for Economic Co-operation and Development (OECD). He points out that on the one hand lifelong learning is accepted, in policy terms, by all OECD countries and many other countries, but on the other…
NASA Astrophysics Data System (ADS)
Brown, Michelle Cetner
In recent years, Science, Technology, Engineering, and Mathematics (STEM) education has become a significant focus of numerous theoretical and commentary articles as researchers have advocated for active and conceptually integrated learning in classrooms. Drawing connections between previously isolated subjects, especially mathematics and science, has been shown to increase student engagement, performance, and critical thinking skills. However, obstacles exist to the widespread implementation of integrated curricula in schools, such as teacher knowledge and school structure and culture. The Interdisciplinary Co-planning Team (ICT) model, in which teachers of different subjects come together regularly to discuss connections between content and to plan larger interdisciplinary activities and smaller examples and discussion points, offers a method for teachers to create sustainable interdisciplinary experiences for students within the bounds of the current school structure. The ICT model is designed to be an iterative, flexible model, providing teachers with both a regular time to come together as "experts" and "teach" each other important concepts from their separate disciplines, and then to bring their shared knowledge and language back to their own classrooms to implement with their students in ways that fit their individual classes. In this multiple-case study, which aims to describe the nature of the co-planning process, the nature of plans, and changes in teacher beliefs as a result of co-planning, three pairs of secondary mathematics and science teachers participated in a 10-week intervention with the ICT model. Each pair constituted one case. Data included observations, interviews, and artifact collection. All interviews, whole-group sessions, and co-planning sessions were transcribed and coded using both theory-based and data-based codes. Finally, a cross-case comparison was used to present similarities and differences across cases. Findings suggest that the ICT model can be implemented with pairs of mathematics and science teachers to create a sustainable way to share experience and expertise, and to create powerful interdisciplinary experiences for their students. In addition, there is evidence that participation with the ICT model positively influences teacher beliefs about the nature of mathematics and science, about teaching and learning, and about interdisciplinary connections. These findings seem to hold across grades, school type, and personal experience. Future implementation of the ICT model on a larger scale is recommended to continue to observe the effects on teachers and students.
Portable Automation of Static Chamber Sample Collection for Quantifying Soil Gas Flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Morgan P.; Groh, Tyler A.; Parkin, Timothy B.
Quantification of soil gas flux using the static chamber method is labor intensive. The number of chambers that can be sampled is limited by the spacing between chambers and the availability of trained research technicians. An automated system for collecting gas samples from chambers in the field would eliminate the need for personnel to return to the chamber during a flux measurement period and would allow a single technician to sample multiple chambers simultaneously. This study describes Chamber Automated Sampling Equipment (FluxCASE) to collect and store chamber headspace gas samples at assigned time points for the measurement of soil gasmore » flux. The FluxCASE design and operation is described, and the accuracy and precision of the FluxCASE system is evaluated. In laboratory measurements of nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4) concentrations of a standardized gas mixture, coefficients of variation associated with automated and manual sample collection were comparable, indicating no loss of precision. In the field, soil gas fluxes measured from FluxCASEs were in agreement with manual sampling for both N2O and CO2. Slopes of regression equations were 1.01 for CO2 and 0.97 for N2O. The 95% confidence limits of the slopes of the regression lines included the value of one, indicating no bias. Additionally, an expense analysis found a cost recovery ranging from 0.6 to 2.2 yr. Implementing the FluxCASE system is an alternative to improve the efficiency of the static chamber method for measuring soil gas flux while maintaining the accuracy and precision of manual sampling.« less
Tian, Wei; Han, Xu; Zuo, Wangda; ...
2018-01-31
This paper presents a comprehensive review of the open literature on motivations, methods and applications of linking stratified airflow simulation to building energy simulation (BES). First, we reviewed the motivations for coupling prediction models for building energy and indoor environment. This review classified various exchanged data in different applications as interface data and state data, and found that choosing different data sets may lead to varying performance of stability, convergence, and speed for the co-simulation. Second, our review shows that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme. The external couplingmore » is shown to be generally faster in computational speed, as well as easier to implement, maintain and expand than the internal coupling. Third, the external coupling can be carried out in different data synchronization schemes, including static coupling and dynamic coupling. In comparison, the static coupling that performs data exchange only once is computationally faster and more stable than the dynamic coupling. However, concerning accuracy, the dynamic coupling that requires multiple times of data exchange is more accurate than the static coupling. Furthermore, the review identified that the implementation of the external coupling can be achieved through customized interfaces, middleware, and standard interfaces. The customized interface is straightforward but may be limited to a specific coupling application. The middleware is versatile and user-friendly but usually limited in data synchronization schemes. The standard interface is versatile and promising, but may be difficult to implement. Current applications of the co-simulation are mainly energy performance evaluation and control studies. Finally, we discussed the limitations of the current research and provided an overview for future research.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Wei; Han, Xu; Zuo, Wangda
This paper presents a comprehensive review of the open literature on motivations, methods and applications of linking stratified airflow simulation to building energy simulation (BES). First, we reviewed the motivations for coupling prediction models for building energy and indoor environment. This review classified various exchanged data in different applications as interface data and state data, and found that choosing different data sets may lead to varying performance of stability, convergence, and speed for the co-simulation. Second, our review shows that an external coupling scheme is substantially more popular in implementations of co-simulation than an internal coupling scheme. The external couplingmore » is shown to be generally faster in computational speed, as well as easier to implement, maintain and expand than the internal coupling. Third, the external coupling can be carried out in different data synchronization schemes, including static coupling and dynamic coupling. In comparison, the static coupling that performs data exchange only once is computationally faster and more stable than the dynamic coupling. However, concerning accuracy, the dynamic coupling that requires multiple times of data exchange is more accurate than the static coupling. Furthermore, the review identified that the implementation of the external coupling can be achieved through customized interfaces, middleware, and standard interfaces. The customized interface is straightforward but may be limited to a specific coupling application. The middleware is versatile and user-friendly but usually limited in data synchronization schemes. The standard interface is versatile and promising, but may be difficult to implement. Current applications of the co-simulation are mainly energy performance evaluation and control studies. Finally, we discussed the limitations of the current research and provided an overview for future research.« less
A test harness for accelerating physics parameterization advancements into operations
NASA Astrophysics Data System (ADS)
Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.
2017-12-01
The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the ``tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.
An efficient quantum circuit analyser on qubits and qudits
NASA Astrophysics Data System (ADS)
Loke, T.; Wang, J. B.
2011-10-01
This paper presents a highly efficient decomposition scheme and its associated Mathematica notebook for the analysis of complicated quantum circuits comprised of single/multiple qubit and qudit quantum gates. In particular, this scheme reduces the evaluation of multiple unitary gate operations with many conditionals to just two matrix additions, regardless of the number of conditionals or gate dimensions. This improves significantly the capability of a quantum circuit analyser implemented in a classical computer. This is also the first efficient quantum circuit analyser to include qudit quantum logic gates.
USDA-ARS?s Scientific Manuscript database
Tracing heavy stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes; from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13C with 15N, 18O o...
Standards for space automation and robotics
NASA Technical Reports Server (NTRS)
Kader, Jac B.; Loftin, R. B.
1992-01-01
The AIAA's Committee on Standards for Space Automation and Robotics (COS/SAR) is charged with the identification of key functions and critical technologies applicable to multiple missions that reflect fundamental consideration of environmental factors. COS/SAR's standards/practices/guidelines implementation methods will be based on reliability, performance, and operations, as well as economic viability and life-cycle costs, simplicity, and modularity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohaghegh, Shahab D.
apability of underground carbon dioxide storage to confine and sustain injected CO2 for a very long time is the main concern for geologic CO2 sequestration. If a leakage from a geological CO2 sequestration site occurs, it is crucial to find the approximate amount and the location of the leak in order to implement proper remediation activity. An overwhelming majority of research and development for storage site monitoring has been concentrated on atmospheric, surface or near surface monitoring of the sequestered CO2. This study aims to monitor the integrity of CO2 storage at the reservoir level. This work proposes developing in-situmore » CO2 Monitoring and Verification technology based on the implementation of Permanent Down-hole Gauges (PDG) or Smart Wells along with Artificial Intelligence and Data Mining (AI&DM). The technology attempts to identify the characteristics of the CO2 leakage by de-convolving the pressure signals collected from Permanent Down-hole Gauges (PDG). Citronelle field, a saline aquifer reservoir, located in the U.S. was considered for this study. A reservoir simulation model for CO2 sequestration in the Citronelle field was developed and history matched. The presence of the PDGs were considered in the reservoir model at the injection well and an observation well. High frequency pressure data from sensors were collected based on different synthetic CO2 leakage scenarios in the model. Due to complexity of the pressure signal behaviors, a Machine Learning-based technology was introduced to build an Intelligent Leakage Detection System (ILDS). The ILDS was able to detect leakage characteristics in a short period of time (less than a day) demonstrating the capability of the system in quantifying leakage characteristics subject to complex rate behaviors. The performance of ILDS was examined under different conditions such as multiple well leakages, cap rock leakage, availability of an additional monitoring well, presence of pressure drift and noise in the pressure sensor and uncertainty in the reservoir model.« less
NASA Astrophysics Data System (ADS)
Ye, X.; Lauvaux, T.; Kort, E. A.; Lin, J. C.; Oda, T.; Yang, E.; Wu, D.
2016-12-01
Rapid economic development has given rise to a steady increase of global carbon emissions, which have accumulated in the atmosphere for the past 200 years. Urbanization has concentrated about 70% of the global fossil-fuel CO2 emissions in large metropolitan areas distributed around the world, which represents the most significant anthropogenic contribution to climate change. However, highly uncertain quantifications of urban CO2 emissions are commonplace for numerous cities because of poorly-documented inventories of energy consumption. Therefore, accurate estimates of carbon emissions from global observing systems are a necessity if mitigation strategies are meant to be implemented at global scales. Space-based observations of total column averaged CO2 concentration (XCO2) provide a very promising and powerful tool to quantify urban CO2 fluxes. For the first time, measurements from the Orbiting Carbon Observatory 2 (OCO-2) mission are assimilated in a high resolution inverse modeling system to quantify fossil-fuel CO2 emissions of multiple cities around the globe. The Open-source Data Inventory for Anthropogenic CO2 (ODIAC) emission inventory is employed as a first guess, while the atmospheric transport is simulated using the WRF-Chem model at 1-km resolution. Emission detection and quantification is performed with an Ensemble Kalman Filter method. We demonstrate here the potential of the inverse approach for assimilating thousands of OCO-2 retrievals along tracks near metropolitan areas. We present the detection potential of the system with real-case applications near power plants and present inverse emissions using actual OCO-2 measurements on various urban landscapes. Finally, we will discuss the potential of OCO-2-like satellite instruments for monitoring temporal variations of fossil-fuel CO2 emissions over multiple years, which can provide valuable insights for future satellite observation strategies.
FPGA-based coprocessor for matrix algorithms implementation
NASA Astrophysics Data System (ADS)
Amira, Abbes; Bensaali, Faycal
2003-03-01
Matrix algorithms are important in many types of applications including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these, and related, applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication which is of O (N3) on a sequential computer and O (N3/p) on a parallel system with p processors complexity. This paper presents an investigation into the design and implementation of different matrix algorithms such as matrix operations, matrix transforms and matrix decompositions using an FPGA based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable, modular and require less area and time complexity with reduced latency when compared with existing structures.
NASA Astrophysics Data System (ADS)
Boroushaki, Soheil; Malczewski, Jacek
2008-04-01
This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator. The nature of the AHP_OWA depends on some parameters, which are expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis.
Middleton, Richard S.; Levine, Jonathan S.; Bielicki, Jeffrey M.; ...
2015-04-27
CO 2 capture, utilization, and storage (CCUS) technology has yet to be widely deployed at a commercial scale despite multiple high-profile demonstration projects. We suggest that developing a large-scale, visible, and financially viable CCUS network could potentially overcome many barriers to deployment and jumpstart commercial-scale CCUS. To date, substantial effort has focused on technology development to reduce the costs of CO 2 capture from coal-fired power plants. Here, we propose that near-term investment could focus on implementing CO 2 capture on facilities that produce high-value chemicals/products. These facilities can absorb the expected impact of the marginal increase in the costmore » of production on the price of their product, due to the addition of CO 2 capture, more than coal-fired power plants. A financially viable demonstration of a large-scale CCUS network requires offsetting the costs of CO 2 capture by using the CO 2 as an input to the production of market-viable products. As a result, we demonstrate this alternative development path with the example of an integrated CCUS system where CO 2 is captured from ethylene producers and used for enhanced oil recovery in the U.S. Gulf Coast region.« less
Embracing value co-creation in primary care services research: a framework for success.
Janamian, Tina; Crossland, Lisa; Jackson, Claire L
2016-04-18
Value co-creation redresses a key criticism of researcher-driven approaches to research - that researchers may lack insight into the end users' needs and values across the research journey. Value co-creation creates, in a step-wise way, value with, and for, multiple stakeholders through regular, ongoing interactions leading to innovation, increased productivity and co-created outcomes of value to all parties - thus creating a "win more-win more" environment. The Centre of Research Excellence (CRE) in Building Primary Care Quality, Performance and Sustainability has co-created outcomes of value that have included robust and enduring partnerships, research findings that have value to end users (such as the Primary Care Practice Improvement Tool and the best-practice governance framework), an International Implementation Research Network in Primary Care and the International Primary Health Reform Conference. Key lessons learned in applying the strategies of value co-creation have included the recognition that partnership development requires an investment of time and effort to ensure meaningful interactions and enriched end user experiences, that research management systems including governance, leadership and communication also need to be "co-creative", and that openness and understanding is needed to work across different sectors and cultures with flexibility, fairness and transparency being essential to the value co-creation process.
Treatment Planning and Image Guidance for Radiofrequency Ablations of Large Tumors
Ren, Hongliang; Campos-Nanez, Enrique; Yaniv, Ziv; Banovac, Filip; Abeledo, Hernan; Hata, Nobuhiko; Cleary, Kevin
2014-01-01
This article addresses the two key challenges in computer-assisted percutaneous tumor ablation: planning multiple overlapping ablations for large tumors while avoiding critical structures, and executing the prescribed plan. Towards semi-automatic treatment planning for image-guided surgical interventions, we develop a systematic approach to the needle-based ablation placement task, ranging from pre-operative planning algorithms to an intra-operative execution platform. The planning system incorporates clinical constraints on ablations and trajectories using a multiple objective optimization formulation, which consists of optimal path selection and ablation coverage optimization based on integer programming. The system implementation is presented and validated in phantom studies and on an animal model. The presented system can potentially be further extended for other ablation techniques such as cryotherapy. PMID:24235279
Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.
Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo
2016-07-19
Computing alignments between two or more sequences are common operations frequently performed in computational molecular biology. The continuing growth of biological sequence databases establishes the need for their efficient parallel implementation on modern accelerators. This paper presents new approaches to high performance biological sequence database scanning with the Smith-Waterman algorithm and the first stage of progressive multiple sequence alignment based on the ClustalW heuristic on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture; i.e. cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive in comparison to optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi .
Unified Simulation and Analysis Framework for Deep Space Navigation Design
NASA Technical Reports Server (NTRS)
Anzalone, Evan; Chuang, Jason; Olsen, Carrie
2013-01-01
As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continues to grow, there is a clear need for a modular expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements, in order to determine its effect on the sizing of the integrated vehicle. The development for such a framework is built upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and possible state measurements and observations to feed into the simulation implementation structure. These models also allow a common environment for the capture of an increasingly complex operational architecture, involving multiple spacecraft, ground stations, and communication networks. In order to address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions amongst spacecraft. This paper describes the development of this framework, and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.
NASA Astrophysics Data System (ADS)
Zolesi, Bruno; Cander, Ljiljana R.
2018-05-01
This paper consists of a review of the important contributions of four COST (European Co-operation in Science and Technology) Actions in the period 1991-2009 to terrestrial ionospheric research, with applications in modern communication and navigation systems. Within this context, new ionospheric studies were initiated, leading to the development of a number of models, algorithms for prediction, forecasting, and real-time specification, as well as numerical programs. These were successfully implemented in different collaborative projects within EU instruments, promoting co-operation between scientists and researchers across Europe. A further outcome was to bring together more than a hundred researchers from around 40 scientific institutions, agencies, and academia in about 25 countries worldwide. They collaborated with enthusiasm in research, as briefly described in this paper, forming a lively ionospheric community and presenting a strong intellectual response to the rapidly growing contemporary challenge of space weather research.
Chekov, Iu F
2009-01-01
The author describes a zeolite system for carbon dioxide removal integrated into a closed air regeneration cycle aboard spacecraft. The continuous operation of a double-adsorbent regeneration system with pCO2-dependable productivity is maintained through programmable setting of adsorption (desorption) semicycle time. The optimal system regulation curve is presented within the space of statistical performance family obtained in quasi-steady operating modes with controlled parameters of the recurrent adsorption-desorption cycle. The automatically changing system productivity ensures continuous intake of concentrated CO2. Control of the adsorption-desorption process is based on calculation of the differential adsorption (desorption) heat from gradient of adsorbent and test inert substance temperatures. The adaptive algorithm of digital control is implemented through the standard spacecraft interface with the board computer system and programmable microprocessor-based controllers.
ASIC implementation of recursive scaled discrete cosine transform algorithm
NASA Astrophysics Data System (ADS)
On, Bill N.; Narasimhan, Sam; Huang, Victor K.
1994-05-01
A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. When the VHDL simulation has been satisfactorily completed, the design is synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flows through each stage, there are various adders and multipliers to transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices. The design is operational. The combination of fewer multiplications required and pipelined architecture give Hou's Recursive Scaled DCT good potential of achieving high performance at a low cost in using Very Large Scale Integration implementation.
SPRUCE Whole Ecosystems Warming (WEW) Environmental Data Beginning August 2015
Hanson, P. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Riggs, J. S. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Nettles, W. R. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Krassovski, M. B. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Hook, L. A. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.
2016-01-01
This data set provides the environmental measurements collected during the implementation of operational methods to achieve both deep soil heating (0-3 m) and whole-ecosystem warming (WEW) appropriate to the scale of tall-stature, high-carbon, boreal forest peatlands. The methods were developed to allow scientists to provide a plausible set of ecosystem warming scenarios within which immediate and longer term (one decade) responses of organisms (microbes to trees) and ecosystem functions (carbon, water and nutrient cycles) could be measured. Elevated CO2 was also incorporated to test how temperature responses may be modified by atmospheric CO2 effects on carbon cycle processes.
SKIRT: Hybrid parallelization of radiative transfer simulations
NASA Astrophysics Data System (ADS)
Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.
2017-07-01
We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
Improved repetition rate mixed isotope CO2 TEA laser
NASA Astrophysics Data System (ADS)
Cohn, D. B.
2014-09-01
A compact CO2 TEA laser has been developed for remote chemical detection that operates at a repetition rate of 250 Hz. It emits 700 mJ/pulse at 10.6 μm in a multimode beam with the 12C16O2 isotope. With mixed 12C16O2 plus 13C16O2 isotopes it emits multiple lines in both isotope manifolds to improve detection of a broad range of chemicals. In particular, output pulse energies are 110 mJ/pulse at 9.77 μm, 250 mJ/pulse at 10 μm, and 550 mJ/pulse at 11.15 μm, useful for detection of the chemical agents Sarin, Tabun, and VX. Related work shows capability for long term sealed operation with a catalyst and an agile tuner at a wavelength shift rate of 200 Hz.
Impact of integrated programs on general surgery operative volume.
Jensen, Amanda R; Nickel, Brianne L; Dolejs, Scott C; Canal, David F; Torbeck, Laura; Choi, Jennifer N
2017-03-01
Integrated residencies are now commonplace, co-existing with categorical general surgery residencies. The purpose of this study was to define the impact of integrated programs on categorical general surgery operative volume. Case logs from categorical general, integrated plastics, vascular, and thoracic surgery residents from a single institution from 2008 to 2016 were collected and analyzed. Integrated residents have increased the number of cases they perform that would have previously been general surgery resident cases from 11 in 2009-2010 to 1392 in 2015-2016. Despite this, there was no detrimental effect on total major cases of graduating chief residents. Multiple integrated programs can co-exist with a general surgery program through careful collaboration and thoughtful consideration to longitudinal needs of individual trainees. As additional programs continue to be created, both integrated and categorical program directors must continue to collaborate to insure the integrity of training for all residents. Copyright © 2017 Elsevier Inc. All rights reserved.
Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P
2017-09-01
Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. This would be much more powerful and general purpose when combined with other existing well-known algorithmic solutions that exist for conventional computing architectures using a suitable ALU. Thus, a specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can mitigate the gap between these two. An ALU must be able to perform all possible logic operations, including NOT, OR, AND, XOR, NOR, NAND, and XNOR; compare, shift etc., integer and floating point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, design of an ALU has been proposed using sticker-based DNA model with experimental feasibility analysis. Novelties of this paper may be in manifold. First, the integer arithmetic operations performed here are 2s complement arithmetic, and the floating point operations follow the IEEE 754 floating point format, resembling closely to a conventional ALU. Also, the output of each operation can be reused for any next operation. So any algorithm or program logic that users can think of can be implemented directly on the DNA computer without any modification. Second, once the basic operations of sticker model can be automated, the implementations proposed in this paper become highly suitable to design a fully automated ALU. Third, proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.
CheMin Instrument Performance and Calibration on Mars
NASA Technical Reports Server (NTRS)
Vaniman, D. T.; Blake, D. F.; Morookian, J. M.; Yen, A. S.; Ming, D. W.; Morris, R. V.; Achilles, C. N.; Bish, D. L.; Chipera, S. J.; Morrison, S. M.;
2013-01-01
The CheMin (Chemistry and Mineralogy) instrument on the Mars Science Laboratory rover Curiosity uses a CCD detector and a Co-anode X-ray tube source to acquire both mineralogy (from the pattern of Co diffraction) and chemical information (from energies of fluoresced X-rays). A key component of the CheMin instrument is the ability to move grains within sample cells during analysis, providing multiple, random grain orientations that disperse diffracted X-ray photons along Debye rings rather than producing discrete Laue spots. This movement is accomplished by piezoelectric vibration of the sample cells. A cryocooler is used to maintain the CCD at a temperature at about -50 C in order to obtain energy resolution better than 250 eV, allowing discrimination of diffracted Co K X-rays from Fe K and other fluorescent X-rays. A detailed description of CheMin is provided in [1]. The CheMin flight model (FM) is mounted within the body of Curiosity and has been operating on Mars since August 6, 2012. An essentially identical sister instrument, the CheMin demonstration model (DM), is operated in a Mars environment chamber at JPL.
Review: An Australian model of care for co-morbid diabetes and chronic kidney disease.
Lo, Clement; Zimbudzi, Edward; Teede, Helena; Cass, Alan; Fulcher, Greg; Gallagher, Martin; Kerr, Peter G; Jan, Stephen; Johnson, Greg; Mathew, Tim; Polkinghorne, Kevan; Russell, Grant; Usherwood, Tim; Walker, Rowan; Zoungas, Sophia
2018-02-05
Diabetes and chronic kidney disease (CKD) are two of the most prevalent co-morbid chronic diseases in Australia. The increasing complexity of multi-morbidity, and current gaps in health-care delivery for people with co-morbid diabetes and CKD, emphasise the need for better models of care for this population. Previously, proposed published models of care for co-morbid diabetes and CKD have not been co-designed with stake-holders or formally evaluated. Particular components of health-care shown to be effective in this population are interventions that: are structured, intensive and multifaceted (treating diabetes and multiple cardiovascular risk factors); involve multiple medical disciplines; improve self-management by the patient; and upskill primary health-care. Here we present an integrated patient-centred model of health-care delivery incorporating these components and co-designed with key stake-holders including specialist health professionals, general practitioners and Diabetes and Kidney Health Australia. The development of the model of care was informed by focus groups of patients and health-professionals; and semi-structured interviews of care-givers and health professionals. Other distinctives of this model of care are routine screening for psychological morbidity; patient-support through a phone advice line; and focused primary health-care support in the management of diabetes and CKD. Additionally, the model of care integrates with the patient-centred health-care home currently being rolled out by the Australian Department of Health. This model of care will be evaluated after implementation across two tertiary health services and their primary care catchment areas. Copyright © 2018 John Wiley & Sons, Ltd. This article is protected by copyright. All rights reserved.
Fingerprinting captured CO2 using natural tracers: Determining CO2 fate and proving ownership
NASA Astrophysics Data System (ADS)
Flude, Stephanie; Gilfillan, Stuart; Johnston, Gareth; Stuart, Finlay; Haszeldine, Stuart
2016-04-01
In the long term, captured CO2 will most likely be stored in large saline formations and it is highly likely that CO2 from multiple operators will be injected into a single saline formation. Understanding CO2 behavior within the reservoir is vital for making operational decisions and often uses geochemical techniques. Furthermore, in the event of a CO2 leak, being able to identify the owner of the CO2 is of vital importance in terms of liability and remediation. Addition of geochemical tracers to the CO2 stream is an effective way of tagging the CO2 from different power stations, but may become prohibitively expensive at large scale storage sites. Here we present results from a project assessing whether the natural isotopic composition (C, O and noble gas isotopes) of captured CO2 is sufficient to distinguish CO2 captured using different technologies and from different fuel sources, from likely baseline conditions. Results include analytical measurements of CO2 captured from a number of different CO2 capture plants and a comprehensive literature review of the known and hypothetical isotopic compositions of captured CO2 and baseline conditions. Key findings from the literature review suggest that the carbon isotope composition will be most strongly controlled by that of the feedstock, but significant fractionation is possible during the capture process; oxygen isotopes are likely to be controlled by the isotopic composition of any water used in either the industrial process or the capture technology; and noble gases concentrations will likely be controlled by the capture technique employed. Preliminary analytical results are in agreement with these predictions. Comparison with summaries of likely storage reservoir baseline and shallow or surface leakage reservoir baseline data suggests that C-isotopes are likely to be valuable tracers of CO2 in the storage reservoir, while noble gases may be particularly valuable as tracers of potential leakage.
Operator Interface for the ALMA Observing System
NASA Astrophysics Data System (ADS)
Grosbøl, P.; Schilling, M.
2009-09-01
The Atacama Large Millimeter/submillimeter Array (ALMA) is a major new ground-based radio-astronomical facility being constructed in Chile in an international collaboration between Europe, Japan and North America in cooperation with the Republic of Chile. The facility will include 54 12m and 12 7m antennas at the Altiplano de Chajnantor and be operated from the Operations Support Facilities (OSF) near San Pedro. This paper describes design and baseline implementation of the Graphical User Interface (GUI) used by operators to monitor and control the observing facility. It is written in Java and provides a simple plug-in interface which allows different subsystems to add their own panels to the GUI. The design is based on a client/server concept and supports multiple operators to share or monitor operations.
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source soft- ware development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras as well as for applications requiring pulsed illumination it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple selftimed cameras with only 4 wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are setup in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor its line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3mm diameter 3D stereo vision equipment in medical endoscopic context, such as endoscopic surgical robotic or micro invasive surgery.
Analysis and design of a high power laser adaptive phased array transmitter
NASA Technical Reports Server (NTRS)
Mevers, G. E.; Soohoo, J. F.; Winocur, J.; Massie, N. A.; Southwell, W. H.; Brandewie, R. A.; Hayes, C. L.
1977-01-01
The feasibility of delivering substantial quantities of optical power to a satellite in low earth orbit from a ground based high energy laser (HEL) coupled to an adaptive antenna was investigated. Diffraction effects, atmospheric transmission efficiency, adaptive compensation for atmospheric turbulence effects, including the servo bandwidth requirements for this correction, and the adaptive compensation for thermal blooming were examined. To evaluate possible HEL sources, atmospheric investigations were performed for the CO2, (C-12)(O-18)2 isotope, CO and DF wavelengths using output antenna locations of both sea level and mountain top. Results indicate that both excellent atmospheric and adaption efficiency can be obtained for mountain top operation with a micron isotope laser operating at 9.1 um, or a CO laser operating single line (P10) at about 5.0 (C-12)(O-18)2um, which was a close second in the evaluation. Four adaptive power transmitter system concepts were generated and evaluated, based on overall system efficiency, reliability, size and weight, advanced technology requirements and potential cost. A multiple source phased array was selected for detailed conceptual design. The system uses a unique adaption technique of phase locking independent laser oscillators which allows it to be both relatively inexpensive and most reliable with a predicted overall power transfer efficiency of 53%.
CO2 and nutrient-driven changes across multiple levels of organization in Zostera noltii ecosystems
NASA Astrophysics Data System (ADS)
Martínez-Crego, B.; Olivé, I.; Santos, R.
2014-12-01
Increasing evidence emphasizes that the effects of human impacts on ecosystems must be investigated using designs that incorporate the responses across levels of biological organization as well as the effects of multiple stressors. Here we implemented a mesocosm experiment to investigate how the individual and interactive effects of CO2 enrichment and eutrophication scale-up from changes in primary producers at the individual (biochemistry) or population level (production, reproduction, and/or abundance) to higher levels of community (macroalgae abundance, herbivory, and global metabolism), and ecosystem organization (detritus release and carbon sink capacity). The responses of Zostera noltii seagrass meadows growing in low- and high-nutrient field conditions were compared. In both meadows, the expected CO2 benefits on Z. noltii leaf production were suppressed by epiphyte overgrowth, with no direct CO2 effect on plant biochemistry or population-level traits. Multi-level meadow response to nutrients was faster and stronger than to CO2. Nutrient enrichment promoted the nutritional quality of Z. noltii (high N, low C : N and phenolics), the growth of epiphytic pennate diatoms and purple bacteria, and shoot mortality. In the low-nutrient meadow, individual effects of CO2 and nutrients separately resulted in reduced carbon storage in the sediment, probably due to enhanced microbial degradation of more labile organic matter. These changes, however, had no effect on herbivory or on community metabolism. Interestingly, individual effects of CO2 or nutrient addition on epiphytes, shoot mortality, and carbon storage were attenuated when nutrients and CO2 acted simultaneously. This suggests CO2-induced benefits on eutrophic meadows. In the high-nutrient meadow, a striking shoot decline caused by amphipod overgrazing masked the response to CO2 and nutrient additions. Our results reveal that under future scenarios of CO2, the responses of seagrass ecosystems will be complex and context-dependent, being mediated by epiphyte overgrowth rather than by direct effects on plant biochemistry. Overall, we found that the responses of seagrass meadows to individual and interactive effects of CO2 and nutrient enrichment varied depending on interactions among species and connections between organization levels.
Yuen, W C; Wong, K; Cheung, Y S; Lai, P Bs
2018-04-01
Since 2008, the Hong Kong Hospital Authority has implemented a Surgical Outcomes Monitoring and Improvement Programme (SOMIP) at 17 public hospitals with surgical departments. This study aimed to assess the change in operative mortality rate after implementation of SOMIP. The SOMIP included all Hospital Authority patients undergoing major/ultra-major procedures in general surgery, urology, plastic surgery, and paediatric surgery. Patients undergoing liver or renal transplantation or who had multiple trauma or massive bowel ischaemia were excluded. In SOMIP, data retrieval from the Hospital Authority patient database was performed by six full-time nurse reviewers following a set of precise data definitions. A total of 230 variables were collected for each patient, on demographics, preoperative and operative variables, laboratory test results, and postoperative complications up to 30 days after surgery. In this study, we used SOMIP cumulative 5-year data to generate risk-adjusted 30-day mortality models by hierarchical logistic regression for both emergency and elective operations. The models expressed overall performance as an annual observed-to-expected mortality ratio. From 2009/2010 to 2015/2016, the overall crude mortality rate decreased from 10.8% to 5.6% for emergency procedures and from 0.9% to 0.4% for elective procedures. From 2011/2012 to 2015/2016, the risk-adjusted observed-to-expected mortality ratios showed a significant downward trend for both emergency and elective operations: from 1.126 to 0.796 and from 1.150 to 0.859, respectively (Mann- Kendall statistic = -0.8; P<0.05 for both). The Hospital Authority's overall crude mortality rates and risk-adjusted observed-to-expected mortality ratios for emergency and elective operations significantly declined after SOMIP was implemented.
e-Science platform for translational biomedical imaging research: running, statistics, and analysis
NASA Astrophysics Data System (ADS)
Wang, Tusheng; Yang, Yuanyuan; Zhang, Kai; Wang, Mingqing; Zhao, Jun; Xu, Lisa; Zhang, Jianguo
2015-03-01
In order to enable multiple disciplines of medical researchers, clinical physicians and biomedical engineers working together in a secured, efficient, and transparent cooperative environment, we had designed an e-Science platform for biomedical imaging research and application cross multiple academic institutions and hospitals in Shanghai and presented this work in SPIE Medical Imaging conference held in San Diego in 2012. In past the two-years, we implemented a biomedical image chain including communication, storage, cooperation and computing based on this e-Science platform. In this presentation, we presented the operating status of this system in supporting biomedical imaging research, analyzed and discussed results of this system in supporting multi-disciplines collaboration cross-multiple institutions.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-09
... Promulgation of Air Quality Implementation Plans; Minnesota; Carbon Monoxide (CO) Limited Maintenance Plan for... June 16, 2010, to revise the Minnesota State Implementation Plan (SIP) for carbon monoxide (CO) under the Clean Air Act (CAA). The State has submitted a limited maintenance plan for CO showing continued...
Monolithic 3D CMOS Using Layered Semiconductors.
Sachid, Angada B; Tosun, Mahmut; Desai, Sujay B; Hsu, Ching-Yi; Lien, Der-Hsien; Madhvapathy, Surabhi R; Chen, Yu-Ze; Hettick, Mark; Kang, Jeong Seuk; Zeng, Yuping; He, Jr-Hau; Chang, Edward Yi; Chueh, Yu-Lun; Javey, Ali; Hu, Chenming
2016-04-06
Monolithic 3D integrated circuits using transition metal dichalcogenide materials and low-temperature processing are reported. A variety of digital and analog circuits are implemented on two sequentially integrated layers of devices. Inverter circuit operation at an ultralow supply voltage of 150 mV is achieved, paving the way to high-density, ultralow-voltage, and ultralow-power applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nabe-Nielsen, Kirsten; Garde, Anne Helene; Aust, Birgit; Diderichsen, Finn
2012-01-01
This quasi-experimental study investigated how an intervention aiming at increasing eldercare workers' influence on their working hours affected the flexibility, variability, regularity and predictability of the working hours. We used baseline (n = 296) and follow-up (n = 274) questionnaire data and interviews with intervention-group participants (n = 32). The work units in the intervention group designed their own intervention comprising either implementation of computerised self-scheduling (subgroup A), collection of information about the employees' work-time preferences by questionnaires (subgroup B), or discussion of working hours (subgroup C). Only computerised self-scheduling changed the working hours and the way they were planned. These changes implied more flexible but less regular working hours and an experience of less predictability and less continuity in the care of clients and in the co-operation with colleagues. In subgroup B and C, the participants ended up discussing the potential consequences of more work-time influence without actually implementing any changes. Employee work-time influence may buffer the adverse effects of shift work. However, our intervention study suggested that while increasing the individual flexibility, increasing work-time influence may also result in decreased regularity of the working hours and less continuity in the care of clients and co-operation with colleagues.
Aarons, Gregory A; Fettes, Danielle L; Hurlburt, Michael S; Palinkas, Lawrence A; Gunderson, Lara; Willging, Cathleen E; Chaffin, Mark J
2014-01-01
Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team approach. Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare. Semistructured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment framework. Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration; competing priorities across levels of leadership; power struggles; and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. System-wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes.
Aarons, Gregory A.; Fettes, Danielle; Hurlburt, Michael; Palinkas, Lawrence; Gunderson, Lara; Willging, Cathleen; Chaffin, Mark
2014-01-01
Objective Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team (ICT) approach. Methods Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare®. Semi-structured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework. Results Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration, competing priorities across levels of leadership, power struggles, and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. Conclusions System wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes. PMID:24611580
Flexible robotic entry device for a nuclear materials production reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckendorn, F.M. II
1988-01-01
The Savannah River Laboratory has developed and is implementing a flexible robotic entry device (FRED) for the nuclear materials production reactors now operating at the Savannah River Plant (SRP). FRED is designed for rapid deployment into confinement areas of operating reactors to assess unknown conditions. A unique smart tether method has been incorporated into FRED for simultaneous bidirectional transmission of multiple video/audio/control/power signals over a single coaxial cable. This system makes it possible to use FRED under all operating and standby conditions, including those where radio/microwave transmissions are not possible or permitted, and increases the quantity of data available.
Flexible software architecture for user-interface and machine control in laboratory automation.
Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E
1998-10-01
We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2003-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents') implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.
Lester, Patricia; Mogil, Catherine; Saltzman, William; Woodward, Kirsten; Nash, William; Leskin, Gregory; Bursch, Brenda; Green, Sara; Pynoos, Robert; Beardslee, William
2011-01-01
The toll of multiple and prolonged deployments on families has become clearer in recent years as military families have seen an increase in childhood anxiety, parental psychological distress, and marital discord. Families overcoming under stress (FOCUS), a family-centered evidence-informed resiliency training program developed at University of California, Los Angeles and Harvard Medical School, is being implemented at military installations through an initiative from Navy Bureau of Medicine and Surgery. The research foundation for FOCUS includes evidence-based preventive interventions that were adapted to meet the specific needs of military families facing combat operational stress associated with wartime deployments. Using a family narrative approach, FOCUS includes a customized approach utilizing core intervention components, including psychoeducation, emotional regulation skills, goal setting and problem solving skills, traumatic stress reminder management techniques, and family communication skills. The purpose of this study is to describe the development and implementation of FOCUS for military families. A case example is also presented.
Rich, Alisa L; Patel, Jay T
2015-01-01
Carbon disulfide (CS2) has been historically associated with the production of rayon, cellophane, and carbon tetrachloride. This study identifies multiple mechanisms by which CS2 contributes to the formation of CO2 in the atmosphere. CS2 and other associated sulfide compounds were found by this study to be present in emissions from unconventional shale gas extraction and processing (E&P) operations. The breakdown products of CS2; carbonyl sulfide (COS), carbon monoxide (CO), and sulfur dioxide (SO2) are indirect greenhouse gases (GHGs) that contribute to CO2 levels in the atmosphere. The heat-trapping nature of CO2 has been found to increase the surface temperature, resulting in regional and global climate change. The purpose of this study is to identify five mechanisms by which CS2 and the breakdown products of CS2 contribute to atmospheric concentrations of CO2. The five mechanisms of CO2 formation are as follows: Chemical Interaction of CS2 and hydrogen sulfide (H2S) present in natural gas at high temperatures, resulting in CO2 formation;Combustion of CS2 in the presence of oxygen producing SO2 and CO2;Photolysis of CS2 leading to the formation of COS, CO, and SO2, which are indirect contributors to CO2 formation;One-step hydrolysis of CS2, producing reactive intermediates and ultimately forming H2S and CO2;Two-step hydrolysis of CS2 forming the reactive COS intermediate that reacts with an additional water molecule, ultimately forming H2S and CO2. CS2 and COS additionally are implicated in the formation of SO2 in the stratosphere and/or troposphere. SO2 is an indirect contributor to CO2 formation and is implicated in global climate change.
Park, Jong-Myeon; Cho, Yoon-Kyoung; Lee, Beom-Seok; Lee, Jeong-Gun; Ko, Christopher
2007-05-01
Valving is critical in microfluidic systems. Among many innovative microvalves used in lab-on-a-chip applications, phase change based microvalves using paraffin wax are particularly attractive for disposable biochip applications because they are simple to implement, cost-effective and biocompatible. However, previously reported paraffin-based valves require embedded microheaters and therefore multi-step operation of many microvalves was a difficult problem. Besides, the operation time was relatively long, 2-10 s. In this paper, we report a unique phase change based microvalve for rapid and versatile operation of multiple microvalves using a single laser diode. The valve is made of nanocomposite materials in which 10 nm-sized iron oxide nanoparticles are dispersed in paraffin wax and used as nanoheaters when excited by laser irradiation. Laser light of relatively weak intensity was able to melt the paraffin wax with the embedded iron oxide nanoparticles, whereas even a very intense laser beam does not melt wax alone. The microvalves are leak-free up to 403.0 +/- 7.6 kPa and the response times to operate both normally closed and normally opened microvalves are less than 0.5 s. Furthermore, a sequential operation of multiple microvalves on a centrifugal microfluidic device using a single laser diode was demonstrated. It showed that the optical control of multiple microvalves is fast, robust, simple to operate, and requires minimal chip space and thus is well suited for fully integrated lab-on-a-chip applications.
A Virtual Science Data Environment for Carbon Dioxide Observations
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Law, E.; Crichton, D. J.; Mattmann, C. A.; Gunson, M. R.; Braverman, A. J.; Nguyen, H. M.; Eldering, A.; Castano, R.; Osterman, G. B.
2011-12-01
Climate science data are often distributed cross-institutionally and made available using heterogeneous interfaces. With respect to observational carbon-dioxide (CO2) records, these data span across national as well as international institutions and are typically distributed using a variety of data standards. Such an arrangement can yield challenges from a research perspective, as users often need to independently aggregate datasets as well as address the issue of data quality. To tackle this dispersion and heterogeneity of data, we have developed the CO2 Virtual Science Data Environment - a comprehensive approach to virtually integrating CO2 data and metadata from multiple missions and providing a suite of computational services that facilitate analysis, comparison, and transformation of that data. The Virtual Science Environment provides climate scientists with a unified web-based destination for discovering relevant observational data in context, and supports a growing range of online tools and services for analyzing and transforming the available data to suit individual research needs. It includes web-based tools to geographically and interactively search for CO2 observations collected from multiple airborne, space, as well as terrestrial platforms. Moreover, the data analysis services it provides over the Internet, including offering techniques such as bias estimation and spatial re-gridding, move computation closer to the data and reduce the complexity of performing these operations repeatedly and at scale. The key to enabling these services, as well as consolidating the disparate data into a unified resource, has been to focus on leveraging metadata descriptors as the foundation of our data environment. This metadata-centric architecture, which leverages the Dublin Core standard, forgoes the need to replicate remote datasets locally. Instead, the system relies upon an extensive, metadata-rich virtual data catalog allowing on-demand browsing and retrieval of CO2 records from multiple missions. In other words, key metadata information about remote CO2 records is stored locally while the data itself is preserved at its respective archive of origin. This strategy has been made possible by our method of encapsulating the heterogeneous sources of data using a common set of web-based services, including services provided by Jet Propulsion Laboratory's Climate Data Exchange (CDX). Furthermore, this strategy has enabled us to scale across missions, and to provide access to a broad array of CO2 observational data. Coupled with on-demand computational services and an intuitive web-portal interface, the CO2 Virtual Science Data Environment effectively transforms heterogeneous CO2 records from multiple sources into a unified resource for scientific discovery.
Simulations of Operation Dynamics of Different Type GaN Particle Sensors
Gaubas, Eugenijus; Ceponis, Tomas; Kalesinskas, Vidas; Pavlov, Jevgenij; Vysniauskas, Juozas
2015-01-01
The operation dynamics of the capacitor-type and PIN diode type detectors based on GaN have been simulated using the dynamic and drift-diffusion models. The drift-diffusion current simulations have been implemented by employing the software package Synopsys TCAD Sentaurus. The monopolar and bipolar drift regimes have been analyzed by using dynamic models based on the Shockley-Ramo theorem. The carrier multiplication processes determined by impact ionization have been considered in order to compensate carrier lifetime reduction due to introduction of radiation defects into GaN detector material. PMID:25751080
Implementation of an in situ qualitative debriefing tool for resuscitations.
Mullan, Paul C; Wuestner, Elizabeth; Kerr, Tarra D; Christopher, Daniel P; Patel, Binita
2013-07-01
Multiple guidelines recommend debriefing of resuscitations to improve clinical performance. We implemented a novel standardized debriefing program using a Debriefing In Situ Conversation after Emergent Resuscitation Now (DISCERN) tool. Following the development of the evidence-based DISCERN tool, we conducted an observational study of all resuscitations (intubation, CPR, and/or defibrillation) at a pediatric emergency department (ED) over one year. Resuscitation interventions, patient survival, and physician team leader characteristics were analyzed as predictors for debriefing. Each debriefing's participants, time duration, and content were recorded. Thematic content of debriefings was categorized by framework approach into Team Emergency Assessment Measure (TEAM) elements. There were 241 resuscitations and 63 (26%) debriefings. A higher proportion of debriefings occurred after CPR (p<0.001) or ED death (p<0.001). Debriefing participants always included an attending and nurse; the median number of staff roles present was six. Median intervals (from resuscitation end to start of debriefing) & debriefing durations were 33 (IQR 15, 67) and 10 min (IQR 5, 12), respectively. Common TEAM themes included co-operation/coordination (30%), communication (22%), and situational awareness (15%). Stated reasons for not debriefing included: unnecessary (78%), time constraints (19%), or other reasons (3%). Debriefings with the DISCERN tool usually involved higher acuity resuscitations, involved most of the indicated personnel, and lasted less than 10 min. Future studies are needed to evaluate the tool for adaptation to other settings and potential impacts on education, quality improvement programming, and staff emotional well-being. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Pitfalls of CITES implementation in Nepal: a policy gap analysis.
Dongol, Yogesh; Heinen, Joel T
2012-08-01
Implementation of policy involves multiple agencies operating at multiple levels in facilitating processes and actions to accomplish desired results. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) was developed and implemented to regulate and control international wildlife trade, but violations of the agreement are widespread and growing worldwide, including in Nepal. This study attempts to understand how domestic CITES policies are translated into action and what effect actions and processes have on compliance. In doing so, this study provides insights into the implementation and enforcement pitfalls of national legislation that explain CITES violations in Nepal. Primarily, we used 26 key informants interviews to learn opinions of experts, and the grounded theory approach for further qualitative data analysis. In addition, we used Najman's (1995) policy implementation analysis framework to explain gaps. Many interrelated variables in the content of the policy, commitment and capacity of the agencies, the roles of clients and coalitions and contextual issues were observed. Variables that emerged suggest pitfalls in the regulatory policy represented by low probability of detection, arrest and punishment. Moreover, redistributive policies in buffer zones of protected areas are needed into perpetuity to benefit locals. Also, conservation organizations' support for building public and political salience is imperative.
Pitfalls of CITES Implementation in Nepal: A Policy Gap Analysis
NASA Astrophysics Data System (ADS)
Dongol, Yogesh; Heinen, Joel T.
2012-08-01
Implementation of policy involves multiple agencies operating at multiple levels in facilitating processes and actions to accomplish desired results. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) was developed and implemented to regulate and control international wildlife trade, but violations of the agreement are widespread and growing worldwide, including in Nepal. This study attempts to understand how domestic CITES policies are translated into action and what effect actions and processes have on compliance. In doing so, this study provides insights into the implementation and enforcement pitfalls of national legislation that explain CITES violations in Nepal. Primarily, we used 26 key informants interviews to learn opinions of experts, and the grounded theory approach for further qualitative data analysis. In addition, we used Najman's (1995) policy implementation analysis framework to explain gaps. Many interrelated variables in the content of the policy, commitment and capacity of the agencies, the roles of clients and coalitions and contextual issues were observed. Variables that emerged suggest pitfalls in the regulatory policy represented by low probability of detection, arrest and punishment. Moreover, redistributive policies in buffer zones of protected areas are needed into perpetuity to benefit locals. Also, conservation organizations' support for building public and political salience is imperative.
ERIC Educational Resources Information Center
Davis, Sarah K.; Humphrey, Neil
2012-01-01
Theoretically, trait and ability emotional intelligence (EI) should mobilise coping processes to promote adaptation, plausibly operating as personal resources determining choice and/or implementation of coping style. However, there is a dearth of research deconstructing if/how EI impacts mental health via multiple coping strategies in adolescence.…
Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis
NASA Technical Reports Server (NTRS)
Slojkowski, Steven E.
2014-01-01
Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.
Centaur operations at the space station
NASA Technical Reports Server (NTRS)
Porter, J.; Thompson, W.; Bennett, F.; Holdridge, J.
1987-01-01
A study was conducted on the feasibility of using a Centaur vehicle as a testbed to demonstrate critical OTV technologies at the Space Station. Two Technology Demonstration Missions (TDMs) were identified: (1) Accommodations, and (2) Operations. The Accommodations TDM contained: (1) berthing, (2) checkout, maintenance and safing, and (3) payload integration missions. The Operations TDM contained: (1) a cryogenic propellant resupply mission, and (2) Centaur deployment activities. A modified Space Station Co-Orbiting Platform (COP) was selected as the optimum refueling and launch node due to safety and operational considerations. After completion of the TDMs, the fueled Centaur would carry out a mission to actually test deployment and help offset TDM costs. From the Station, the Centaur could carry a single payload in excess of 20,000 pounds to geosynchronous orbit or multiple payloads.
Efficient parallel simulation of CO2 geologic sequestration insaline aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Doughty, Christine; Wu, Yu-Shu
2007-01-01
An efficient parallel simulator for large-scale, long-termCO2 geologic sequestration in saline aquifers has been developed. Theparallel simulator is a three-dimensional, fully implicit model thatsolves large, sparse linear systems arising from discretization of thepartial differential equations for mass and energy balance in porous andfractured media. The simulator is based on the ECO2N module of the TOUGH2code and inherits all the process capabilities of the single-CPU TOUGH2code, including a comprehensive description of the thermodynamics andthermophysical properties of H2O-NaCl- CO2 mixtures, modeling singleand/or two-phase isothermal or non-isothermal flow processes, two-phasemixtures, fluid phases appearing or disappearing, as well as saltprecipitation or dissolution. The newmore » parallel simulator uses MPI forparallel implementation, the METIS software package for simulation domainpartitioning, and the iterative parallel linear solver package Aztec forsolving linear equations by multiple processors. In addition, theparallel simulator has been implemented with an efficient communicationscheme. Test examples show that a linear or super-linear speedup can beobtained on Linux clusters as well as on supercomputers. Because of thesignificant improvement in both simulation time and memory requirement,the new simulator provides a powerful tool for tackling larger scale andmore complex problems than can be solved by single-CPU codes. Ahigh-resolution simulation example is presented that models buoyantconvection, induced by a small increase in brine density caused bydissolution of CO2.« less
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.
1986-01-01
The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system is intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.
Co-control of urban air pollutants and greenhouse gases in Mexico City.
West, J Jason; Osnaya, Patricia; Laguna, Israel; Martínez, Julia; Fernández, Adrián
2004-07-01
This study addresses the synergies of mitigation measures to control urban air pollutant and greenhouse gas (GHG) emissions, in developing integrated "co-control" strategies for Mexico City. First, existing studies of emissions reduction measures--PROAIRE (the air quality plan for Mexico City) and separate GHG studies--are used to construct a harmonized database of options. Second, linear programming (LP) is developed and applied as a decision-support tool to analyze least-cost strategies for meeting co-control targets for multiple pollutants. We estimate that implementing PROAIRE measures as planned will reduce 3.1% of the 2010 metropolitan CO2 emissions, in addition to substantial local air pollutant reductions. Applying the LP, PROAIRE emissions reductions can be met at a 20% lower cost, using only the PROAIRE measures, by adjusting investments toward the more cost-effective measures; lower net costs are possible by including cost-saving GHG mitigation measures, but with increased investment. When CO2 emission reduction targets are added to PROAIRE targets, the most cost-effective solutions use PROAIRE measures for the majority of local pollutant reductions, and GHG measures for additional CO2 control. Because of synergies, the integrated planning of urban-global co-control can be beneficial, but we estimate that for Mexico City these benefits are often small.
Multi-species detection using multi-mode absorption spectroscopy (MUMAS)
NASA Astrophysics Data System (ADS)
Northern, J. H.; Thompson, A. W. J.; Hamilton, M. L.; Ewart, P.
2013-06-01
The detection of multiple species using a single laser and single detector employing multi-mode absorption spectroscopy (MUMAS) is reported. An in-house constructed, diode-pumped, Er:Yb:glass micro-laser operating at 1,565 nm with 10 modes separated by 18 GHz was used to record MUMAS signals in a gas mixture containing C2H2, N2O and CO. The components of the mixture were detected simultaneously by identifying multiple transitions in each of the species. By using temperature- and pressure-dependent modelled spectral fits to the data, partial pressures of each species in the mixture were determined with an uncertainty of ±2 %.
'You see?' Teaching and learning how to interpret visual cues during surgery.
Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei
2015-11-01
The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information. © 2015 John Wiley & Sons Ltd.
Sensor Webs and Virtual Globes: Enabling Understanding of Changes in a partially Glaciated Watershed
NASA Astrophysics Data System (ADS)
Heavner, M.; Fatland, D. R.; Habermann, M.; Berner, L.; Hood, E.; Connor, C.; Galbraith, J.; Knuth, E.; O'Brien, W.
2008-12-01
The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). SEAMONSTER is operating in the partially glaciated Mendenhall and Lemon Creek Watersheds, in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied for both 1. long term monitoring of changes, and 2. detection and analysis of transient events (such as glacier lake outburst floods). The heterogeneous sensors (meteorologic, dual frequency GPS, water quality, lake level, etc), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web. They also present challenges for operational management of the sensor web. The harsh conditions on the glaciers provide additional operating constraints. The tight integration of the sensor web and virtual global enabling technology enhance the project in multiple ways. We are utilizing virtual globe infrastructures to enhance both sensor web management and data access. SEAMONSTER utilizes virtual globes for education and public outreach, sensor web management, data dissemination, and enabling collaboration. Using a PosgreSQL with GIS extensions database coupled to the Open Geospatial Consortium (OGC) Geoserver, we generate near-real-time auto-updating geobrowser files of the data in multiple OGC standard formats (e.g KML, WCS). Additionally, embedding wiki pages in this database allows the development of a geospatially aware wiki describing the projects for better public outreach and education. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers and virtual globes have made this project possible.
Graphene-assisted multiple-input high-base optical computing
Hu, Xiao; Wang, Andong; Zeng, Mengqi; Long, Yun; Zhu, Long; Fu, Lei; Wang, Jian
2016-01-01
We propose graphene-assisted multiple-input high-base optical computing. We fabricate a nonlinear optical device based on a fiber pigtail cross-section coated with a single-layer graphene grown by chemical vapor deposition (CVD) method. An approach to implementing modulo 4 operations of three-input hybrid addition and subtraction of quaternary base numbers in the optical domain using multiple non-degenerate four-wave mixing (FWM) processes in graphene coated optical fiber device and (differential) quadrature phase-shift keying ((D)QPSK) signals is presented. We demonstrate 10-Gbaud modulo 4 operations of three-input quaternary hybrid addition and subtraction (A + B − C, A + C − B, B + C − A) in the experiment. The measured optical signal-to-noise ratio (OSNR) penalties for modulo 4 operations of three-input quaternary hybrid addition and subtraction (A + B − C, A + C − B, B + C − A) are measured to be less than 7 dB at a bit-error rate (BER) of 2 × 10−3. The BER performance as a function of the relative time offset between three signals (signal offset) is also evaluated showing favorable performance. PMID:27604866
List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
NASA Astrophysics Data System (ADS)
Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.
2014-03-01
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple data threads per core with each thread having a 512-bit single instruction, multiple data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating List-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for mutli-threading and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with Xeon Phi co-processor card and top of the range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerka, M.
1998-07-01
The main objective of this presentation is to describe the current reform of the Polish electric power sector being under the transition from a state-owned and controlled system to the broadly liberalized, competitive and market-oriented industry structure. The Polish electric power system integration with Western European systems (UCPTE) in 1995, and the process of Poland accession to EU brings closer the issue of international competition, which the sector must be ready to face very soon. In the context of Polish aspiration for membership in the European Union, the electric power sector has many attributes that give one grounds to assumemore » that it is capable of meeting challenges posed by integration and may also facilitate the indispensable transformation in other areas of the Polish economy. Among the most important attributes the following should be mentioned: the implementation of new competition-promoting Energy Law determining the separation of three functions (creation of energy policy, regulation and ownership activities); implementation of the principle of regulated third party access to the grid ensuring the complete deregulation of electricity market; restructuring of the electric power sector with transparent determination of functioning of electric power sub sectors : generation, transmission and distribution; electricity market organization (determination of the position of PSE SA as the future Transmission System Operator and Pool Operator); determination of principles for the development of electricity generation sub sector with licensing procedures; co-operation with UCPTE and the development of co-operation within the CENTREL group (new CENTREL ad hoc group on hadronization of electricity markets).« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baird, Benjamin; Loebick, Codruta; Roychoudhury, Subir
During Phase I both experimental evaluation and computational validation of an advanced Spouted Bed Reactor (SBR) approach for biomass and coal combustion was completed. All Phase I objectives were met and some exceeded. Comprehensive insight on SBR operation was achieved via design, fabrication, and testing of a small demonstration unit with pulverized coal and biomass as feedstock at University of Connecticut (UCONN). A scale-up and optimization tool for the next generation of coal and biomass co-firing for reducing GHG emissions was also developed. The predictive model was implemented with DOE’s MFIX computational model and was observed to accurately mimic evenmore » unsteady behavior. An updated Spouted Bed Reactor was fabricated, based on model feedback, and experimentally displayed near ideal behavior. This predictive capability based upon first principles and experimental correlation allows realistic simulation of mixed fuel combustion in these newly proposed power boiler designs. Compared to a conventional fluidized bed the SBR facilitates good mixing of coal and biomass, with relative insensitivity to particle size and densities, resulting in improved combustion efficiency. Experimental data with mixed coal and biomass fuels demonstrated complete oxidation at temperatures as low as 500ºC. This avoids NOx formation and residual carbon in the waste ash. Operation at stoichiometric conditions without requiring cooling or sintering of the carrier was also observed. Oxygen-blown operation were tested and indicated good performance. This highlighted the possibility of operating the SBR at a wide range of conditions suitable for power generation and partial oxidation byproducts. It also supports the possibility of implementing chemical looping (for readily capturing CO 2 and SO x).« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baird, Benjamin; Loebick, Codruta; Roychoudhury, Subir
During Phase I both experimental evaluation and computational validation of an advanced Spouted Bed Reactor (SBR) approach for biomass and coal combustion was completed. All Phase I objectives were met and some exceeded. Comprehensive insight on SBR operation was achieved via design, fabrication, and testing of a small demonstration unit with pulverized coal and biomass as feedstock at University of Connecticut (UCONN). A scale-up and optimization tool for the next generation of coal and biomass co-firing for reducing GHG emissions was also developed. The predictive model was implemented with DOE’s MFIX computational model and was observed to accurately mimic evenmore » unsteady behavior. An updated Spouted Bed Reactor was fabricated, based on model feedback, and experimentally displayed near ideal behavior. This predictive capability based upon first principles and experimental correlation allows realistic simulation of mixed fuel combustion in these newly proposed power boiler designs. Compared to a conventional fluidized bed the SBR facilitates good mixing of coal and biomass, with relative insensitivity to particle size and densities, resulting in improved combustion efficiency. Experimental data with mixed coal and biomass fuels demonstrated complete oxidation at temperatures as low as 500C. This avoids NOx formation and residual carbon in the waste ash. Operation at stoichiometric conditions without requiring cooling or sintering of the carrier was also observed. Oxygen-blown operation were tested and indicated good performance. This highlighted the possibility of operating the SBR at a wide range of conditions suitable for power generation and partial oxidation byproducts. It also supports the possibility of implementing chemical looping (for readily capturing CO2 and SOx).« less
Implementation and benefits of advanced process control for lithography CD and overlay
NASA Astrophysics Data System (ADS)
Zavyalova, Lena; Fu, Chong-Cheng; Seligman, Gary S.; Tapp, Perry A.; Pol, Victor
2003-05-01
Due to the rapidly reduced imaging process windows and increasingly stingent device overlay requirements, sub-130 nm lithography processes are more severely impacted than ever by systamic fault. Limits on critical dimensions (CD) and overlay capability further challenge the operational effectiveness of a mix-and-match environment using multiple lithography tools, as such mode additionally consumes the available error budgets. Therefore, a focus on advanced process control (APC) methodologies is key to gaining control in the lithographic modules for critical device levels, which in turn translates to accelerated yield learning, achieving time-to-market lead, and ultimately a higher return on investment. This paper describes the implementation and unique challenges of a closed-loop CD and overlay control solution in high voume manufacturing of leading edge devices. A particular emphasis has been placed on developing a flexible APC application capable of managing a wide range of control aspects such as process and tool drifts, single and multiple lot excursions, referential overlay control, 'special lot' handling, advanced model hierarchy, and automatic model seeding. Specific integration cases, including the multiple-reticle complementary phase shift lithography process, are discussed. A continuous improvement in the overlay and CD Cpk performance as well as the rework rate has been observed through the implementation of this system, and the results are studied.
The Complexities of Implementing Cluster Supply Chain - Case Study of JCH
NASA Astrophysics Data System (ADS)
Xue, Xiao; Zhang, Jibiao; Wang, Yang
As a new type of management pattern, "cluster supply chain" (CSC) can help SMEs to face the global challenges through all kinds of collaboration. However, a major challenge in implementing CSC is the gap between theory and practice in the field. In an effort to provide a better understanding of this emerging phenomenon, this paper presents the implementation process of CSC in the context of JingCheng Mechanical & Electrical Holding co., ltd.(JCH) as a case study. The cast study of JCH suggests that the key problems in the practice of cluster supply chain: How do small firms use cluster supply chain? Only after we clarify the problem, the actual construction and operation of cluster supply chain does show successful results as it should be.
Should the scope of human mixture risk assessment span legislative/regulatory silos for chemicals?
Evans, Richard M; Martin, Olwenn V; Faust, Michael; Kortenkamp, Andreas
2016-02-01
Current chemicals regulation operates almost exclusively on a chemical-by-chemical basis, however there is concern that this approach may not be sufficiently protective if two or more chemicals have the same toxic effect. Humans are indisputably exposed to more than one chemical at a time, for example to the multiple chemicals found in food, air and drinking water, and in household and consumer products, and in cosmetics. Assessment of cumulative risk to human health and/or the environment from multiple chemicals and routes can be done in a mixture risk assessment (MRA). Whilst there is a broad consensus on the basic science of mixture toxicology, the path to regulatory implementation of MRA within chemical risk assessment is less clear. In this discussion piece we pose an open question: should the scope of human MRA cross legislative remits or 'silos'? We define silos as, for instance, legislation that defines risk assessment practice for a subset of chemicals, usually on the basis of substance/product, media or process orientation. Currently any form of legal mandate for human MRA in the EU is limited to only a few pieces of legislation. We describe two lines of evidence, illustrated with selected examples, that are particularly pertinent to this question: 1) evidence that mixture effects have been shown for chemicals regulated in different silos and 2) evidence that humans are co-exposed to chemicals from different silos. We substantiate the position that, because there is no reason why chemicals allocated to specific regulatory silos would have non-overlapping risk profiles, then there is also no reason to expect that MRA limited only to chemicals within one silo can fully capture the risk that may be present to human consumers. Finally, we discuss possible options for implementation of MRA and we hope to prompt wider discussion of this issue. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
A Survey of Alternative Oxygen Production Technologies
NASA Technical Reports Server (NTRS)
Lueck, Dale E.; Parrish, Clyde F.; Buttner, William J.; Surma, Jan M.; Delgado, H. (Technical Monitor)
2001-01-01
Utilization of the Martian atmosphere for the production of fuel and oxygen has been extensively studied. The baseline fuel production process is a Sabatier reactor, which produces methane and water from carbon dioxide and hydrogen. The oxygen produced from the electrolysis of the water is only half of that needed for methane-based rocket propellant, and additional oxygen is needed for breathing air, fuel cells and other energy sources. Zirconia electrolysis cells for the direct reduction of CO2 arc being developed as an alternative means of producing oxygen, but present many challenges for a large-scale oxygen production system. The very high operating temperatures and fragile nature of the cells coupled with fairly high operating voltages leave room for improvement. This paper will survey alternative oxygen production technologies, present data on operating characteristics, materials of construction, and some preliminary laboratory results on attempts to implement each. Our goal is to significantly improve upon the characteristics of proposed zirconia cells for oxygen production. To achieve that goal we are looking at electrolytic systems that operate at significantly lower temperatures, preferably below 31C to allow the incorporation of liquid CO2 in the electrolyte. Our preliminary results indicate that such a system will have much higher current densities and have simpler cathode construction than a porous gas feed electrode system. Such a system could be achieved based on nonaqueous electrolytes or ionic liquids. We are focusing our research on the anode reaction that will produce oxygen from a product generated at the cathode using CO2 as the feed. Operation at low temperatures also will open up the full range of polymer and metal materials, allowing a more robust system design to withstand the rigors of flight, landing, and long term unattended operation on the surface of Mars.
The clinical, operational, and financial worlds of neonatal palliative care: A focused ethnography.
Williams-Reade, Jackie; Lamson, Angela L; Knight, Sharon M; White, Mark B; Ballard, Sharon M; Desai, Priti P
2015-04-01
Due to multiple issues, integrated interdisciplinary palliative care teams in a neonatal intensive care unit (NICU) may be difficult to access, sometimes fail to be implemented, or provide inconsistent or poorly coordinated care. When implementing an effective institution-specific neonatal palliative care program, it is critical to include stakeholders from the clinical, operational, and financial worlds of healthcare. In this study, researchers sought to gain a multidisciplinary perspective into issues that may impact the implementation of a formal neonatal palliative care program at a tertiary regional academic medical center. In this focused ethnography, the primary researcher conducted semistructured interviews that explored the perspectives of healthcare administrators, finance officers, and clinicians about neonatal palliative care. The perspectives of 39 study participants informed the identification of institutional, financial, and clinical issues that impact the implementation of neonatal palliative care services at the medical center and the planning process for a formal palliative care program on behalf of neonates and their families. Healthcare professionals described experiences that influenced their views on neonatal palliative care. Key themes included: (a) uniqueness of neonatal palliative care, (b) communication and conflict among providers, (c) policy and protocol discrepancies, and (d) lack of administrative support. The present study highlighted several areas that are challenging in the provision of neonatal palliative care. Our findings underscored the importance of recognizing and procuring resources needed simultaneously from the clinical, operational, and financial worlds in order to implement and sustain a successful neonatal palliative care program.
Lynn, David C; De Lorenzo, Robert A
2011-09-01
Medical civil-military operations are important for deployed military medical units engaged in counter-insurgency missions. There are few reports on military support for a host nation's military medical infrastructure, and we describe an initiative of the 21st Combat Support Hospital in 2010 during the postsurge phase of Operation Iraqi Freedom and Operation New Dawn. The goal was to incrementally improve the quality of care provided by Iraqi 7th Army medical personnel using existing clinic infrastructure and a low budget. Direct bedside teaching to include screening and treatment of ambulatory patients (sick call), focused pharmacy and medical supply system support, medical records documentation, and basic infection control compliance were the objectives. Lessons learned include the requirement to implement culturally relevant changes, maintain focus on system processes, and maximize education and mentorship through multiple modalities. In summary, a combat hospital can successfully implement an advise and assist mission with minimal external resources.
Miniature atomic scalar magnetometer for space based on the rubidium isotope 87Rb.
Korth, Haje; Strohbehn, Kim; Tejada, Francisco; Andreou, Andreas G; Kitching, John; Knappe, Svenja; Lehtonen, S John; London, Shaughn M; Kafel, Matiwos
2016-08-01
A miniature atomic scalar magnetometer based on the rubidium isotope 87 Rb was developed for operation in space. The instrument design implements both M x and M z mode operation and leverages a novel microelectromechanical system (MEMS) fabricated vapor cell and a custom silicon-on-sapphire (SOS) complementary metal-oxide-semiconductor (CMOS) integrated circuit. The vapor cell has a volume of only 1 mm 3 so that it can be efficiently heated to its operating temperature by a specially designed, low-magnetic-field-generating resistive heater implemented in multiple metal layers of the transparent sapphire substrate of the SOS-CMOS chips. The SOS-CMOS chip also hosts the Helmholtz coil and associated circuitry to stimulate the magnetically sensitive atomic resonance and temperature sensors. The prototype instrument has a total mass of fewer than 500 g and uses less than 1 W of power, while maintaining a sensitivity of 15 pT/√Hz at 1 Hz, comparable to present state-of-the-art absolute magnetometers.
Modeling and Advanced Control for Sustainable Process ...
This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-inspired, multi-agent-based method. The sustainability and performance assessment of process operating points is carried out using the U.S. E.P.A.’s GREENSCOPE assessment tool that provides scores for the selected economic, material management, environmental and energy indicators. The indicator results supply information on whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous bioethanol fermentation process whose dynamics are characterized by steady-state multiplicity and oscillatory behavior. This book chapter contribution demonstrates the application of novel process control strategies for sustainability by increasing material management, energy efficiency, and pollution prevention, as needed for SHC Sustainable Uses of Wastes and Materials Management.
Test Waveform Applications for JPL STRS Operating Environment
NASA Technical Reports Server (NTRS)
Lux, James P.; Peters, Kenneth J.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.; Duncan, Courtney B.
2013-01-01
This software demonstrates use of the JPL Space Telecommunications Radio System (STRS) Operating Environment (OE), tests APIs (application programming interfaces) presented by JPL STRS OE, and allows for basic testing of the underlying hardware platform. This software uses the JPL STRS Operating Environment ["JPL Space Tele com - munications Rad io System Operating Environment,"(NPO-4776) NASA Tech Briefs, commercial edition, Vol. 37, No. 1 (January 2013), p. 47] to interact with the JPL-SDR Software Defined Radio developed for the CoNNeCT (COmmunications, Navigation, and Networking rEconfigurable Testbed) Project as part of the SCaN Testbed installed on the International Space Station (ISS). These are the first applications that are compliant with the new NASA STRS Architecture Standard. Several example waveform applications are provided to demonstrate use of the JPL STRS OE for the JPL-SDR platform used for the CoNNeCT Project. The waveforms provide a simple digitizer and playback capability for the SBand RF slice, and a simple digitizer for the GPS slice [CoNNeCT Global Positioning System RF Module, (NPO-47764) NASA Tech Briefs, commercial edition, Vol. 36, No. 3 (March 2012), p. 36]. These waveforms may be used for hardware test, as well as for on-orbit or laboratory checkout. Additional example waveforms implement SpaceWire and timer modules, which can be used for time transfer and demonstration of communication between the two Xilinx FPGAs in the JPLSDR. The waveforms are also compatible with ground-based use of the JPL STRS OE on radio breadboards and Linux.
Modified signed-digit arithmetic based on redundant bit representation.
Huang, H; Itoh, M; Yatagai, T
1994-09-10
Fully parallel modified signed-digit arithmetic operations are realized based on redundant bit representation of the digits proposed. A new truth-table minimizing technique is presented based on redundant-bitrepresentation coding. It is shown that only 34 minterms are enough for implementing one-step modified signed-digit addition and subtraction with this new representation. Two optical implementation schemes, correlation and matrix multiplication, are described. Experimental demonstrations of the correlation architecture are presented. Both architectures use fixed minterm masks for arbitrary-length operands, taking full advantage of the parallelism of the modified signed-digit number system and optics.
NASA Astrophysics Data System (ADS)
Zhou, Yan-Hui; Wang, Lei
2012-04-01
The quantum logic network to implement 1 → M symmetric economical phase-covariant telecloning is presented. The scheme includes two parts: the first part is used to create the telecloning channel and the second part to teleport the input state. The telecloning channel which works without ancilla is constructed by two kinds of elementary unitary transformations, single-qubit rotation and multiple-qubit controlled operation. The probability of success is 50%, which is the same with the scheme in [Meng, F.Y.; Zhu, A.D. J. Mod. Opt. 2009, 56, 1255-1259].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reis, Chuck; Nelson, Eric; Armer, James
The purpose of this playbook and accompanying spreadsheets is to generalize the detailed CBP analysis and to put tools in the hands of experienced refrigeration designers to evaluate multiple applications of refrigeration waste heat reclaim across the United States. Supermarkets with large portfolios of similar buildings can use these tools to assess the impact of large-scale implementation of heat reclaim systems. In addition, the playbook provides best practices for implementing heat reclaim systems to achieve the best long-term performance possible. It includes guidance on operations and maintenance as well as measurement and verification.
Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Sluzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas
2013-01-01
Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.
Dong, Ming; Zheng, Chuantao; Miao, Shuzhuo; Zhang, Yu; Du, Qiaoling; Wang, Yiding; Tittel, Frank K
2017-09-27
A multi-gas sensor system was developed that uses a single broadband light source and multiple carbon monoxide (CO), carbon dioxide (CO₂) and methane (CH₄) pyroelectric detectors by use of the time division multiplexing (TDM) technique. A stepper motor-based rotating system and a single-reflection spherical optical mirror were designed and adopted to realize and enhance multi-gas detection. Detailed measurements under static detection mode (without rotation) and dynamic mode (with rotation) were performed to study the performance of the sensor system for the three gas species. Effects of the motor rotating period on sensor performances were also investigated and a rotation speed of 0.4π rad/s was required to obtain a stable sensing performance, corresponding to a detection period of ~10 s to realize one round of detection. Based on an Allan deviation analysis, the 1 σ detection limits under static operation are 2.96, 4.54 and 2.84 parts per million in volume (ppmv) for CO, CO₂ and CH₄, respectively and the 1 σ detection limits under dynamic operations are 8.83, 8.69 and 10.29 ppmv for the three gas species, respectively. The reported sensor has potential applications in various fields requiring CO, CO₂ and CH₄ detection such as in coal mines.
Weiner, Bryan J; Lewis, Megan A; Linnan, Laura A
2009-04-01
The field of worksite health promotion has moved toward the development and testing of comprehensive programs that target health behaviors with interventions operating at multiple levels of influence. Yet, observational and process evaluation studies indicate that such programs are challenging for worksites to implement effectively. Research has identified several organizational factors that promote or inhibit effective implementation of comprehensive worksite health promotion programs. However, no integrated theory of implementation has emerged from this research. This article describes a theory of the organizational determinants of effective implementation of comprehensive worksite health promotion programs. The model is adapted from theory and research on the implementation of complex innovations in manufacturing, education and health care settings. The article uses the Working Well Trial to illustrate the model's theoretical constructs. Although the article focuses on comprehensive worksite health promotion programs, the conceptual model may also apply to other types of complex health promotion programs. An organization-level theory of the determinants of effective implementation of worksite health promotion programs.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In present study, co-pyrolysis behavior of rape straw, waste tire and their various blends were investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H 2 O, CH, OH, CO 2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R 2 -pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at 95% confidence interval, F-test, lack-of-fit test and residues normal probability plots implied the models described well the experimental data. Finally, the model uncertainties as well as the interactive effect of these parameters were studied, the total-, first- and second-order sensitivity indices of operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
WEB-server for search of a periodicity in amino acid and nucleotide sequences
NASA Astrophysics Data System (ADS)
E Frenkel, F.; Skryabin, K. G.; Korotkov, E. V.
2017-12-01
A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server operation is based upon a new mathematical method of searching for multiple alignments, which is founded on the position weight matrices optimization, as well as on implementation of the two-dimensional dynamic programming. This approach allows the construction of multiple alignments of the indistinctly similar amino acid and nucleotide sequences that accumulated more than 1.5 substitutions per a single amino acid or a nucleotide without performing the sequences paired comparisons. The article examines the principles of the web server operation and two examples of studying amino acid and nucleotide sequences, as well as information that could be obtained using the web server.
Enhancing Power System Operational Flexibility With Flexible Ramping Products: A Review
Wang, Qin; Hodge, Bri-Mathias
2016-12-09
With the increased variability and uncertainty of net load induced from high penetrations of renewable energy resources and more flexible interchange schedules, power systems are facing great operational challenges in maintaining balance. Among these, the scarcity of ramp capability is an important culprit of power balance violations and high scarcity prices. To address this issue, market-based flexible ramping products (FRPs) have been proposed in the industry to improve the availability of ramp capacity. This paper presents an in-depth review of the modeling and implementation of FRPs. The major motivation is that although FRPs are widely discussed in the literature, itmore » is still unclear to many how they can be incorporated into a co-optimization framework that includes energy and ancillary services. The concept and a definition of power system operational flexibility as well as the needs for FRPs are introduced. The industrial practices of implementing FRPs under different market structures are presented. Market operation issues and future research topics are also discussed. In conclusion, this paper can provide researchers and power engineers with further insights into the state of the art, technical barriers, and potential directions for FRPs.« less
Makris, Susan L.; Raffaele, Kathleen; Allen, Sandra; Bowers, Wayne J.; Hass, Ulla; Alleva, Enrico; Calamandrei, Gemma; Sheets, Larry; Amcoff, Patric; Delrue, Nathalie; Crofton, Kevin M.
2009-01-01
Objective We conducted a review of the history and performance of developmental neurotoxicity (DNT) testing in support of the finalization and implementation of Organisation of Economic Co-operation and Development (OECD) DNT test guideline 426 (TG 426). Information sources and analysis In this review we summarize extensive scientific efforts that form the foundation for this testing paradigm, including basic neurotoxicology research, interlaboratory collaborative studies, expert workshops, and validation studies, and we address the relevance, applicability, and use of the DNT study in risk assessment. Conclusions The OECD DNT guideline represents the best available science for assessing the potential for DNT in human health risk assessment, and data generated with this protocol are relevant and reliable for the assessment of these end points. The test methods used have been subjected to an extensive history of international validation, peer review, and evaluation, which is contained in the public record. The reproducibility, reliability, and sensitivity of these methods have been demonstrated, using a wide variety of test substances, in accordance with OECD guidance on the validation and international acceptance of new or updated test methods for hazard characterization. Multiple independent, expert scientific peer reviews affirm these conclusions. PMID:19165382
Principles of logic and the use of digital geographic information systems
Robinove, Charles Joseph
1986-01-01
Digital geographic information systems allow many different types of data to be spatially and statistically analyzed. Logical operations can be performed on individual or multiple data planes by algorithms that can be implemented in computer systems. Users and creators of the systems should fully understand these operations. This paper describes the relationships of layers and features in geographic data bases and the principles of logic that can be applied by geographic information systems and suggests that a thorough knowledge of the data that are entered into a geographic data base and of the logical operations will produce results that are most satisfactory to the user. Methods of spatial analysis are reduced to their primitive logical operations and explained to further such understanding.
NASA Astrophysics Data System (ADS)
Georgiou, Mike F.; Sfakianakis, George N.; Johnson, Gary; Douligeris, Christos; Scandar, Silvia; Eisler, E.; Binkley, B.
1994-05-01
In an effort to improve patient care while considering cost-effectiveness, we developed a Picture Archiving and Communication System (PACS), which combines imaging cameras, computers and other peripheral equipment from multiple nuclear medicine vectors. The PACS provides fully-digital clinical operation which includes acquisition and automatic organization of patient data, distribution of the data to all networked units inside the department and other remote locations, digital analysis and quantitation of images, digital diagnostic reading of image studies and permanent data archival with the ability for fast retrieval. The PACS enabled us to significantly reduce the amount of film used, and we are currently proceeding with implementing a film-less laboratory. Hard copies are produced on paper or transparent sheets for non-digitally connected parts of the hospital. The PACS provides full-digital operation which is faster, more reliable, better organized and managed, and overall more efficient than a conventional film-based operation. In this paper, the integration of the various PACS components from multiple vendors is reviewed, and the impact of PACS, with its advantages and limitations on our clinical operation is analyzed.
Koppa, P; Chavel, P; Oudar, J L; Kuszelewicz, R; Schnell, J P; Pocholle, J P
1997-08-10
We present experimental results on a 1-to-64-channel free-space photonic switching demonstration system based on GaAs/GaAlAs multiple-quantum-well active device arrays. Two control schemes are demonstrated: data transparent optical self-routing usable in a packet-switching environment and direct optical control with potential signal amplification for circuit switching. The self-routing operation relies on the optical recognition of the binary destination address coded in each packet header. Address decoding is implemented with elementary optical bistable devices and modulator pixels as all-optical latches and electro-optical and gates, respectively. All 60 defect-free channels of the system could be operated one by one, but the simultaneous operation of only three channels could be achieved mainly because of the spatial nonhomogeneities of the devices. Direct-control operation is based on directly setting the bistable device reflectivity with a variable-control beam power. This working mode turned out to be much more tolerant of spatial noises: 37 channels of the system could be operated simultaneously. Further development of the system to a crossbar of N inputs and M outputs and system miniaturization are also considered.
CMOS imager for pointing and tracking applications
NASA Technical Reports Server (NTRS)
Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)
2006-01-01
Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.
A multiprocessor operating system simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, G.M.; Campbell, R.H.
1988-01-01
This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT and T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows thatmore » of the Choices family of operating systems for loosely and tightly coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.« less
Flexible Unicast-Based Group Communication for CoAP-Enabled Devices †
Ishaq, Isam; Hoebeke, Jeroen; Van den Abeele, Floris; Rossey, Jen; Moerman, Ingrid; Demeester, Piet
2014-01-01
Smart embedded objects will become an important part of what is called the Internet of Things. Applications often require concurrent interactions with several of these objects and their resources. Existing solutions have several limitations in terms of reliability, flexibility and manageability of such groups of objects. To overcome these limitations we propose an intermediately level of intelligence to easily manipulate a group of resources across multiple smart objects, building upon the Constrained Application Protocol (CoAP). We describe the design of our solution to create and manipulate a group of CoAP resources using a single client request. Furthermore we introduce the concept of profiles for the created groups. The use of profiles allows the client to specify in more detail how the group should behave. We have implemented our solution and demonstrate that it covers the complete group life-cycle, i.e., creation, validation, flexible usage and deletion. Finally, we quantitatively analyze the performance of our solution and compare it against multicast-based CoAP group communication. The results show that our solution improves reliability and flexibility with a trade-off in increased communication overhead. PMID:24901978
Software systems for operation, control, and monitoring of the EBEX instrument
NASA Astrophysics Data System (ADS)
Milligan, Michael; Ade, Peter; Aubin, François; Baccigalupi, Carlo; Bao, Chaoyun; Borrill, Julian; Cantalupo, Christopher; Chapman, Daniel; Didier, Joy; Dobbs, Matt; Grainger, Will; Hanany, Shaul; Hillbrand, Seth; Hubmayr, Johannes; Hyland, Peter; Jaffe, Andrew; Johnson, Bradley; Kisner, Theodore; Klein, Jeff; Korotkov, Andrei; Leach, Sam; Lee, Adrian; Levinson, Lorne; Limon, Michele; MacDermid, Kevin; Matsumura, Tomotake; Miller, Amber; Pascale, Enzo; Polsgrove, Daniel; Ponthieu, Nicolas; Raach, Kate; Reichborn-Kjennerud, Britt; Sagiv, Ilan; Tran, Huan; Tucker, Gregory S.; Vinokurov, Yury; Yadav, Amit; Zaldarriaga, Matias; Zilic, Kyle
2010-07-01
We present the hardware and software systems implementing autonomous operation, distributed real-time monitoring, and control for the EBEX instrument. EBEX is a NASA-funded balloon-borne microwave polarimeter designed for a 14 day Antarctic flight that circumnavigates the pole. To meet its science goals the EBEX instrument autonomously executes several tasks in parallel: it collects attitude data and maintains pointing control in order to adhere to an observing schedule; tunes and operates up to 1920 TES bolometers and 120 SQUID amplifiers controlled by as many as 30 embedded computers; coordinates and dispatches jobs across an onboard computer network to manage this detector readout system; logs over 3 GiB/hour of science and housekeeping data to an onboard disk storage array; responds to a variety of commands and exogenous events; and downlinks multiple heterogeneous data streams representing a selected subset of the total logged data. Most of the systems implementing these functions have been tested during a recent engineering flight of the payload, and have proven to meet the target requirements. The EBEX ground segment couples uplink and downlink hardware to a client-server software stack, enabling real-time monitoring and command responsibility to be distributed across the public internet or other standard computer networks. Using the emerging dirfile standard as a uniform intermediate data format, a variety of front end programs provide access to different components and views of the downlinked data products. This distributed architecture was demonstrated operating across multiple widely dispersed sites prior to and during the EBEX engineering flight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So
2003-11-20
We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes commonmore » binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).« less
Pilot-Scale Silicone Process for Low-Cost Carbon Dioxide Capture. Final Scientific/Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hancu, Dan; Wood, Benjamin; Genovese, Sarah
GE Global Research has developed, over the last 8 years, a platform of cost effective CO 2 capture technologies based on a non-aqueous aminosilicone solvent (GAP-1m). As demonstrated in a previous funded DOE project (DE-FE0007502), the GAP-1m solvent has increased CO 2 working capacity, lower volatility and corrosivity than the benchmark aqueous amine technology. The current report describes the cooperative program between GE Global Research (GE GRC), and the National Carbon Capture Center (NCCC) to design, construct, and operate a pilot-scale process using GAP-1m solvent to demonstrate its performance at 0.5 MWe. (i) Performance of the GAP-1m solvent was demonstratedmore » in a 0.5 MWe pilot with real flue gas for over 900 hrs. of operation using two alternative desorption designs: a Continuous Stirred Tank Reactor (CSTR), and a Steam Stripper Column (SSC). The CSTR is a one-stage separation unit with reduced space requirements, and capital cost. The alternative is a multi-stage separation column, with improved desorption efficiency. Testing the two desorber options allowed us to identify the most cost effective, and space efficient desorber solution. (ii) CSTR Campaign: The CSTR desorber unit was designed, fabricated and integrated with the pilot solvent test unit (PSTU), replacing the PSTU Steam Stripper Column at NCCC. Solvent management and waste water special procedures were implemented to accommodate operation of the non-aqueous solvent in the PSTU. Performance of the GAP-1m solvent with the CSTR was demonstrated for over 500 hrs. while varying temperature of the desorption (230 – 265 oF), solvent circulation rate (GAP-1m : CO 2 (molar) = 1.5 – 4), and flue gas flow rates (0.2 – 0.5 MWe). Solvent carry-over in the CO 2 product was minimized by maintaining water content below 5 wt.%, and desorption pressure at 7 psig. CO 2 capture efficiency achieved was 95% at 0.25 MWe (GAP-1m : CO 2 = 4 (molar), 230 oF desorption), and 65% at 0.5 MWe (GAP-1m : CO 2 (molar) = 1.5, 248 oF). Solvent loss was dominated by thermal degradation of the rich solvent. (iii) Steam Stripper Column Campaign: Higher expected cost of the solvent vs. aqueous amines makes solvent management a top priority to maintain the low cost for the process. During the testing of the GAP-1m solvent with the CSTR, thermal degradation of the rich solvent was found to be the main mechanism in solvent loss. Small amounts of water in the working solution were found to be an effective way to enable steam stripping, thereby lowering desorption temperature, and hence reducing thermal degradation. Steam stripping also increased working capacity by 30% due to a more efficient desorption. The concept was first tested in a glass stripping column (lab scale, GE GRC), optimized in a continuous bench scale system (2 kWe, GE GRC), and demonstrated in a 0.5 MWe PSTU at NCCC. No special system modifications were required to the PSTU to accommodate the testing of the non-aqueous GAP-1 solvent with the regenerator column. SSC was found to be more robust towards solvent entrainment (H 2O < 35 wt.%). 90 – 95% CO 2 capture efficiency was achieved under stoichiometric conditions at 0.5 MWe (235 oF desorption, 2 psig and 19 wt. % H 2O). Both CO 2 capture efficiency and specific duty reached optimum conditions at 18 wt.% H 2O. Low amine degradation (< 0.05 wt.%/day) was recorded over 350 hrs. of operation. Controlled water addition to GAP-1m solvent decreased the desorption temperature, thermal degradation, and improved the CO 2 working capacity due to more efficient absorption and desorption processes. Under these conditions, the GAP-1m solvent exhibited a 25% increased working capacity, and 10% reduction in specific steam duty vs. MEA, at 10 oF lower desorption temperature. (iv) Techno-economic Analysis: The pilot-scale PSTU engineering data were used to update the capture system process models, and the techno-economic analysis was performed for a 550 MW coal fired power plant. The 1st year CO 2 removal cost for the aminosilicone-based carbon-capture process was evaluated at $48/ton CO 2 using the steam stripper column. This is a 20% reduction compared to MEA, primarily due to lower overall capital cost. CO 2 cost using the CSTR desorber is dominated by the economics of the solvent make-up. The steam stripper desorber is the preferred unit operation due to a more efficient desorption, and reduced solvent make-up rate. Further reduction in CO 2 capture cost is expected by lowering the manufacturing cost of the solvent, implementing flowsheet optimization and/or implementing the next generation aminosilicone solvent with improved stability and increased CO 2 working capacity.« less
Nambiar, Devaki; Muralidharan, Arundati; Garg, Samir; Daruwalla, Nayreen; Ganesan, Prathibha
2015-11-17
Understanding health inequity in India is a challenge, given the complexity that characterise the lives of its residents. Interpreting constructive action to address health inequity in the country is rare, though much exhorted by the global research community. We critically analysed operational understandings of inequity embedded in convergent actions to address health-related inequalities by stakeholders in varying contexts within the country. Two implementer groups were purposively chosen to reflect on their experiences addressing inequalities in health (and its determinants) in the public sector working in rural areas and in the private non-profit sector working in urban areas. A representing co-author from each group developed narratives around how they operationally defined, monitored, and addressed health inequality in their work. These narratives were content analysed by two other co-authors to draw out common and disparate themes characterising each action context, operational definitions, shifts and changes in strategies and definitions, and outcomes (both intended and unintended). Findings were reviewed by all authors to develop case studies. We theorised that action to address health inequality converges around a unifying theme or pivot, and developed a heuristic that describes the features of this convergence. In one case, the convergence was a single decision-making platform for deliberation around myriad village development issues, while in the other, convergence brought together communities, legal, police, and health system action around one salient health issue. One case emphasized demand generation, the other was focussed on improving quality and supply of services. In both cases, the operationalization of equity broke beyond a biomedical or clinical focus. Dearth of data meant that implementers exercised various strategies to gather it, and to develop interventions - always around a core issue or population. This exercise demonstrated the possibility of constructive engagement between implementers and researchers to understand and theorize action on health equity and the social determinants of health. This heuristic developed may be of use not just for further research, but also for on-going appraisal and design of policy and praxis, both sensitive to and reflective of Indian concerns and understandings.
Botti, Mari; Kent, Bridie; Bucknall, Tracey; Duke, Maxine; Johnstone, Megan-Jane; Considine, Julie; Redley, Bernice; Hunter, Susan; de Steiger, Richard; Holcombe, Marlene; Cohen, Emma
2014-08-28
Evidence from clinical practice and the extant literature suggests that post-operative pain assessment and treatment is often suboptimal. Poor pain management is likely to persist until pain management practices become consistent with guidelines developed from the best available scientific evidence. This work will address the priority in healthcare of improving the quality of pain management by standardising evidence-based care processes through the incorporation of an algorithm derived from best evidence into clinical practice. In this paper, the methodology for the creation and implementation of such an algorithm that will focus, in the first instance, on patients who have undergone total hip or knee replacement is described. In partnership with clinicians, and based on best available evidence, the aim of the Management Algorithm for Post-operative Pain (MAPP) project is to develop, implement, and evaluate an algorithm designed to support pain management decision-making for patients after orthopaedic surgery. The algorithm will provide guidance for the prescription and administration of multimodal analgesics in the post-operative period, and the treatment of breakthrough pain. The MAPP project is a multisite study with one coordinating hospital and two supporting (rollout) hospitals. The design of this project is a pre-implementation-post-implementation evaluation and will be conducted over three phases. The Promoting Action on Research Implementation in Health Services (PARiHS) framework will be used to guide implementation. Outcome measurements will be taken 10 weeks post-implementation of the MAPP. The primary outcomes are: proportion of patients prescribed multimodal analgesics in accordance with the MAPP; and proportion of patients with moderate to severe pain intensity at rest. These data will be compared to the pre-implementation analgesic prescribing practices and pain outcome measures. A secondary outcome, the efficacy of the MAPP, will be measured by comparing pain intensity scores of patients where the MAPP guidelines were or were not followed. The outcomes of this study have relevance for nursing and medical professionals as well as informing health service evaluation. In establishing a framework for the sustainable implementation and evaluation of a standardised approach to post-operative pain management, the findings have implications for clinicians and patients within multiple surgical contexts.
Schneider, Kathrin; Skovran, Elizabeth
2012-01-01
Oxalate catabolism is conducted by phylogenetically diverse organisms, including Methylobacterium extorquens AM1. Here, we investigate the central metabolism of this alphaproteobacterium during growth on oxalate by using proteomics, mutant characterization, and 13C-labeling experiments. Our results confirm that energy conservation proceeds as previously described for M. extorquens AM1 and other characterized oxalotrophic bacteria via oxalyl-coenzyme A (oxalyl-CoA) decarboxylase and formyl-CoA transferase and subsequent oxidation to carbon dioxide via formate dehydrogenase. However, in contrast to other oxalate-degrading organisms, the assimilation of this carbon compound in M. extorquens AM1 occurs via the operation of a variant of the serine cycle as follows: oxalyl-CoA reduction to glyoxylate and conversion to glycine and its condensation with methylene-tetrahydrofolate derived from formate, resulting in the formation of C3 units. The recently discovered ethylmalonyl-CoA pathway operates during growth on oxalate but is nevertheless dispensable, indicating that oxalyl-CoA reductase is sufficient to provide the glyoxylate required for biosynthesis. Analysis of an oxalyl-CoA synthetase- and oxalyl-CoA-reductase-deficient double mutant revealed an alternative, although less efficient, strategy for oxalate assimilation via one-carbon intermediates. The alternative process consists of formate assimilation via the tetrahydrofolate pathway to fuel the serine cycle, and the ethylmalonyl-CoA pathway is used for glyoxylate regeneration. Our results support the notion that M. extorquens AM1 has a plastic central metabolism featuring multiple assimilation routes for C1 and C2 substrates, which may contribute to the rapid adaptation of this organism to new substrates and the eventual coconsumption of substrates under environmental conditions. PMID:22493020
Fast mix table construction for material discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S. R.
2013-07-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mixmore » table in O(number of voxels x log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation. (authors)« less
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo--deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a ``mix table,'' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table inmore » $$O(\\text{number of voxels}\\times \\log \\text{number of mixtures})$$ time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.« less
Effects of using multi-vide ruler kit in the acquisition of numeracy skills among PROTIM students
NASA Astrophysics Data System (ADS)
Arumugan, Hemalatha A./P.; Obeng, Sharifah Nasriah Wan; Talib, Corrienna Abdul; Bunyamin, Muhammad Abdul Hadi; Ali, Marlina; Ibrahim, Norhasniza; Zawadzki, Rainer
2017-08-01
One effective way to teach arithmetic more interestingly and make it easier to learn is through the use of instructional materials. These can help students master certain mathematical skills, particularly multiplication and division, often considered difficult amongst primary school pupils. Nevertheless, the insufficiency of appropriate instructional materials causes difficulty in understanding how to use the proper technique or apply the concept, especially in multiplication. With this in mind, this study investigated whether the innovative and creative instructional material designed to assist and enhance numeracy skills, namely the Multi-vide Ruler kit, could increase students' ability in solving multiplication and division questions and whether it affected their interest in solving numeracy problems. Participants in this study included ten PROTIM (Program Tiga M [Three M Program] - membaca [reading], menulis [writing] dan mengira [calculate]) students, 9-10 years old, who had difficulties in reading, writing and arithmetic. In order to get appropriate support for qualitative research, a pre and post-test containing ten basic mathematical operations, was implemented together with the Multi-vide Ruler Kit. The findings of the qualitative case study, with the pre and post-tests, showed significant differences in their achievement and interest in two-digit multiplication and division operations. The results suggest that this approach could improve PROTIM student's ability to solve basic mathematical operations. What was most encouraging was the increase in students' interest in solving numeracy problems.
Exploratory Analysis of Carbon Dioxide Levels and Ultrasound Measures of the Eye During ISS Missions
NASA Technical Reports Server (NTRS)
Schaefer, C.; Young, M.; Mason, S.; Coble, C.; Wear, M. L.; Sargsyan, A.; Garcia, K.; Law. J.; Alexander, D.; Ryder, V. Myers;
2016-01-01
Carbon dioxide (CO2) levels on ISS have typically averaged 2.3 to 5.3mm Hg, with large fluctuations occurring over periods of hours and days. CO2 has effects on cerebral vascular tone, resulting in vasodilation and alteration of cerebral blood flow(CBF). Increased CBF leads to elevated intracranial pressure(ICP), which is a factor leading to visual disturbance, headaches, and other central nervous system symptoms. Ultrasound of the optic nerve provides a surrogate measurement of ICP. Inflight ultrasounds were implemented as an enhanced screening tool for the Visual Impairment/Intracranial Pressure (VIIP) Syndrome. This analysis examines the relationships between ambient CO2 levels on ISS and ultrasound measures of the eye in an effort to understand how CO2 may be associated with VIIP and to inform future analysis of inflight VIIP data. Results as shown in Figure2, there was a large timeframe where CO2 readings were removed due to sensor fault errors(see Limitations), from June 2011 to January 2012. After extensive cleaning of the CO2 data, metrics for all of the data were calculated (Table2). Preliminary analyses showed possible associations between variability measures of CO2 and AP diameter (Figure3),and average CO2 exposure and ONSD(Figure4). Adjustments for multiple comparisons were not made due to the exploratory nature of the analysis.
HMC algorithm with multiple time scale integration and mass preconditioning
NASA Astrophysics Data System (ADS)
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
Applying Lean principles and Kaizen rapid improvement events in public health practice.
Smith, Gene; Poteat-Godwin, Annah; Harrison, Lisa Macon; Randolph, Greg D
2012-01-01
This case study describes a local home health and hospice agency's effort to implement Lean principles and Kaizen methodology as a rapid improvement approach to quality improvement. The agency created a cross-functional team, followed Lean Kaizen methodology, and made significant improvements in scheduling time for home health nurses that resulted in reduced operational costs, improved working conditions, and multiple organizational efficiencies.
Design and implement of mobile equipment management system based on QRcode
NASA Astrophysics Data System (ADS)
Yu, Runze; Duan, Xiaohui; Jiao, Bingli
2017-08-01
A mobile equipment management system based on QRcode is proposed for remote and convenient device management. Unlike conventional systems, the system here makes managers accessible to real-time information with smart phones. Compared with the conventional method, which can only be operated with specific devices, this lightweight and efficient tele management mode is conducive to the asset management in multiple scenarios.
CAUSE Resiliency (West Coast) Experiment Final Report
2012-10-01
implemented in BCeMap and can therefore consume alerting messages direct from MASAS. This would solve the issue with the update frequency and speed of the...in production for use by the Provincial Emergency Operations Centres and brings together multiple static layers together with several dynamic data...executive order established the requirement for an “effective, reliable, integrated, flexible, and comprehensive system to alert and warn the
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
The HAL 9000 Space Operating System Real-Time Planning Engine Design and Operations Requirements
NASA Technical Reports Server (NTRS)
Stetson, Howard; Watson, Michael D.; Shaughnessy, Ray
2012-01-01
In support of future deep space manned missions, an autonomous/automated vehicle, providing crew autonomy and an autonomous response planning system, will be required due to the light time delays in communication. Vehicle capabilities as a whole must provide for tactical response to vehicle system failures and space environmental effects induced failures, for risk mitigation of permanent loss of communication with Earth, and for assured crew return capabilities. The complexity of human rated space systems and the limited crew sizes and crew skills mix drive the need for a robust autonomous capability on-board the vehicle. The HAL 9000 Space Operating System[2] designed for such missions and space craft includes the first distributed real-time planning / re-planning system. This paper will detail the software architecture of the multiple planning engine system, and the interface design for plan changes, approval and implementation that is performed autonomously. Operations scenarios will be defined for analysis of the planning engines operations and its requirements for nominal / off nominal activities. An assessment of the distributed realtime re-planning system, in the defined operations environment, will be provided as well as findings as it pertains to the vehicle, crew, and mission control requirements needed for implementation.
A challenging hysteresis operator for the simulation of Goss-textured magnetic materials
NASA Astrophysics Data System (ADS)
Cardelli, Ermanno; Faba, Antonio; Laudani, Antonino; Pompei, Michele; Quondam Antonio, Simone; Fulginei, Francesco Riganti; Salvini, Alessandro
2017-06-01
A new hysteresis operator for the simulation of Goss-textured ferromagnets is here defined. The operator is derived from the classic Stoner-Wohlfarth model, where the anisotropy energy is assumed to be cubic instead of uniaxial, in order to reproduce the magnetic behavior of Goss textured ferromagnetic materials, such as grain-oriented Fe-Si alloys, Ni-Fe alloys, and Ni-Co alloys. A vector hysteresis model based on a single hysteresis operator is then implemented and used for the prediction of the rotational magnetizations that have been measured in a sample of grain-oriented electrical steel. This is especially promising for FEM based calculations, where the magnetization state in each point must be recalculated at each time step. Finally, the computed loops, as well as the magnetic losses, are compared to the measured data.
Your View or Mine: Spatially Quantifying CO2 Storage Risk from Various Stakeholder Perspectives
NASA Astrophysics Data System (ADS)
Bielicki, J. M.; Pollak, M.; Wilson, E.; Elliot, T. R.; Guo, B.; Nogues, J. P.; Peters, C. A.
2011-12-01
CO2 capture and storage involves injecting captured CO2 into geologic formations, such as deep saline aquifers. This injected CO2 is to be "stored" within the rock matrix for hundreds to thousands of years, but injected CO2, or the brine it displaces, may leak from the target reservoir. Such leakage could interfere with other subsurface activities-water production, energy production, energy storage, and waste disposal-or migrate to the surface. Each of these interferences will incur multiple costs to a variety of stakeholders. Even if injected or displaced fluids do not interfere with other subsurface activities or make their way to the surface, costs will be incurred to find and fix the leak. Consequently, the suitability of a site for CO2 storage must therefore include an assessment of the risk of leakage and interference with various other activities within a three-dimensional proximity of where CO2 is being injected. We present a spatial analysis of leakage and interference risk associated with injecting CO2 into a portion of the Mount Simon sandstone in the Michigan Basin. Risk is the probability of an outcome multiplied by the impact of that outcome (Ro=po*Io). An outcome is the result of the leakage (e.g., interference with oil production), and the impact is the cost associated with the outcome. Each outcome has costs that will vary by stakeholder. Our analysis presents CO2 storage risk for multiple outcomes in a spatially explicit manner that varies by stakeholder. We use the ELSA semi-analytical model for estimating CO2 and brine leakage from aquifers to determine plume and pressure front radii, and CO2 and brine leakage probabilities for the Mount Simon sandstone and multiple units above it. Results of ELSA simulations are incorporated into RISCS: the Risk Interference Subsurface CO2 Storage model. RISCS uses three-dimensional data on subsurface geology and the locations of wells and boreholes to spatially estimate risks associated with CO2 leakage from injection reservoirs. Where plumes probabilistically intersect subsurface activities, reach groundwater, or reach the surface, RISCS uses cost estimates from the Leakage Impact Valuation framework to estimate CO2 storage leakage and interference risk in monetary terms. This framework estimates costs that might be incurred if CO2 leaks from an injection reservoir. Such leakage could beget a variety of costs, depending on the nature and extent of the impacts. The framework identifies multiple costs under headings of: (a) finding and fixing the leak, (b) business disruption, and (c) cleaning up and paying for damages. The framework also enumerates the distribution of costs between ten different stakeholders, and allocates these costs along four leakage scenarios: 1) No interference, 2) interference with a subsurface activity, 3) interference with groundwater, and 4) migration to the surface. Our methodology facilitates research along two lines. First, it allows a probabilistic assessment of leakage costs to an injection operator, and thus what the effect of leakage might be on CCS market effectiveness. Second, it allows a broader inquiry about injection site prioritization from the point of view of various stakeholders.
Back-end and interface implementation of the STS-XYTER2 prototype ASIC for the CBM experiment
NASA Astrophysics Data System (ADS)
Kasinski, K.; Szczygiel, R.; Zabolotny, W.
2016-11-01
Each front-end readout ASIC for the High-Energy Physics experiments requires robust and effective hit data streaming and control mechanism. A new STS-XYTER2 full-size prototype chip for the Silicon Tracking System and Muon Chamber detectors in the Compressed Baryonic Matter experiment at Facility for Antiproton and Ion Research (FAIR, Germany) is a 128-channel time and amplitude measuring solution for silicon microstrip and gas detectors. It operates at 250 kHit/s/channel hit rate, each hit producing 27 bits of information (5-bit amplitude, 14-bit timestamp, position and diagnostics data). The chip back-end implements fast front-end channel read-out, timestamp-wise hit sorting, and data streaming via a scalable interface implementing the dedicated protocol (STS-HCTSP) for chip control and hit transfer with data bandwidth from 9.7 MHit/s up to 47 MHit/s. It also includes multiple options for link diagnostics, failure detection, and throttling features. The back-end is designed to operate with the data acquisition architecture based on the CERN GBTx transceivers. This paper presents the details of the back-end and interface design and its implementation in the UMC 180 nm CMOS process.
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (3 of 4) contains the specification for the command language for the AMPS system. The volume contains a requirements specification for the operating system and commands and a design specification for the operating system and command. The operating system and commands sits on top of the protocol. The commands are an extension of the present set of AMPS commands in that the commands are more compact, allow multiple sub-commands to be bundled into one command, and have provisions for identifying the sender and the intended receiver. The commands make no change to the actual software that implement the commands.
Network command processing system overview
NASA Technical Reports Server (NTRS)
Nam, Yon-Woo; Murphy, Lisa D.
1993-01-01
The Network Command Processing System (NCPS) developed for the National Aeronautics and Space Administration (NASA) Ground Network (GN) stations is a spacecraft command system utilizing a MULTIBUS I/68030 microprocessor. This system was developed and implemented at ground stations worldwide to provide a Project Operations Control Center (POCC) with command capability for support of spacecraft operations such as the LANDSAT, Shuttle, Tracking and Data Relay Satellite, and Nimbus-7. The NCPS consolidates multiple modulation schemes for supporting various manned/unmanned orbital platforms. The NCPS interacts with the POCC and a local operator to process configuration requests, generate modulated uplink sequences, and inform users of the ground command link status. This paper presents the system functional description, hardware description, and the software design.
Witt, Adam; Magee, Timothy; Stewart, Kevin; ...
2017-08-10
Managing energy, water, and environmental priorities and constraints within a cascade hydropower system is a challenging multiobjective optimization effort that requires advanced modeling and forecasting tools. Within the mid-Columbia River system, there is currently a lack of specific solutions for predicting how coordinated operational decisions can mitigate the impacts of total dissolved gas (TDG) supersaturation while satisfying multiple additional policy and hydropower generation objectives. In this study, a reduced-order TDG uptake equation is developed that predicts tailrace TDG at seven hydropower facilities on the mid-Columbia River. The equation is incorporated into a general multiobjective river, reservoir, and hydropower optimization toolmore » as a prioritized operating goal within a broader set of system-level objectives and constraints. A test case is presented to assess the response of TDG and hydropower generation when TDG supersaturation is optimized to remain under state water-quality standards. Satisfaction of TDG as an operating goal is highly dependent on whether constraints that limit TDG uptake are implemented at a higher priority than generation requests. According to the model, an opportunity exists to reduce TDG supersaturation and meet hydropower generation requirements by shifting spillway flows to different time periods. In conclusion, a coordinated effort between all project owners is required to implement systemwide optimized solutions that satisfy the operating policies of all stakeholders.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witt, Adam; Magee, Timothy; Stewart, Kevin
Managing energy, water, and environmental priorities and constraints within a cascade hydropower system is a challenging multiobjective optimization effort that requires advanced modeling and forecasting tools. Within the mid-Columbia River system, there is currently a lack of specific solutions for predicting how coordinated operational decisions can mitigate the impacts of total dissolved gas (TDG) supersaturation while satisfying multiple additional policy and hydropower generation objectives. In this study, a reduced-order TDG uptake equation is developed that predicts tailrace TDG at seven hydropower facilities on the mid-Columbia River. The equation is incorporated into a general multiobjective river, reservoir, and hydropower optimization toolmore » as a prioritized operating goal within a broader set of system-level objectives and constraints. A test case is presented to assess the response of TDG and hydropower generation when TDG supersaturation is optimized to remain under state water-quality standards. Satisfaction of TDG as an operating goal is highly dependent on whether constraints that limit TDG uptake are implemented at a higher priority than generation requests. According to the model, an opportunity exists to reduce TDG supersaturation and meet hydropower generation requirements by shifting spillway flows to different time periods. In conclusion, a coordinated effort between all project owners is required to implement systemwide optimized solutions that satisfy the operating policies of all stakeholders.« less
Implementation of a Relay Coordination System for the Mars Network
NASA Technical Reports Server (NTRS)
Allard, Daniel A.
2010-01-01
Mars network relay operations involve the coordination of lander and orbiter teams through long-term and short-term planning, tactical changes and post-pass analysis. Much of this coordination is managed through email traffic and point-to-point file data exchanges. It is often difficult to construct a complete and accurate picture of the relay situation at any given moment, as there is no centralized store of correlated relay data. The Mars Relay Operations Service (MaROS) is being implemented to address the problem of relay coordination for current and next-generation relay missions. The service is provided for the purpose of coordinating communications sessions between landed spacecraft assets and orbiting spacecraft assets at Mars. The service centralizes a set of functions previously distributed across multiple spacecraft operations teams, and as such greatly improves visibility into the end-to-end strategic coordination process. Most of the process revolves around the scheduling of communications sessions between the spacecraft during periods of time when a landed asset on Mars is geometrically visible by an orbiting spacecraft. These "relay" sessions are used to transfer data both to and from the landed asset via the orbiting asset on behalf of Earth-based spacecraft operators. This paper will discuss the relay coordination problem space, overview the architecture and design selected to meet system requirements, and describe the first phase of system implementation
Making a move in exercise referral: co-development of a physical activity referral scheme.
Buckley, B J R; Thijssen, D H J; Murphy, R C; Graves, L E F; Whyte, G; Gillison, F B; Crone, D; Wilson, P M; Watson, P M
2018-04-24
Translational research is required to ensure exercise referral schemes (ERSs) are evidence-based and reflect local needs. This article reports process data from the co-development phase of an ERS, providing an insight into (i) factors that must be considered when translating evidence to practice in an ERS setting, and (ii) challenges and facilitators of conducting participatory research involving multiple stakeholders. An ERS was iteratively co-developed by a multidisciplinary stakeholder group (commissioners, managers, practitioners, patients and academics) via five participatory meetings and an online survey. Audio data (e.g. group discussions) and visual data (e.g. whiteboard notes) were recorded and analysed using NVivo-10 electronic software. Factors to consider when translating evidence to practice in an ERS setting included (i) current ERS culture; (ii) skills, safety and accountability; and (iii) resources and capacity. The co-development process was facilitated by needs-analysis, open questions, multidisciplinary debate and reflective practice. Challenges included contrasting views, irregular attendance and (mis)perceptions of evaluation. The multidisciplinary co-development process highlighted cultural and pragmatic issues related to exercise referral provision, resulting in an evidence-based intervention framework designed to be implemented within existing infrastructures. Further work is required to establish the feasibility and effectiveness of the co-developed intervention in practice.
Lee, Hencher Han-Chih; Mak, Chloe Miu; Poon, Grace Wing-Kit; Wong, Kar-Yin; Lam, Ching-Wan
2014-06-01
To evaluate the cost-benefit of implementing an expanded newborn screening programme for hyperphenylalaninemias due to 6-pyruvoyl-tetrahydropterin synthase (PTPS) deficiency in Hong Kong. Regional public hospitals in Hong Kong providing care for cases of inborn errors of metabolism. Implementational and operational costs of a new expanded mass spectrometry-based newborn screening programme were estimated. Data on various medical expenditures for the mild and severe phenotypic subtypes were gathered from a case cohort diagnosed with PTPS deficiency from 2001 to 2009. Local incidence from a previously published study was used. Implementation and operational costs of an expanded newborn screening programme in Hong Kong were estimated at HKD 10,473,848 (USD 1,342,801) annually. Assuming a birthrate of 50,000 per year and an incidence of 1 in 29,542 live births, the medical costs and adjusted loss of workforce per year would be HKD 20,773,207 (USD 2,663,232). Overall the annual savings from implementing the programme would be HKD 9,632,750 (USD 1,234,968). Our estimates show that implementation of an expanded newborn screening programme in Hong Kong is cost-effective, with a significant annual saving for public expenditure. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Lai, Samson Y; Ding, Dong; Liu, Mingfei; Liu, Meilin; Alamgir, Faisal M
2014-11-01
Information from ex situ characterization can fall short in describing complex materials systems simultaneously exposed to multiple external stimuli. Operando X-ray absorption spectroscopy (XAS) was used to probe the local atomistic and electronic structure of specific elements in a La0.6Sr0.4Co0.2Fe0.8O(3-δ) (LSCF) thin film cathode exposed to air contaminated with H2O and CO2 under operating conditions. While impedance spectroscopy showed that the polarization resistance of the LSCF cathode increased upon exposure to both contaminants at 750 °C, XAS near-edge and extended fine structure showed that the degree of oxidation for Fe and Co decreases with increasing temperature. Synchrotron-based X-ray photoelectron spectroscopy tracked the formation and removal of a carbonate species, a Co phase, and different oxygen moieties as functions of temperature and gas. The combined information provides insight into the fundamental mechanism by which H2O and CO2 cause degradation in the cathode of solid oxide fuel cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Co-Operation: The Antidote to Isolated Misery
ERIC Educational Resources Information Center
Jones, Sarah
2013-01-01
This is a case study demonstrating the impact the co-operative movement has had on one co-operative school in south-west England. Lipson Co-operative Academy in Plymouth was one of the first schools to convert to become a co-operative school in 2009. The article has been co-written by members of the Academy and focuses on three transformational…
Lunar surface mining for automated acquisition of helium-3: Methods, processes, and equipment
NASA Technical Reports Server (NTRS)
Li, Y. T.; Wittenberg, L. J.
1992-01-01
In this paper, several techniques considered for mining and processing the regolith on the lunar surface are presented. These techniques have been proposed and evaluated based primarily on the following criteria: (1) mining operations should be relatively simple; (2) procedures of mineral processing should be few and relatively easy; (3) transferring tonnages of regolith on the Moon should be minimized; (4) operations outside the lunar base should be readily automated; (5) all equipment should be maintainable; and (6) economic benefit should be sufficient for commercial exploitation. The economic benefits are not addressed in this paper; however, the energy benefits have been estimated to be between 250 and 350 times the mining energy. A mobile mining scheme is proposed that meets most of the mining objectives. This concept uses a bucket-wheel excavator for excavating the regolith, several mechanical electrostatic separators for beneficiation of the regolith, a fast-moving fluidized bed reactor to heat the particles, and a palladium diffuser to separate H2 from the other solar wind gases. At the final stage of the miner, the regolith 'tailings' are deposited directly into the ditch behind the miner and cylinders of the valuable solar wind gases are transported to a central gas processing facility. During the production of He-3, large quantities of valuable H2, H2O, CO, CO2, and N2 are produced for utilization at the lunar base. For larger production of He-3 the utilization of multiple-miners is recommended rather than increasing their size. Multiple miners permit operations at more sites and provide redundancy in case of equipment failure.
Lunar surface mining for automated acquisition of helium-3: Methods, processes, and equipment
NASA Astrophysics Data System (ADS)
Li, Y. T.; Wittenberg, L. J.
1992-09-01
In this paper, several techniques considered for mining and processing the regolith on the lunar surface are presented. These techniques have been proposed and evaluated based primarily on the following criteria: (1) mining operations should be relatively simple; (2) procedures of mineral processing should be few and relatively easy; (3) transferring tonnages of regolith on the Moon should be minimized; (4) operations outside the lunar base should be readily automated; (5) all equipment should be maintainable; and (6) economic benefit should be sufficient for commercial exploitation. The economic benefits are not addressed in this paper; however, the energy benefits have been estimated to be between 250 and 350 times the mining energy. A mobile mining scheme is proposed that meets most of the mining objectives. This concept uses a bucket-wheel excavator for excavating the regolith, several mechanical electrostatic separators for beneficiation of the regolith, a fast-moving fluidized bed reactor to heat the particles, and a palladium diffuser to separate H2 from the other solar wind gases. At the final stage of the miner, the regolith 'tailings' are deposited directly into the ditch behind the miner and cylinders of the valuable solar wind gases are transported to a central gas processing facility. During the production of He-3, large quantities of valuable H2, H2O, CO, CO2, and N2 are produced for utilization at the lunar base. For larger production of He-3 the utilization of multiple-miners is recommended rather than increasing their size. Multiple miners permit operations at more sites and provide redundancy in case of equipment failure.
Liese, Eric; Zitney, Stephen E.
2017-06-26
A multi-stage centrifugal compressor model is presented with emphasis on analyzing use of an exit flow coefficient vs. an inlet flow coefficient performance parameter to predict off-design conditions in the critical region of a supercritical carbon dioxide (CO 2) power cycle. A description of the performance parameters is given along with their implementation in a design model (number of stages, basic sizing, etc.) and a dynamic model (for use in transient studies). A design case is shown for two compressors, a bypass compressor and a main compressor, as defined in a process simulation of a 10 megawatt (MW) supercritical COmore » 2 recompression Brayton cycle. Simulation results are presented for a simple open cycle and closed cycle process with changes to the inlet temperature of the main compressor which operates near the CO 2 critical point. Results showed some difference in results using the exit vs. inlet flow coefficient correction, however, it was not significant for the range of conditions examined. Here, this paper also serves as a reference for future works, including a full process simulation of the 10 MW recompression Brayton cycle.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liese, Eric; Zitney, Stephen E.
A multi-stage centrifugal compressor model is presented with emphasis on analyzing use of an exit flow coefficient vs. an inlet flow coefficient performance parameter to predict off-design conditions in the critical region of a supercritical carbon dioxide (CO 2) power cycle. A description of the performance parameters is given along with their implementation in a design model (number of stages, basic sizing, etc.) and a dynamic model (for use in transient studies). A design case is shown for two compressors, a bypass compressor and a main compressor, as defined in a process simulation of a 10 megawatt (MW) supercritical COmore » 2 recompression Brayton cycle. Simulation results are presented for a simple open cycle and closed cycle process with changes to the inlet temperature of the main compressor which operates near the CO 2 critical point. Results showed some difference in results using the exit vs. inlet flow coefficient correction, however, it was not significant for the range of conditions examined. Here, this paper also serves as a reference for future works, including a full process simulation of the 10 MW recompression Brayton cycle.« less
Characterizing Uncertainties in Atmospheric Inversions of Fossil Fuel CO2 Emissions in California
NASA Astrophysics Data System (ADS)
Brophy, K. J.; Graven, H. D.; Manning, A.; Arnold, T.; Fischer, M. L.; Jeong, S.; Cui, X.; Parazoo, N.
2016-12-01
In 2006 California passed a law requiring greenhouse gas emissions be reduced to 1990 levels by 2020, equivalent to a 20% reduction over 2006-2020. Assessing compliance with greenhouse gas mitigation policies requires accurate determination of emissions, particularly for CO2 emitted by fossil fuel combustion (ffCO2). We found differences in inventory-based ffCO2 flux estimates for California total emissions of 11% (standard deviation relative to the mean), and even larger differences on some smaller sub-state levels. Top-down studies may be useful for validating ffCO2 flux estimates, but top-down studies of CO2 typically focus on biospheric CO2 fluxes and they are not yet well-developed for ffCO2. Implementing top-down studies of ffCO2 requires observations of a fossil fuel combustion tracer such as 14C to distinguish ffCO2 from biospheric CO2. However, even if a large number of 14C observations are available, multiple other sources of uncertainty will contribute to the uncertainty in posterior ffCO2 flux estimates. With a Bayesian inverse modelling approach, we use simulated atmospheric observations of ffCO2 at a network of 11 tower sites across California in an observing system simulation experiment to investigate uncertainties. We use four different prior ffCO2 flux estimates, two different atmospheric transport models, different types of spatial aggregation, and different assumptions for observational and model transport uncertainties to investigate contributions to posterior ffCO2 emission uncertainties. We show how various sources of uncertainty compare and which uncertainties are likely to limit top-down estimation of ffCO2 fluxes in California.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.
2017-02-01
This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is being searched in the form of a linear differential equation, which determines the system with multiple inputs and a single output, and a vector of the initial point coordinates. The described problem is complex and multimodal and for this reason the proposed evolutionary-based optimization technique, which is oriented on a dynamical system identification problem, was applied. To improve its performance an algorithm restart operator was implemented.
Real-time operating system for selected Intel processors
NASA Technical Reports Server (NTRS)
Pool, W. R.
1980-01-01
The rationale for system development is given along with reasons for not using vendor supplied operating systems. Although many system design and performance goals were dictated by problems with vendor supplied systems, other goals surfaced as a result of a design for a custom system able to span multiple projects. System development and management problems and areas that required redesign or major code changes for system implementation are examined as well as the relative successes of the initial projects. A generic description of the actual project is provided and the ongoing support requirements and future plans are discussed.
Demonstration of an SOA-assisted open metro-access infrastructure for heterogeneous services.
Schmuck, H; Bonk, R; Poehlmann, W; Haslach, C; Kuebart, W; Karnick, D; Meyer, J; Fritzsche, D; Weis, E; Becker, J; Freude, W; Pfeiffer, T
2014-01-13
An open converged metro-access network approach allows for sharing optical layer resources like fibers and optical spectrum among different services and operators. We demonstrated experimentally the feasibility of such a concept by the simultaneous operation of multiple services showing different modulation formats and multiplexing techniques. Flexible access nodes are implemented including semiconductor optical amplifiers to create a transparent and reconfigurable optical ring network. The impact of cascaded optical amplifiers on the signal quality is studied along the ring. In addition, the influence of high power rival signals in the same waveband and in the same fiber is analyzed.
Comparisons between stellar models and reliability of the theoretical models
NASA Astrophysics Data System (ADS)
Lebreton, Yveline; Montalbán, Josefina
2010-07-01
The high quality of the asteroseismic data provided by space missions such as CoRoT (Michel et al. in The CoRoT Mission, ESA Spec. Publ. vol. 1306, p. 39, 2006) or expected from new operating missions such as Kepler (Christensen-Dalsgaard et al. in Commun. Asteroseismol. 150:350, 2007) requires the capacity of stellar evolution codes to provide accurate models whose numerical precision is better than the expected observational errors (i.e. below 0.1 μHz on the frequencies in the case of CoRoT). We present a review of some thorough comparisons of stellar models produced by different evolution codes, involved in the CoRoT/ESTA activities (Monteiro in Evolution and Seismic Tools for Stellar Astrophysics, 2009). We examine the numerical aspects of the computations as well as the effects of different implementations of the same physics on the global quantities, physical structure and oscillations properties of the stellar models. We also discuss a few aspects of the input physics.
Multiple Ships and Multiple Media: A Flexible Telepresence Program
NASA Astrophysics Data System (ADS)
Pelz, M.; Hoeberechts, M.; Riddell, D. J.; Ewing, N.
2016-02-01
Ocean Networks Canada (ONC) uses a number of research and exploration vessels equipped with remotely operated vehicles (ROVs) to maintain the NEPTUNE and VENUS cabled ocean observatories off the west coast of British Columbia, Canada. Maintenance expeditions range from several days to multiple weeks and encompass a range of activities including deploying new instruments, laying cable, recovering platforms, scientific sampling and conducting multibeam and visual surveys. In order to engage the widest possible participation in at-sea work, ONC uses telepresence technology to communicate from ship to shore and back with scientists, students, teachers and online viewers. In this presentation, we explore the challenge of designing a sustainable and flexible telepresence program which can be supported across multiple ship and ROV platforms, sometimes simultaneously. To meet outreach and education objectives, onboard educators conduct presentations to K-12 and post-secondary classrooms, museums and science centres on a daily basis. Online commentary by the educators, dive chief and ROV pilots accompanies the ROV dive footage and is streamed online 24/7 during underwater operations. Sharing the sights and sounds of the expeditions with students and educators ashore, including those in remote and inland communities, creates a unique learning environment for both formal and informal education audiences. As space is always a limiting factor on expeditions, the use of telepresence and other communication media enables ONC to simultaneously achieve engineering and science priorities at sea while communicating the successes and challenges of the expedition back to shore. Scientists and engineers provide guidance for operations from shore using a variety of communication technologies. We give examples from Ocean Networks Canada's most recent expedition, Fall 2015, which involved co-ordinated operations with three vessels - the R/V Thompson, the E/V Nautilus and the C/S Wave Venture.
Optimizing and Quantifying CO 2 Storage Resource in Saline Formations and Hydrocarbon Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosshart, Nicholas W.; Ayash, Scott C.; Azzolina, Nicholas A.
In an effort to reduce carbon dioxide (CO 2) emissions from large stationary sources, carbon capture and storage (CCS) is being investigated as one approach. This work assesses CO 2 storage resource estimation methods for deep saline formations (DSFs) and hydrocarbon reservoirs undergoing CO 2 enhanced oil recovery (EOR). Project activities were conducted using geologic modeling and simulation to investigate CO 2 storage efficiency. CO 2 storage rates and efficiencies in DSFs classified by interpreted depositional environment were evaluated at the regional scale over a 100-year time frame. A focus was placed on developing results applicable to future widespread commercial-scalemore » CO 2 storage operations in which an array of injection wells may be used to optimize storage in saline formations. The results of this work suggest future investigations of prospective storage resource in closed or semiclosed formations need not have a detailed understanding of the depositional environment of the reservoir to generate meaningful estimates. However, the results of this work also illustrate the relative importance of depositional environment, formation depth, structural geometry, and boundary conditions on the rate of CO 2 storage in these types of systems. CO 2 EOR occupies an important place in the realm of geologic storage of CO 2, as it is likely to be the primary means of geologic CO 2 storage during the early stages of commercial implementation, given the lack of a national policy and the viability of the current business case. This work estimates CO 2 storage efficiency factors using a unique industry database of CO 2 EOR sites and 18 different reservoir simulation models capturing fluvial clastic and shallow shelf carbonate depositional environments for reservoir depths of 1219 and 2438 meters (4000 and 8000 feet) and 7.6-, 20-, and 64-meter (25-, 66,- and 209-foot) pay zones. The results of this work provide practical information that can be used to quantify CO 2 storage resource estimates in oil reservoirs during CO 2 EOR operations (as opposed to storage following depletion) and the uncertainty associated with those estimates.« less
Direct carbon dioxide emissions from civil aircraft
NASA Astrophysics Data System (ADS)
Grote, Matt; Williams, Ian; Preston, John
2014-10-01
Global airlines consume over 5 million barrels of oil per day, and the resulting carbon dioxide (CO2) emitted by aircraft engines is of concern. This article provides a contemporary review of the literature associated with the measures available to the civil aviation industry for mitigating CO2 emissions from aircraft. The measures are addressed under two categories - policy and legal-related measures, and technological and operational measures. Results of the review are used to develop several insights into the challenges faced. The analysis shows that forecasts for strong growth in air-traffic will result in civil aviation becoming an increasingly significant contributor to anthropogenic CO2 emissions. Some mitigation-measures can be left to market-forces as the key-driver for implementation because they directly reduce airlines' fuel consumption, and their impact on reducing fuel-costs will be welcomed by the industry. Other mitigation-measures cannot be left to market-forces. Speed of implementation and stringency of these measures will not be satisfactorily resolved unattended, and the current global regulatory-framework does not provide the necessary strength of stewardship. A global regulator with ‘teeth' needs to be established, but investing such a body with the appropriate level of authority requires securing an international agreement which history would suggest is going to be very difficult. If all mitigation-measures are successfully implemented, it is still likely that traffic growth-rates will continue to out-pace emissions reduction-rates. Therefore, to achieve an overall reduction in CO2 emissions, behaviour change will be necessary to reduce demand for air-travel. However, reducing demand will be strongly resisted by all stakeholders in the industry; and the ticket price-increases necessary to induce the required reduction in traffic growth-rates place a monetary-value on CO2 emissions of approximately 7-100 times greater than other common valuations. It is clear that, whilst aviation must remain one piece of the transport-jigsaw, environmentally a global regulator with ‘teeth' is urgently required.
Ausserhofer, Dietmar; Rakic, Severin; Novo, Ahmed; Dropic, Emira; Fisekovic, Eldin; Sredic, Ana; Van Malderen, Greet
2016-06-01
We explored how selected 'positive deviant' healthcare facilities in Bosnia and Herzegovina approach the continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. Standardized nursing care is internationally recognized as a critical element of safe, high-quality health care; yet very little research has examined one of its key instruments: nursing-related standard operating procedures. Despite variability in Bosnia and Herzegovina's healthcare and nursing care quality, we assumed that some healthcare facilities would have developed effective strategies to elevate nursing quality and safety through the use of standard operating procedures. Guided by the 'positive deviance' approach, we used a multiple-case study design to examine a criterion sample of four facilities (two primary healthcare centres and two hospitals), collecting data via focus groups and individual interviews. In each studied facility, certification/accreditation processes were crucial to the initiation of continuous development, adaptation, implementation, monitoring and evaluation of nursing-related SOPs. In one hospital and one primary healthcare centre, nurses working in advanced roles (i.e. quality coordinators) were responsible for developing and implementing nursing-related standard operating procedures. Across the four studied institutions, we identified a consistent approach to standard operating procedures-related processes. The certification/accreditation process is enabling necessary changes in institutions' organizational cultures, empowering nurses to take on advanced roles in improving the safety and quality of nursing care. Standardizing nursing procedures is key to improve the safety and quality of nursing care. Nursing and Health Policy are needed in Bosnia and Herzegovina to establish a functioning institutional framework, including regulatory bodies, educational systems for developing nurses' capacities or the inclusion of nursing-related standard operating procedures in certification/accreditation standards. © 2016 International Council of Nurses.
Measurement and analysis of operating system fault tolerance
NASA Technical Reports Server (NTRS)
Lee, I.; Tang, D.; Iyer, R. K.
1992-01-01
This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
NASA Technical Reports Server (NTRS)
Rede, Leonard J.; Booth, Andrew; Hsieh, Jonathon; Summer, Kellee
2004-01-01
This paper presents a discussion of the evolution of a sequencer from a simple EPICS (Experimental Physics and Industrial Control System) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a CASE (Computer Aided Software Engineering) tool approach. The main purpose of the sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Hare1 finite state machine, software program designed to orchestrate several lower-level hardware and software hard real time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORB A, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.
Knowledge Framework Implementation with Multiple Architectures - 13090
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upadhyay, H.; Lagos, L.; Quintero, W.
2013-07-01
Multiple kinds of knowledge management systems are operational in public and private enterprises, large and small organizations with a variety of business models that make the design, implementation and operation of integrated knowledge systems very difficult. In recent days, there has been a sweeping advancement in the information technology area, leading to the development of sophisticated frameworks and architectures. These platforms need to be used for the development of integrated knowledge management systems which provides a common platform for sharing knowledge across the enterprise, thereby reducing the operational inefficiencies and delivering cost savings. This paper discusses the knowledge framework andmore » architecture that can be used for the system development and its application to real life need of nuclear industry. A case study of deactivation and decommissioning (D and D) is discussed with the Knowledge Management Information Tool platform and framework. D and D work is a high priority activity across the Department of Energy (DOE) complex. Subject matter specialists (SMS) associated with DOE sites, the Energy Facility Contractors Group (EFCOG) and the D and D community have gained extensive knowledge and experience over the years in the cleanup of the legacy waste from the Manhattan Project. To prevent the D and D knowledge and expertise from being lost over time from the evolving and aging workforce, DOE and the Applied Research Center (ARC) at Florida International University (FIU) proposed to capture and maintain this valuable information in a universally available and easily usable system. (authors)« less
NASA Astrophysics Data System (ADS)
Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee R.
2004-09-01
This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.
'Part of the solution': Developing sustainable energy through co-operatives and learning
NASA Astrophysics Data System (ADS)
Duguid, Fiona C. B.
After five years of development, WindShare Co-operative in Toronto, Ontario became the first urban wind turbine in North America and the first co-operatively owned and operated wind turbine in Canada. The development of WindShare Co-operative has spurred the growth of a green energy co-operative sector in Ontario. This study, which included 27 interviews and a focus group with members of WindShare Co-operative, focuses on the roles of community-based green energy co-operatives in advancing sustainable energy development and energy literacy. Sustainable energy development is firmly rooted in the triple bottom line of environmental, social and economic success, and green energy co-operatives can be a way to help achieve those successes. Green energy co-operatives are structures for providing renewable energy generation or energy conservation practices, both of which have important environmental impacts regarding climate change and pollution levels. Co-operative structures are supported by processes that include local ownership, democracy, participation, community organizing, learning and social change. These processes have a significant social impact by creating a venue for people to be directly involved in the energy industry, by involving learning through participation in a community-based organization, and by advancing energy literacy within the membership and the general public. In regards to the economic impacts, green energy co-operatives foster a local economy and local investment opportunities, which have repercussions regarding building expertise within Ontario's green energy and co-operative development future, and more generally, captures members' interest because they have a direct stake in the co-operative. This thesis shows that green energy co-operatives, like WindShare, play an important role in advancing sustainable energy development, energy literacy and the triple bottom line. Members of WindShare expressed resounding feelings of pride, efficacy and understanding of WindShare's role in sustainable energy. WindShare Co-operative provided the structure whereby members felt a part of the solution in terms of sustainable energy development. Policies and practices at all levels of government should encourage the advancement of green energy co-operatives to support Canada's efforts at public involvement in combating climate change and pollution.
Jackson, Matthew A; Bonder, Marc Jan; Kuncheva, Zhana; Zierer, Jonas; Fu, Jingyuan; Kurilshikov, Alexander; Wijmenga, Cisca; Zhernakova, Alexandra; Bell, Jordana T; Spector, Tim D; Steves, Claire J
2018-01-01
Microbes in the gut microbiome form sub-communities based on shared niche specialisations and specific interactions between individual taxa. The inter-microbial relationships that define these communities can be inferred from the co-occurrence of taxa across multiple samples. Here, we present an approach to identify comparable communities within different gut microbiota co-occurrence networks, and demonstrate its use by comparing the gut microbiota community structures of three geographically diverse populations. We combine gut microbiota profiles from 2,764 British, 1,023 Dutch, and 639 Israeli individuals, derive co-occurrence networks between their operational taxonomic units, and detect comparable communities within them. Comparing populations we find that community structure is significantly more similar between datasets than expected by chance. Mapping communities across the datasets, we also show that communities can have similar associations to host phenotypes in different populations. This study shows that the community structure within the gut microbiota is stable across populations, and describes a novel approach that facilitates comparative community-centric microbiome analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Gary E.
The Estuary/Ocean Subgroup (EOS) is part of the research, monitoring, and evaluation (RME) effort that the Action Agencies (Bonneville Power Administration, U.S. Army Corps of Engineers, U.S. Bureau of Reclamation) developed in response to obligations arising from the Endangered Species Act as applied to operation of the Federal Columbia River Power System (FCRPS). The goal of the EOS project is to facilitate activities of the estuary/ocean RME subgroup as it coordinates design and implementation of federal RME in the lower Columbia River and estuary. The EOS is one of multiple work groups in the federal research, monitoring, and evaluation (RME)more » effort developed in response to responsibilities arising from the Endangered Species Act as a result of operation of the FCRPS. The EOS is tasked by NOAA Fisheries and the Action Agencies to design and coordinate implementation of the federal RME plan for the lower Columbia River and estuary, including the plume.« less
Design and Development of a Baseband Processor for the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Lee, Kerry D.
1996-01-01
This paper describes the implementation of the operational baseband processor (BBP) subsystem on board the NASA Advanced Communications Technology Satellite (ACTS). The BBP supports the network consisting of the NASA ground station (NGS) low burst rate (LBR) terminals, and the T1 very small aperture terminals (VSAT's), to provide flexible, demand assigned satellite switched (SS), baseband processed frequency division modulated (FDM)/time division multiple access (TDMA) operations. This paper presents an overview of the baseband processor and includes a description of the data flow, functional block diagrams, and a discussion of the implementation of BBP. A discussion of the supporting technologies for the BBP is presented. A brief summary of BBP-level performance testing is also presented. Finally, a discussion of the implications of current technology on the BBP design, if it were to be developed today, is presented.
NASA Astrophysics Data System (ADS)
Palacz, M.; Haida, M.; Smolka, J.; Nowak, A. J.; Hafner, A.
2016-09-01
In this study, the comparison of the accuracy of the homogeneous equilibrium model (HEM) and homogeneous relaxation model (HRM) is presented. Both models were applied to simulate the CO2 expansion inside the two-phase ejectors. Moreover, the mentioned models were implemented in the robust and efficient computational tool ejectorPL. That tool guarantees the fully automated computational process and the repeatable computations for the various ejector shapes and operating conditions. The simulated motive nozzle mass flow rates were compared to the experimentally measured mass flow rates. That comparison was made for both, HEM and HRM. The results showed the unsatisfying fidelity of the HEM for the operating regimes far from the carbon dioxide critical point. On the other hand, the HRM accuracy for such conditions was slightly higher. The approach presented in this paper, showed the limitation of applicability of both two-phase models for the expansion phenomena inside the ejectors.
Climate change and food security in East Asia.
Su, Yi-Yuan; Weng, Yi-Hao; Chiu, Ya-Wen
2009-01-01
Climate change causes serious food security risk for East Asian countries. The United Nations Framework Convention on Climate Change (UNFCCC) has recognized that the climate change will impact agriculture and all nations should prepare adaptations to the impacts on food security. This article reviews the context of adaptation rules and current policy development in East Asian region. The UNFCCC and Kyoto Protocol have established specific rules for countries to develop national or regional adaptation policies and measurements. The current development of the ASEAN Strategic Plan on food security is inspiring, but the commitments to implementation by its members remain an issue of concern. We suggest that the UNFCCC enhances co-operation with the Food and Agriculture Organization (FAO) and other international organizations to further develop methodologies and technologies for all parties. Our findings suggest that agriculture is one of the most vulnerable sectors in terms of risks associated with climate change and distinct programmatic initiatives are necessary. It's imperative to promote co-operation among multilateral organizations, including the UNFCCC, FAO, World Health Organization, and others.
NASA Astrophysics Data System (ADS)
Jones, Jerry; Rhoades, Valerie; Arner, Radford; Clem, Timothy; Cuneo, Adam
2007-04-01
NDE measurements, monitoring, and control of smart and adaptive composite structures requires that the central knowledge system have an awareness of the entire structure. Achieving this goal necessitates the implementation of an integrated network of significant numbers of sensors. Additionally, in order to temporally coordinate the data from specially distributed sensors, the data must be time relevant. Early adoption precludes development of sensor technology specifically for this application, instead it will depend on the ability to utilize legacy systems. Partially supported by the U.S. Department of Commerce, National Institute of Standards and Technology, Advanced Technology Development Program (NIST-ATP), a scalable integrated system has been developed to implement monitoring of structural integrity and the control of adaptive/intelligent structures. The project, called SHIELD (Structural Health Identification and Electronic Life Determination), was jointly undertaken by: Caterpillar, N.A. Tech., Motorola, and Microstrain. SHIELD is capable of operation with composite structures, metallic structures, or hybrid structures. SHIELD consists of a real-time processing core on a Motorola MPC5200 using a C language based real-time operating system (RTOS). The RTOS kernel was customized to include a virtual backplane which makes the system completely scalable. This architecture provides for multiple processes to be operating simultaneously. They may be embedded as multiple threads on the core hardware or as separate independent processors connected to the core using a software driver called a NAT-Network Integrator (NATNI). NATNI's can be created for any communications application. In it's current embodiment, NATNI's have been created for CAN bus, TCP/IP (Ethernet) - both wired and 802.11 b and g, and serial communications using RS485 and RS232. Since SHIELD uses standard C language, it is easy to port any monitoring or control algorithm, thus providing for legacy technology which may use other hardware processors and various communications means. For example, two demonstrations of SHIELD have been completed, in January and May 2005 respectively. One demonstration used algorithms in C running in multiple threads in the SHIELD core and utilizing two different sensor networks, one CAN bus and one wireless. The second had algorithms operating in C on the SHIELD core and other algorithms running on multiple Texas Instruments DSP processors using a NATNI that communicated via wired TCP/IP. A key feature of SHIELD is the implementation of a wireless ZIGBEE (802.15.4) network for implementing large numbers of small, low cost, low power sensors communication via a meshstar wireless network. While SHIELD was designed to integrate with a wide variety of existing communications protocols, a ZIGBEE network capability was implemented specifically for SHIELD. This will facilitate the monitoring of medium to very large structures including marine applications, utility scale multi-megawatt wind energy systems, and aircraft/spacecraft. The SHIELD wireless network will facilitate large numbers of sensors (up to 32000), accommodate sensors embedded into the composite material, can communicate to both sensors and actuators, and prevents obsolescence by providing for re-programming of the nodes via remote RF communications. The wireless network provides for ultra-low energy use, spatial location, and accurate timestamping, utilizing the beaconing feature of ZIGBEE.
NASA Technical Reports Server (NTRS)
Simpson, Robert W.
1993-01-01
This presentation outlines a concept for an adaptive, interactive decision support system to assist controllers at a busy airport in achieving efficient use of multiple runways. The concept is being implemented as a computer code called FASA (Final Approach Spacing for Aircraft), and will be tested and demonstrated in ATCSIM, a high fidelity simulation of terminal area airspace and airport surface operations. Objectives are: (1) to provide automated cues to assist controllers in the sequencing and spacing of landing and takeoff aircraft; (2) to provide the controller with a limited ability to modify the sequence and spacings between aircraft, and to insert takeoffs and missed approach aircraft in the landing flows; (3) to increase spacing accuracy using more complex and precise separation criteria while reducing controller workload; and (4) achieve higher operational takeoff and landing rates on multiple runways in poor visibility.
Stochastic determination of matrix determinants.
Dorn, Sebastian; Ensslin, Torsten A
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations-matrices-acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov
2016-06-15
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable onmore » high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior are shown.« less
Hager, Robert; Yoon, E. S.; Ku, S.; ...
2016-04-04
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computingmore » systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. As a result, the collision operator's good weak and strong scaling behavior are shown.« less
Moving Toward Space Internetworking via DTN: Its Operational Challenges, Benefits, and Management
NASA Technical Reports Server (NTRS)
Barkley, Erik; Burleigh, Scott; Gladden, Roy; Malhotra, Shan; Shames, Peter
2010-01-01
The international space community has begun to recognize that the established model for management of communications with spacecraft - commanded data transmission over individual pair-wise contacts - is operationally unwieldy and will not scale in support of increasingly complex and sophisticated missions such as NASA's Constellation project. Accordingly, the international Inter-Agency Operations Advisory Group (IOAG) ichartered a Space Internetworking Strategy Group (SISG), which released its initial recommendations in a November 2008 report. The report includes a recommendation that the space flight community adopt Delay-Tolerant Networking (DTN) to address the problem of interoperability and communication scaling, especially in mission environments where there are multiple spacecraft operating in concert. This paper explores some of the issues that must be addressed in implementing, deploying, and operating DTN as part of a multi-mission, multi-agency space internetwork as well as benefits and future operational scenarios afforded by DTN-based space internetworking.
Variability of pCO2 in surface waters and development of prediction model.
Chung, Sewoong; Park, Hyungseok; Yoo, Jisu
2018-05-01
Inland waters are substantial sources of atmospheric carbon, but relevant data are rare in Asian monsoon regions including Korea. Emissions of CO 2 to the atmosphere depend largely on the partial pressure of CO 2 (pCO 2 ) in water; however, measured pCO 2 data are scarce and calculated pCO 2 can show large uncertainty. This study had three objectives: 1) to examine the spatial variability of pCO 2 in diverse surface water systems in Korea; 2) to compare pCO 2 calculated using pH-total alkalinity (Alk) and pH-dissolved inorganic carbon (DIC) with pCO 2 measured by an in situ submersible nondispersive infrared detector; and 3) to characterize the major environmental variables determining the variation of pCO 2 based on physical, chemical, and biological data collected concomitantly. Of 30 samples, 80% were found supersaturated in CO 2 with respect to the overlying atmosphere. Calculated pCO 2 using pH-Alk and pH-DIC showed weak prediction capability and large variations with respect to measured pCO 2 . Error analysis indicated that calculated pCO 2 is highly sensitive to the accuracy of pH measurements, particularly at low pH. Stepwise multiple linear regression (MLR) and random forest (RF) techniques were implemented to develop the most parsimonious model based on 10 potential predictor variables (pH, Alk, DIC, Uw, Cond, Turb, COD, DOC, TOC, Chla) by optimizing model performance. The RF model showed better performance than the MLR model, and the most parsimonious RF model (pH, Turb, Uw, Chla) improved pCO 2 prediction capability considerably compared with the simple calculation approach, reducing the RMSE from 527-544 to 105μatm at the study sites. Copyright © 2017 Elsevier B.V. All rights reserved.
Castrillón, L; Marañón, E; Fernández-Nava, Y; Ormaechea, P; Quiroga, G
2013-05-01
The aim of the present research work was to boost biogas production from cattle manure (CM) by adding food waste (FW) and crude glycerin (Gly) from the biodiesel industry as co-substrates. For this purpose, different quantities of FW and Gly were added to CM and co-digested in an induced bed reactor (IBR) at 55 °C. Sonication pre-treatment was implemented in the CM+Gly mixture, applying 550 kJ/kg TS to enhance the biodegradability of these co-substrates. The best results were obtained with mixtures of 87/10/3 (CM/FW/Gly) (w/w) operating at an organic loading rate of 7 g COD/L day, obtaining 92% COD removal, a specific methane yield of 640 L CH4/kg VS and a methane production rate of 2.6L CH4/L day. These results doubled those obtained in the co-digestion of CM and FW without the addition of Gly (330 L CH4/kg VS and 1.2L CH4/L day). Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dysart, Jonathan
An Eckert & Ziegler Bebig Co0.A86 cobalt 60 high dose rate (HDR) brachytherapy source was commissioned for clinical use. Long-lived Co-60 HDR sources offer potential logistical and economic advantages over Ir-192 sources, and should be considered for low to medium workload brachytherapy departments where modest increases in treatment times are not a factor. In optimized plans, the Co-60 source provides a similar dose distribution to Ir-192 despite the difference in radiation energy. By switching to Co-60, source exchange frequency can be reduced by a factor of 20, resulting in overall financial savings of more than 50% compared to Ir-192 sources.more » In addition, a reduction in Physicist QA workload of roughly 200 hours over the 5 year life of the Co-60 source is also expected. These benefits should be considered against the modest increases in average treatment time compared to those of Ir-192 sources, as well as the centre-specific needs for operating room shielding modification.« less
Rodehaver, Claire; Fearing, Deb
2005-07-01
Several factors contribute to the potential for patient confusion regarding his or her medication regimen, including multiple names for a single drug and formulary variations when the patient receives medications from more than one pharmacy. A 68-year-old woman was discharged from the hospital on a HMG-CoA reductase inhibitor (statin) and resumed her home statin. Eleven days later she returned to the hospital with a diagnosis of severe rhabdomyolysis due to statin overdose. IMPLEMENTING SOLUTIONS: Miami Valley Hospital, Dayton, Ohio, implemented a reconciliation process and order form at admission and discharge to reduce the likelihood that this miscommunication would recur. Initial efforts were trialed on a 44-bed orthopedic unit, with spread of the initiative to the cardiac units and finally to the remaining 22 nursing units. The team successfully implemented initiation of the order sheet, yet audits indicated the need for improvement in reconciling the medications within 24 hours of admission and in reconciling the home medications at the point of discharge. Successful implementation of the order sheet to drive reconciliation takes communication, perseverance, and a multidisciplinary team approach.
Tensor Algebra Library for NVidia Graphics Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liakh, Dmitry
This is a general purpose math library implementing basic tensor algebra operations on NVidia GPU accelerators. This software is a tensor algebra library that can perform basic tensor algebra operations, including tensor contractions, tensor products, tensor additions, etc., on NVidia GPU accelerators, asynchronously with respect to the CPU host. It supports a simultaneous use of multiple NVidia GPUs. Each asynchronous API function returns a handle which can later be used for querying the completion of the corresponding tensor algebra operation on a specific GPU. The tensors participating in a particular tensor operation are assumed to be stored in local RAMmore » of a node or GPU RAM. The main research area where this library can be utilized is the quantum many-body theory (e.g., in electronic structure theory).« less
Interactive intelligent remote operations: application to space robotics
NASA Astrophysics Data System (ADS)
Dupuis, Erick; Gillett, G. R.; Boulanger, Pierre; Edwards, Eric; Lipsett, Michael G.
1999-11-01
A set of tolls addressing the problems specific to the control and monitoring of remote robotic systems from extreme distances has been developed. The tools include the capability to model and visualize the remote environment, to generate and edit complex task scripts, to execute the scripts to supervisory control mode and to monitor and diagnostic equipment from multiple remote locations. Two prototype systems are implemented for demonstration. The first demonstration, using a prototype joint design called Dexter, shows the applicability of the approach to space robotic operation in low Earth orbit. The second demonstration uses a remotely controlled excavator in an operational open-pit tar sand mine. This demonstrates that the tools developed can also be used for planetary exploration operations as well as for terrestrial mining applications.
Improved repetition rate mixed isotope CO{sub 2} TEA laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohn, D. B., E-mail: dbctechnology@earthlink.net
2014-09-15
A compact CO{sub 2} TEA laser has been developed for remote chemical detection that operates at a repetition rate of 250 Hz. It emits 700 mJ/pulse at 10.6 μm in a multimode beam with the {sup 12}C{sup 16}O{sub 2} isotope. With mixed {sup 12}C{sup 16}O{sub 2} plus {sup 13}C{sup 16}O{sub 2} isotopes it emits multiple lines in both isotope manifolds to improve detection of a broad range of chemicals. In particular, output pulse energies are 110 mJ/pulse at 9.77 μm, 250 mJ/pulse at 10 μm, and 550 mJ/pulse at 11.15 μm, useful for detection of the chemical agents Sarin, Tabun, and VX. Relatedmore » work shows capability for long term sealed operation with a catalyst and an agile tuner at a wavelength shift rate of 200 Hz.« less
Ship-in-a-bottle synthesis of amine-functionalized ionic liquids in NaY zeolite for CO2 capture
Yu, Yinghao; Mai, Jingzhang; Wang, Lefu; Li, Xuehui; Jiang, Zheng; Wang, Furong
2014-01-01
CO2 capture on solid materials possesses significant advantages on the operation cost, process for large-scale CO2 capture and storage (CCS) that stimulates great interest in exploring high-performance solid CO2 adsorbents. A ship-in-a-bottle strategy was successfully developed to prepare the [APMIM]Br@NaY host–guest system in which an amine-functionalized ionic liquid (IL), 1-aminopropyl-3-methylimidazolium bromide ([APMIM]Br), was in-situ encapsulated in the NaY supercages. The genuine host-guest systems were thoroughly characterized and tested in CO2 capture from simulated flue gas. It was evidenced the encapsulated ILs are more stable than the bulk ILs. These host–guest systems exhibited superb overall CO2 capture capacity up to 4.94 mmol g−1 and the chemically adsorbed CO2 achieved 1.85 mmol g−1 depending on the [APMIM]Br loading amount. The chemisorbed CO2 can be desorbed rapidly by flushing with N2 gas at 50°C. The optimized [APMIM]Br@NaY system remains its original CO2 capture capacity in multiple cycling tests under prolonged harsh adsorption-desorption conditions. The excellent physicochemical properties and the CO2 capture performance of the host-guest systems offer them great promise for the future practice in the industrial CO2 capture. PMID:25104324
Biochar for reducing GHG emissions in Norway: opportunities and barriers to implementation.
NASA Astrophysics Data System (ADS)
Rasse, Daniel; O'Toole, Adam; Joner, Erik; Borgen, Signe
2017-04-01
Norway has ratified the Paris Agreement with a target nationally determined contribution (NDC) of 40% reduction of greenhouse gas emissions by 2030, with the land sector (AFOLU) expected to contribute to this effort. Increased C sequestration in soil, as argued by the 4 per 1000 initiative, can provide C negative solutions towards reaching this goal. However, Norway has only 3% of its land surface that is cultivated, and management options are fairly limited because the major part is already under managed grasslands, which are assumed to be close to C saturation. By contrast, the country has ample forest resources, allowing Norway to report 25 Mt CO2-eq per year of net CO2 uptake by forest. In addition, the forest industry generates large amounts of unused residues, both at the processing plants but also left decaying on the forest floor. Because of the unique characteristics of the Norwegian land sector, the Norwegian Environment Agency reported as early as 2010 that biochar production for soil C storage had the largest potential for reducing GHG emissions through land-use measures. Although straw is a potential feedstock, the larger quantities of forest residues are a prime candidate for this purpose, as exemplified by our first experimental facility at a production farm, which is using wood chips as feedstock for biochar production. The highly controlled and subsidised Norwegian agriculture might offer a unique test case for implementing incentives that would support farmers for biochar-based C sequestration. However, multiple barriers remain, which mostly revolve around the complexity of finding the right implementation scheme (including price setting) in a changing landscape of competition for biomass (with e.g. bioethanol and direct combustion), methods of verification and variable co-benefits to the farmer. Here we will present some of these schemes, from on-farm biochar production to factories for biochar-compound fertilizers, and discuss barriers and opportunities towards implementation as a soil C sequestration measure.
Microbial genotype-phenotype mapping by class association rule mining.
Tamura, Makio; D'haeseleer, Patrik
2008-07-01
Microbial phenotypes are typically due to the concerted action of multiple gene functions, yet the presence of each gene may have only a weak correlation with the observed phenotype. Hence, it may be more appropriate to examine co-occurrence between sets of genes and a phenotype (multiple-to-one) instead of pairwise relations between a single gene and the phenotype. Here, we propose an efficient class association rule mining algorithm, netCAR, in order to extract sets of COGs (clusters of orthologous groups of proteins) associated with a phenotype from COG phylogenetic profiles and a phenotype profile. netCAR takes into account the phylogenetic co-occurrence graph between COGs to restrict hypothesis space, and uses mutual information to evaluate the biconditional relation. We examined the mining capability of pairwise and multiple-to-one association by using netCAR to extract COGs relevant to six microbial phenotypes (aerobic, anaerobic, facultative, endospore, motility and Gram negative) from 11,969 unique COG profiles across 155 prokaryotic organisms. With the same level of false discovery rate, multiple-to-one association can extract about 10 times more relevant COGs than one-to-one association. We also reveal various topologies of association networks among COGs (modules) from extracted multiple-to-one correlation rules relevant with the six phenotypes; including a well-connected network for motility, a star-shaped network for aerobic and intermediate topologies for the other phenotypes. netCAR outperforms a standard CAR mining algorithm, CARapriori, while requiring several orders of magnitude less computational time for extracting 3-COG sets. Source code of the Java implementation is available as Supplementary Material at the Bioinformatics online website, or upon request to the author. Supplementary data are available at Bioinformatics online.
The Impact of Specific and Complex Trauma on the Mental Health of Homeless Youth.
Wong, Carolyn F; Clark, Leslie F; Marlotte, Lauren
2016-03-01
This study investigates the relative impact of trauma experiences that occurred prior to and since becoming homeless on depressive symptoms, posttraumatic stress disorder (PTSD) symptoms, and self-injurious behaviors among a sample of homeless youth (N = 389). Youth (aged 13 to 25) who had been homeless or precariously housed in the past year completed a survey about housing history, experiences of violence and victimization, mental health, and service utilization. In addition to examining the impact associated with specific trauma types, we also considered the effect of "early-on" poly-victimization (i.e., cumulative number of reported traumas prior to homelessness) and the influence of a compound sexual trauma variable created to represent earlier complex trauma. This created-variable has values ranging from no reported trauma, single trauma, multiple non-sexual traumas, and multiple traumas that co-occurred with sexual abuse. Multivariate analyses revealed that specific traumatic experiences prior to homelessness, including sexual abuse, emotional abuse/neglect, and adverse home environment, predicted greater mental health symptoms. Poly-victimization did not add to the prediction of mental health symptoms after the inclusion of specific traumas. Results with early compound sexual trauma revealed significant differences between lower-order trauma exposures and multiple-trauma exposures. Specifically, experience of multiple traumas that co-occurred with sexual trauma was significantly more detrimental in predicting PTSD symptoms than multiple traumas of non-sexual nature. Findings support the utility of an alternate/novel conceptualization of complex trauma, and support the need to carefully evaluate complex traumatic experiences that occurred prior to homelessness, which can impact the design and implementation of mental health care and services for homeless youth. © The Author(s) 2014.
Reasons to Co-Operate: Co-Operative Solutions for Schools
ERIC Educational Resources Information Center
Roach, Patrick
2013-01-01
The NASUWT's landmark agreement with the Schools Co-operative Society has provided a new spur to co-operation, collaboration and collegiality in schools. Against a background of rapid and radical changes to the education landscape, co-operative schools are viewed by many as a means to maintaining public service ethos and values in education. The…
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.