Application of logic models in a large scientific research program.
O'Keefe, Christine M; Head, Richard J
2011-08-01
It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts mission-driven scientific research focussed on delivering results with relevance and impact for Australia, where impact is defined and measured in economic, environmental and social terms at the national level. The Australian Government has recently signalled an increasing emphasis on performance assessment and evaluation, which in the CSIRO context implies an increasing emphasis on ensuring and demonstrating the impact of its research programs. CSIRO continues to develop and improve its approaches to impact planning and evaluation, including conducting a trial of a program logic approach in the CSIRO Preventative Health National Research Flagship. During the trial, improvements were observed in clarity of the research goals and path to impact, as well as in alignment of science and support function activities with national challenge goals. Further benefits were observed in terms of communication of the goals and expected impact of CSIRO's research programs both within CSIRO and externally. The key lesson learned was that significant value was achieved through the process itself, as well as the outcome. Recommendations based on the CSIRO trial may be of interest to managers of scientific research considering developing similar logic models for their research projects. The CSIRO experience has shown that there are significant benefits to be gained, especially if the project participants have a major role in the process of developing the logic model. Copyright © 2011 Elsevier Ltd. All rights reserved.
Energy Systems Integration Partnerships: NREL + CSIRO
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-12-01
This fact sheet highlights work done at the ESIF in partnership with CSIRO. The Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia's science agency, has teamed up with NREL to evaluate advanced control solutions for integrating solar energy in hybrid distributed generation applications. NREL and CSIRO demonstrated a plug-and play microgrid controller at the ESIF and also tested other control techniques for integrating solar power with Australian and U.S. electrical distribution systems.
Spotsizer: High-throughput quantitative analysis of microbial growth.
Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg
2016-10-01
Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.
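The core measurement Spotsizer automates, locating colonies in a plate image and quantifying their sizes, can be illustrated with a minimal sketch, assuming a simple global threshold and 4-connected component labelling (generic image-analysis steps, not Spotsizer's actual algorithm):

```python
from collections import deque

def measure_colonies(grid, threshold=0.5):
    """Return sorted pixel areas of each 'colony' (4-connected foreground region).

    A generic illustration: threshold the grayscale image, then label
    connected components with a flood fill. Real tools such as Spotsizer
    use more robust, grid-aware segmentation.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                # Breadth-first flood fill over one colony
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return sorted(areas)

# Synthetic 'plate' with two square colonies of different sizes
plate = [[0.0] * 20 for _ in range(20)]
for r in range(2, 5):
    for c in range(2, 5):
        plate[r][c] = 1.0          # 3x3 colony -> area 9
for r in range(10, 16):
    for c in range(10, 16):
        plate[r][c] = 1.0          # 6x6 colony -> area 36
print(measure_colonies(plate))      # [9, 36]
```

Batch processing, as in Spotsizer, amounts to running such a measurement over a directory of images and tabulating the per-position areas.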
Monitoring of Global Acoustic Transmissions: Signal Processing and Preliminary Data Analysis
1991-09-01
Approved by: J. Miller, Thesis Advisor; Ching-San Chiu, Thesis Co-Advisor; Curtis A. Collins, Chairman, Department of Oceanography. ABSTRACT: ...Island Sonobuoys, CSIRO, Australia; Mawson Station Sonobuoys, CSIRO, Australia; Kerguelen Island Sonobuoys, INSU-TAAF, France; Indian Ocean Sonobuoys, NIO, India...three major uncertainties underlying the use of global acoustic transmissions.
From Field to the Web: Management and Publication of Geoscience Samples in CSIRO Mineral Resources
NASA Astrophysics Data System (ADS)
Devaraju, A.; Klump, J. F.; Tey, V.; Fraser, R.; Reid, N.; Brown, A.; Golodoniuc, P.
2016-12-01
Inaccessible samples are an obstacle to the reproducibility of research and may cause waste of time and resources through duplication of sample collection and management. Within the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mineral Resources there are various research communities who collect or generate physical samples as part of their field studies and analytical processes. Materials can be varied and could be rock, soil, plant materials, water, and even synthetic materials. Given the wide range of applications in CSIRO, each researcher or project may follow their own method of collecting, curating and documenting samples. In many cases samples and their documentation are often only available to the sample collector. For example, the Australian Resources Research Centre stores rock samples and research collections dating as far back as the 1970s. Collecting these samples again would be prohibitively expensive and in some cases impossible because the site has been mined out. These samples would not be easily discoverable by others without an online sample catalog. We identify some of the organizational and technical challenges to provide unambiguous and systematic access to geoscience samples, and present their solutions (e.g., workflow, persistent identifier and tools). We present the workflow starting from field sampling to sample publication on the Web, and describe how the International Geo Sample Number (IGSN) can be applied to identify samples along the process. In our test case geoscientific samples are collected as part of the Capricorn Distal Footprints project, a collaboration project between the CSIRO, the Geological Survey of Western Australia, academic institutions and industry partners. We conclude by summarizing the values of our solutions in terms of sample management and publication.
Hansen, David P; Gurney, Phil; Morgan, Gary; Barraclough, Bruce
2011-02-21
The CSIRO (Commonwealth Scientific and Industrial Research Organisation) and the Queensland Government have jointly established the Australian e-Health Research Centre (AEHRC) with the aim of developing innovative information and communication technologies (ICT) for a sustainable health care system. The AEHRC, as part of the CSIRO ICT Centre, has access to new technologies in information processing, wireless and networking technologies, and autonomous systems. The AEHRC's 50 researchers, software engineers and PhD students, in partnership with the CSIRO and clinicians, are developing and applying new technologies for improving patients' experience, building a more rewarding workplace for the health workforce, and improving the efficiency of delivering health care. The capabilities of the AEHRC fall into four broad areas: smart methods for using medical data; advanced medical imaging technologies; new models for clinical and health care interventions; and tools for medical skills development. Since its founding in 2004, new technology from the AEHRC has been adopted within Queensland (eg, a mobile phone-based cardiac rehabilitation program), around Australia (eg, medical imaging technologies) and internationally (eg, our clinical terminology tools).
A soil-canopy scheme for use in a numerical model of the atmosphere: 1D stand-alone model
NASA Astrophysics Data System (ADS)
Kowalczyk, E. A.; Garratt, J. R.; Krummel, P. B.
We provide a detailed description of a soil-canopy scheme for use in the CSIRO general circulation models (GCMs) (CSIRO-4 and CSIRO-9), in the form of a one-dimensional stand-alone model. In addition, the paper documents the model's ability to simulate realistic surface fluxes by comparison with mesoscale model simulations (involving more sophisticated soil and boundary-layer treatments) and observations, and the diurnal range in surface quantities, including extreme maximum surface temperatures. The sensitivity of the model to values of the surface resistance is also quantified. The model represents phase 1 of a longer-term plan to improve the atmospheric boundary layer (ABL) and surface schemes in the CSIRO GCMs.
Carbothermal Production of Magnesium: Csiro's Magsonic™ Process
NASA Astrophysics Data System (ADS)
Prentice, Leon H.; Nagle, Michael W.; Barton, Timothy R. D.; Tassios, Steven; Kuan, Benny T.; Witt, Peter J.; Constanti-Carey, Keri K.
Carbothermal production has been recognized as conceptually the simplest and cleanest route to magnesium metal, but has suffered from technical challenges of development and scale-up. Work by CSIRO has now successfully demonstrated the technology using supersonic quenching of magnesium vapor (the MagSonic™ Process). Key barriers to process development have been overcome: the experimental program has achieved sustained operation, no nozzle blockage, minimal reversion, and safe handling of pyrophoric powders. The laboratory equipment has been operated at industrially relevant magnesium vapor concentrations (>25% Mg) for multiple runs with no blockage. Novel computational fluid dynamics (CFD) modeling of the shock quenching and metal vapor condensation has informed nozzle design and is supported by experimental data. Reversion below 10% has been demonstrated, and magnesium successfully purified (>99.9%) from the collected powder. Safe operating procedures have been developed and demonstrated, minimizing the risk of powder explosion. The MagSonic™ Process is now ready to progress to significantly larger scale and continuous operation.
NREL and CSIRO Validating Advanced Microgrid Control Solution
NREL and Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) are validating an advanced microgrid control solution. This technology helps hybrid microgrids automatically recognize when solar...
Dialogue Systems and Dialogue Management
2016-12-01
dialogue management capability within DST Group's Consensus project. Author: Deeno Burgan, National Security...3.1 Survey Process: This research into dialogue management is part of a joint collaboration between DST Group and CSIRO. The project team comprised...
Application of Logic Models in a Large Scientific Research Program
ERIC Educational Resources Information Center
O'Keefe, Christine M.; Head, Richard J.
2011-01-01
It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…
Identifying and Researching Market Opportunities for New High Technology Products.
ERIC Educational Resources Information Center
Dunstan, Peter
Using a product called the synchro-pulse welder as a case study example, this paper discusses the activities of CSIRO (Commonwealth Scientific and Industrial Research Organisation) in identifying and marketing new high-technology products. A general discussion of CSIRO's market research plans includes two goals to be attained within the next 5…
Research continues on Julia Creek shale oil project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-09-01
CSR Limited and the CSIRO Division of Mineral Engineering in Australia are working jointly on the development of a new retorting process for Julia Creek oil shale. This paper describes the retorting process which integrates a fluid bed combustor with a retort in which heat is transferred from hot shale ash to cold raw shale. The upgrading of shale oil into transport fuels is also described.
The Global ASTER Geoscience and Mineralogical Maps
NASA Astrophysics Data System (ADS)
Abrams, M.
2017-12-01
In 2012, Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) released 17 geoscience mineral maps for the continent of Australia. We are producing the CSIRO geoscience data products for the entire land surface of the Earth. These maps are created from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data, acquired between 2000 and 2008. ASTER, onboard the United States' Terra satellite, is part of NASA's Earth Observing System. This multispectral satellite system has 14 spectral bands spanning the visible and near-infrared (VNIR) at 15 m pixel resolution, the shortwave-infrared (SWIR) at 30 m pixel resolution, and the thermal infrared (TIR) at 90 m pixel resolution. In a polar orbit, ASTER acquires a 60 km swath of data. The CSIRO maps are the first continental-scale mineral maps generated from an imaging satellite designed to measure clays, quartz and other minerals. Besides their obvious use in resource exploration, the data have applicability to climatological studies. Over Australia, these satellite mineral maps improved our understanding of weathering, erosional and depositional processes in the context of changing weather, climate and tectonics. The clay composition map showed how kaolinite has developed over tectonically stable continental crust in response to deep weathering. The same clay composition map, in combination with one sensitive to water content, enabled the discrimination of illite from montmorillonite clays that typically develop in large depositional environments over thin (sinking) continental crust. This product was also used to measure temporal gains/losses of surface clay caused by periodic wind erosion (dust) and rainfall inundation (flood) events. The two-year project is undertaken by JPL with collaboration from CSIRO. JPL has in-house the entire ASTER global archive of Level 1B image data: more than 1,500,000 scenes.
This cloud-screened and vegetation-masked data set will be the basis for creation of the suite of global Geoscience products using all of ASTER's 14 VNIR-SWIR-TIR spectral bands resampled to 100 m pixel resolution. We plan a staged release of the geoscience products through NASA's LPDAAC.
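The pixel counts implied by these subsystem resolutions are easy to check; as a back-of-envelope sketch (our arithmetic, not an ASTER product specification), a 60 km swath spans:

```python
SWATH_M = 60_000  # ASTER swath width in metres

# Across-track pixel counts at each subsystem's resolution
for name, res_m in (("VNIR", 15), ("SWIR", 30), ("TIR", 90)):
    print(f"{name}: {SWATH_M // res_m} pixels across the 60 km swath")
# VNIR: 4000, SWIR: 2000, TIR: 666 (90 m does not divide 60 km evenly)
```

Resampling all bands to a common 100 m grid, as for the global geoscience products, trades VNIR/SWIR detail for a uniform stack across all 14 bands.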
Allocation of R&D Equipment Expenditure Based on Organisation Discipline Profiles
ERIC Educational Resources Information Center
Wells, Xanthe E.; Foster, Nigel; Finch, Adam; Elsum, Ian
2017-01-01
Sufficient and state-of-the-art research equipment is one component required to maintain the research competitiveness of a R&D organisation. This paper describes an approach to inform more optimal allocation of equipment expenditure levels in a large and diverse R&D organisation, such as CSIRO. CSIRO is Australia's national science agency,…
Student Outcomes from Engaging in Open Science Investigations
ERIC Educational Resources Information Center
Hubber, Peter; Darby, Linda; Tytler, Russell
2010-01-01
This is the first of two papers that draw on a study of the national BHP Billiton Science Awards, a peak competition funded by BHP Billiton and administered by CSIRO. BHP Billiton, CSIRO and ASTA together oversee the strategic direction of the Awards. This paper reports an analysis focussed on the outcomes for students of participation in open…
Searching and Filtering Tweets: CSIRO at the TREC 2012 Microblog Track
2012-11-01
stages. We first evaluate the effect of tweet corpus pre-processing in vanilla runs (no query expansion), and then assess the effect of query expansion...Effect of a vanilla run on the D4 index (both real-time and non-real-time), and query expansion methods based on the submitted runs for two sets of queries
Atmospheric Hydrogen (H2) Concentrations from the CSIRO GASLAB Flask Sampling Network (1992 - 2001)
Steele, L. P. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Atmospheric Research, Aspendale, Victoria, Australia; Krummel, P. B. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Atmospheric Research, Aspendale, Victoria, Australia; Langenfelds, R. L. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Atmospheric Research, Aspendale, Victoria, Australia
2003-01-01
Air samples were collected from nine sites of the CSIRO GASLAB Flask Sampling Network for the purpose of monitoring atmospheric hydrogen (H2) concentrations. The listed data were obtained from flask air samples returned to the CSIRO GASLAB for analysis. Typical sample storage times ranged from days to weeks for some sites (e.g., Cape Grim) to as much as one year for Macquarie Island and the Antarctic sites. Experiments carried out to test for any change in sample H2 mixing ratio during storage have shown no consistent and systematic drift in these flask types over test periods of several months to years (Cooper et al., 1999). An annual cycle of H2 is evident, reflecting the seasonal nature of some of the major sources and sinks (Novelli et al., 1999).
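An annual cycle like the one described can be characterized by fitting a first harmonic to monthly means; the sketch below recovers mean and seasonal amplitude via Fourier coefficients (illustrative synthetic data, not GASLAB values, and real flask records are irregularly sampled and need a proper regression):

```python
import math

def annual_harmonic(monthly):
    """Fit mean and first-harmonic amplitude to 12 uniformly spaced monthly means.

    With uniform sampling over one full period, the Fourier coefficients
    are exact least-squares estimates of the first harmonic.
    """
    n = len(monthly)
    mean = sum(monthly) / n
    a = sum(y * math.cos(2 * math.pi * t / n) for t, y in enumerate(monthly)) * 2 / n
    b = sum(y * math.sin(2 * math.pi * t / n) for t, y in enumerate(monthly)) * 2 / n
    return mean, math.hypot(a, b)   # (annual mean, seasonal amplitude)

# Synthetic H2-like record: 530 ppb mean with a 15 ppb seasonal swing
data = [530 + 15 * math.cos(2 * math.pi * t / 12) for t in range(12)]
mean, amp = annual_harmonic(data)
print(round(mean, 1), round(amp, 1))  # 530.0 15.0
```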
Delta 13C in CO2 at Alert, NWT, Canada (June 1991 - December 2001)
Allison, C. E. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia; Francey, R. J. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia; Krummel, P. B. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia
2003-04-01
Measurements have been made on air collected in flasks at Alert, Canada, through the CSIRO GASLAB worldwide network. Flasks are filled with air at Alert and returned to the CSIRO GASLAB for analysis; typical sample storage times for flasks collected at Alert range from a few weeks to a few months. No significant effect on the stable carbon isotopic composition, δ13C, has been detected as a consequence of the sample storage time.
Steele, L. P. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia; Krummel, P. B. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia; Langenfelds, R. L. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia
2003-01-01
The listed data were obtained from flask air samples returned to the CSIRO GASLAB for analysis. Typical sample storage times ranged from days to weeks for some sites (e.g., Cape Grim) to as much as one year for Macquarie Island and the Antarctic sites. Experiments carried out to test for any change in sample CH4 mixing ratio during storage have shown no drift to within detection limits over test periods of several months to years (Cooper et al., 1999).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake R.
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is Australia's national science agency. CSIRO received funding from the Australian Solar Institute (ASI) for the United States-Australia Solar Energy Collaboration (USASEC) project 1-USO032, Plug and Play Solar Power: Simplifying the Integration of Solar Energy in Hybrid Applications (the Broader Project). The ASI operated from August 2009 to December 2012 before being merged into the Australian Renewable Energy Agency (ARENA). The Broader Project sought to simplify the integration, accelerate the deployment, and lower the cost of solar energy in hybrid distributed generation applications by creating plug-and-play solar technology. CSIRO worked with the National Renewable Energy Laboratory (NREL) as set out in a Joint Work Statement to review communications protocols relevant to plug-and-play technology and perform prototype testing in its Energy Systems Integration Facility (ESIF). For the avoidance of doubt, this CRADA did not cover the whole of the Broader Project and related only to the work described in the Joint Work Statement, which was carried out by NREL.
AIRSAR deployment in Australia, September 1993: Management and objectives
NASA Technical Reports Server (NTRS)
Milne, A. K.; Tapley, I. J.
1993-01-01
Past co-operation between the NASA Earth Science and Applications Division, CSIRO, and Australian university researchers has led to a number of mutually beneficial activities. These include the deployment of the C-130 aircraft with TIMS, AIS, and NS001 sensors in Australia in 1985; collaboration between scientists from the USA and Australia in soils research which has extended over the past decade; and the development of imaging spectroscopy, in which CSIRO and NASA have worked closely together and regularly exchanged visiting scientists. In May this year TIMS was flown in eastern Australia on board a CSIRO-owned aircraft together with a CSIRO-designed CO2 laser spectrometer. The Science Investigation Team for the Shuttle Imaging Radar (SIR-C) Program includes one Australian Principal Investigator and ten Australian co-investigators who will work on nine projects related to studying land and near-shore surfaces after the Shuttle flight scheduled for April 1994. This long-term joint collaboration was progressed further with the deployment of AIRSAR down under in September 1993. During a five-week period, the DC-8 aircraft flew in all Australian states and collected data from some 65 individual test sites.
Magsonic™ Carbothermal Technology Compared with the Electrolytic and Pidgeon Processes
NASA Astrophysics Data System (ADS)
Prentice, Leon H.; Haque, Nawshad
A broad technology comparison of carbothermal magnesium production with present technologies has not been previously presented. In this paper a comparative analysis of CSIRO's MagSonic™ process is made with the electrolytic and Pidgeon processes. The comparison covers energy intensity (GJ/tonne Mg), labor intensity (person-hours/tonne Mg), capital intensity (USD/tonne annual Mg installed capacity), and Global Warming Potential (GWP, tonnes CO2-equivalent/tonne Mg). Carbothermal technology is advantageous on all measures except capital intensity (where it is roughly twice the capital cost of a similarly-sized Pidgeon plant). Carbothermal and electrolytic production can have comparatively low environmental impacts, with typical emissions one-sixth those of the Pidgeon process. Despite recent progress, the Pidgeon process depends upon abundant energy and labor combined with few environmental constraints. Pressure is expected to increase on environmental constraints and labor and energy costs over the coming decade. Carbothermal reduction technology appears to be competitive for future production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkham, R.; Siddons, D.; Dunn, P.A.
2010-06-23
The Maia detector system is engineered for energy-dispersive X-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10^7/s, integrated scanning of samples with pixel transit times as small as 50 µs, high-definition images of 10^8 pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system, developed by CSIRO and BNL, combines a planar silicon 384-detector array; application-specific integrated circuits for pulse shaping, peak detection and sampling; and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.
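The quoted figures imply useful per-pixel statistics even at the shortest dwell; the arithmetic below is a rough order-of-magnitude sketch (our calculation, not a Maia specification):

```python
total_rate = 1e7   # detected events per second, order of magnitude
dwell = 50e-6      # shortest pixel transit time, 50 microseconds
pixels = 1e8       # pixels in a high-definition image

# Events accumulated in one pixel transit at the full detector rate
events_per_transit = total_rate * dwell

# Lower bound on scan time: one transit per pixel, no overhead
scan_seconds = pixels * dwell

print(int(events_per_transit))              # 500 events per 50 us transit
print(f"{scan_seconds / 3600:.2f} hours")   # minimum time to cover 1e8 pixels
```

So even a single 50 µs transit collects hundreds of events at 10^7/s, which is what makes megapixel-scale elemental imaging practical.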
Introduction to the Special Issue on Digital Signal Processing in Radio Astronomy
NASA Astrophysics Data System (ADS)
Price, D. C.; Kocz, J.; Bailes, M.; Greenhill, L. J.
2016-03-01
Advances in astronomy are intimately linked to advances in digital signal processing (DSP). This special issue is focused upon advances in DSP within radio astronomy. The trend within that community is to use off-the-shelf digital hardware where possible and leverage advances in high performance computing. In particular, graphics processing units (GPUs) and field programmable gate arrays (FPGAs) are being used in place of application-specific circuits (ASICs); high-speed Ethernet and Infiniband are being used for interconnect in place of custom backplanes. Further, to lower hurdles in digital engineering, communities have designed and released general-purpose FPGA-based DSP systems, such as the CASPER ROACH board, ASTRON Uniboard, and CSIRO Redback board. In this introductory paper, we give a brief historical overview, a summary of recent trends, and provide an outlook on future directions.
INTEGRAL serendipitous upper limits on FRB180301
NASA Astrophysics Data System (ADS)
Savchenko, V.; Panessa, F.; Ferrigno, C.; Keane, E.; Bazzano, A.; Burgay, M.; Kuulkers, E.; Petroff, E.; Ubertini, P.; Diehl, R.
2018-03-01
On 2018 March 1 at T0 = 07:34:19.76 (UTC), a Fast Radio Burst (FRB180301) was detected during Breakthrough Listen observations with the 21-cm multibeam receiver of the CSIRO Parkes radio telescope (see ATel #11376).
Woods, Lucy A; Dolezal, Olan; Ren, Bin; Ryan, John H; Peat, Thomas S; Poulsen, Sally-Ann
2016-03-10
Fragment-based drug discovery (FBDD) is contingent on the development of analytical methods to identify weak protein-fragment noncovalent interactions. Herein we have combined an underutilized fragment screening method, native state mass spectrometry, together with two proven and popular fragment screening methods, surface plasmon resonance and X-ray crystallography, in a fragment screening campaign against human carbonic anhydrase II (CA II). In an initial fragment screen against a 720-member fragment library (the "CSIRO Fragment Library") seven CA II binding fragments, including a selection of nonclassical CA II binding chemotypes, were identified. A further 70 compounds that comprised the initial hit chemotypes were subsequently sourced from the full CSIRO compound collection and screened. The fragment results were extremely well correlated across the three methods. Our findings demonstrate that there is a tremendous opportunity to apply native state mass spectrometry as a complementary fragment screening method to accelerate drug discovery.
Humphery-Smith, I; Cybinski, D H; Byrnes, K A; St George, T D
1991-10-01
Duplicate neutralization tests were done on 401 avian and 101 human sera from island residents collected in the Coral Sea and on Australia's Great Barrier Reef against 19 known arboviruses. Antibodies to a potentially harmful flavivirus, Gadget's Gully virus, were equally present (4%) in both avian and human sera. Antibodies to another flavivirus, Murray Valley encephalitis virus, and an ungrouped isolate, CSIRO 1499, were also present in both populations with non-significantly different incidences. Antibodies to Upolu, Johnston Atoll, Lake Clarendon, Taggert, Saumarez Reef and CSIRO 264 viruses were restricted to seabirds. Island residents with antibodies to Ross River and Barmah Forest viruses are thought to have been exposed to these viruses on the mainland, as antibody to both viruses was absent among seabirds. These results indicate that consideration should be given to tick-associated arboviruses as potential public health hazards on islands where both seabird and human activities interact.
Overview of the CSIRO Australian Animal Health Laboratory.
Lowenthal, John
2016-01-01
Emerging infectious diseases arising from livestock and wildlife pose serious threats to global human health, as shown by a series of continuous outbreaks involving highly pathogenic influenza, SARS, Ebola and MERS. The risk of pandemics and bioterrorism threats is ever present and growing, but our ability to combat them is limited by the lack of available vaccines, therapeutics and rapid diagnostics. The use of high bio-containment facilities, such as the CSIRO Australian Animal Health Laboratory, plays a key role studying these dangerous pathogens and facilitates the development of countermeasures. To combat diseases like MERS, we must take a holistic approach that involves the development of early biomarkers of infection, a suite of treatment options (vaccines, anti-viral drugs and antibody therapeutics) and appropriate animal models to test the safety and efficacy of candidate treatments. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
Fabrication and metrology of km-scale radii on surfaces of master tooling
NASA Astrophysics Data System (ADS)
Leistner, Achim J.; Oreb, Bozenko F.; Seckold, Jeffrey A.; Walsh, Christopher J.
1999-08-01
The Laser Interferometer Gravitational-wave Observatory (LIGO) core optical components have been manufactured by CSIRO. These optical substrates are optically polished on a lap surface that is made of Teflon coated onto a thick rigid faceted Zerodur base. To produce the km-scale radii (> 10 km) on these substrates the lap surface is shaped by abrading it with a fine ground silica plate whose radius of curvature corresponds to the one specified for the LIGO component. The plates are measured by a commercial phase stepping interferometer which is used in a grazing incidence arrangement. We describe the process of shaping and measuring the conditioning plates and laps.
Li, Kangkang; Yu, Hai; Tade, Moses; Feron, Paul; Yu, Jingwen; Wang, Shujuan
2014-06-17
An advanced NH3 abatement and recycling process that makes extensive use of the waste heat in flue gas was proposed to solve the problems of ammonia slip, NH3 makeup, and flue gas cooling in the ammonia-based CO2 capture process. The rigorous rate-based model, RateFrac in Aspen Plus, was validated thermodynamically and kinetically against experimental data from the open literature and from CSIRO pilot trials at Munmorah Power Station, Australia, respectively. After a thorough sensitivity analysis and process improvement, the NH3 recycling efficiency reached as high as 99.87%, and the NH3 exhaust concentration was only 15.4 ppmv. Most importantly, the energy consumption of the NH3 abatement and recycling system was only 59.34 kJ/kg CO2 of electricity. Evaluation of the mass balance and temperature stability shows that this NH3 recovery process is technically effective and feasible. The process is therefore a promising prospect for industrial application.
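For context, the reported 59.34 kJ/kg CO2 of electricity converts to roughly 16.5 kWh per tonne of CO2 captured; the conversion is a one-liner (our unit conversion, not a figure from the paper):

```python
KJ_PER_KWH = 3600.0  # 1 kWh = 3600 kJ

energy_kj_per_kg = 59.34  # reported electricity demand, kJ per kg CO2
kwh_per_tonne = energy_kj_per_kg * 1000 / KJ_PER_KWH

print(round(kwh_per_tonne, 2))  # 16.48 kWh per tonne CO2
```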
Review of “Managing Arsenic in the Environment: From Soil to Human Health”
This is a book review of "Managing Arsenic in the Environment: From Soil to Human Health," R. Naidu, E. Smith, G. Owens, P. Bhattacharya, and P. Nadebaum eds., CSIRO Publishing, Melbourne, Australia, 656 pp.,
NASA Astrophysics Data System (ADS)
Kala, Jatin; Lyons, Tom J.; Abbs, Deborah J.; Foster, Ian J.
2010-05-01
Heat stress, frost, and water stress events have significant impacts on grain quality and production within the agricultural region (wheat-belt) of Southwest Western Australia (SWWA) (Cramb, 2000), and understanding how the frequency and intensity of these events will change in the future is crucial for management purposes. Hence, the Regional Atmospheric Modeling System (RAMS Version 6.0) (Pielke et al., 1992) is used to simulate the past 10 years of the climate of SWWA at a 20 km grid resolution by down-scaling the 6-hourly 1.0 by 1.0 degree National Center for Environmental Prediction Final Analyses from December 1999 to present. Daily minimum and maximum temperatures, as well as daily rainfall, are validated against observations. Simulations of future climate are carried out by down-scaling the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mark 3.5 General Circulation Model (Gordon et al., 2002) for 10 years (2046-2055) under the SRES A2 scenario using the Cubic Conformal Atmospheric Model (CCAM) (McGregor and Dix, 2008). The 6-hourly CCAM output is then downscaled to a 20 km resolution using RAMS. Changes in extreme events are discussed within the context of the continued viability of agriculture in SWWA.
References: Cramb, J. (2000) Climate in relation to agriculture in south-western Australia. In: The Wheat Book (Eds W. K. Anderson and J. R. Garlinge). Bulletin 4443. Department of Agriculture, Western Australia. Gordon, H. B., Rotstayn, L. D., McGregor, J. L., Dix, M. R., Kowalczyk, E. A., O'Farrell, S. P., Waterman, L. J., Hirst, A. C., Wilson, S. G., Collier, M. A., Watterson, I. G., and Elliott, T. I. (2002) The CSIRO Mk3 Climate System Model. Aspendale: CSIRO Atmospheric Research (CSIRO Atmospheric Research technical paper no. 60), 130 pp. McGregor, J. L., and Dix, M. R. (2008) An updated description of the conformal-cubic atmospheric model. In: High Resolution Simulation of the Atmosphere and Ocean, Hamilton, K. and Ohfuchi, W., Eds., Springer, 51-76. Pielke, R. A., Cotton, W. R., Walko, R. L., Tremback, C. J., Lyons, W. A., Grasso, L. D., Nicholls, M. E., Moran, M. D., Wesley, D. A., Lee, T. J., and Copeland, J. H. (1992) A comprehensive meteorological modeling system - RAMS. Meteorol. Atmos. Phys., 49, 69-91.
Research Using ASDC Data Products
Atmospheric Science Data Center
2013-04-18
Research using ASDC Data Products. "An investigation into the performance of ... data" (PDF, 4 MB), Stuart A. Young, CSIRO Atmospheric Research, Aspendale, VIC, Australia. Photosynthetically Active ...
Scientific Writing = Thinking in Words
USDA-ARS's Scientific Manuscript database
Ensuring that research results are reported accurately and effectively is an eternal challenge for scientists. The book Scientific Writing = Thinking in Words (David Lindsay, 2011, CSIRO Publishing) is a primer for researchers who seek to improve their impact through better written (and oral) presentat...
NASA Astrophysics Data System (ADS)
Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.
2012-12-01
The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open formats using international standards such as GeoSciML. A VGL user employs a web mapping interface to discover the data sources and apply spatial and attribute filters to define a subset. Once the data are selected, the user is not required to download them. VGL collates the service query information for later in the processing workflow, where it will be staged directly to the computing facilities. The combination of deferring data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper.
This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with access to an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both the research communities and commercial suppliers (e.g., Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used, for example, for natural hazards, satellite processing, soil geochemistry, climate modelling, or agricultural crop modelling.
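The deferred-download pattern described above (collate service queries now, stage the data on the compute node later) can be sketched in a few lines. The class name, URLs, and filter syntax below are illustrative placeholders, not VGL's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class JobRequest:
    """Collates OGC service queries instead of downloading data up front."""
    queries: List[str] = field(default_factory=list)

    def add_query(self, service_url: str, filter_expr: str) -> None:
        # Only the query description is stored; no data moves yet.
        self.queries.append(f"{service_url}?filter={filter_expr}")

    def stage(self, fetch: Callable[[str], bytes]) -> List[bytes]:
        # Executed later, on the cloud compute node, close to the CPUs.
        return [fetch(q) for q in self.queries]

job = JobRequest()
job.add_query("https://example.org/wfs", "region=yilgarn")
job.add_query("https://example.org/wcs", "layer=gravity")
# A stand-in fetcher; a real deployment would issue HTTP GETs here.
payloads = job.stage(lambda url: url.encode())
print(len(payloads))  # 2
```

The design point is that the expensive step (fetching) is decoupled from selection, so it can run wherever the processing happens.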
NASA Astrophysics Data System (ADS)
Oslowski, S.; Shannon, R. M.; Jameson, Andrew; Sarkissian, J. M.; Bailes, M.; Andreoni, I.; Bhat, N. D. R.; Coles, W. A.; Dai, S.; Dempsey, J.; Hobbs, G.; Keith, M. J.; Kerr, M.; Manchester, R. N.; Lasky, P. D.; Levin, Y.; Parthasarathy, A.; Ravi, V.; Reardon, D. J.; Rosado, P. A.; Russell, C. J.; Spiewak, R.; Van Straten, W.; Toomey, L.; Wang, J. B.; Wen, L.; You, X.-P.; Zhang, L.; Zhang, S.; Zhu, X.-J.
2018-03-01
The Parkes Pulsar Timing Array (Manchester et al. 2013) project monitors pulse times of arrival for 24 millisecond pulsars in the Galaxy on a fortnightly cadence using the multibeam receiver on the CSIRO 64-m Parkes Telescope.
NASA Astrophysics Data System (ADS)
Oslowski, S.; Shannon, R. M.; Jameson, Andrew; Hobbs, G.; Bailes, M.; Bhat, N. D. R.; Coles, W. A.; Dai, S.; Dempsey, J.; Keith, M. J.; Kerr, M.; Manchester, R. N.; Lasky, P. D.; Levin, Y.; Parthasarathy, A.; Ravi, V.; Reardon, D. J.; Russell, C. J.; Sarkissian, J. M.; Spiewak, R.; Van Straten, W.; Toomey, L.; Wang, J. B.; Wen, L.; You, X.-P.; Zhang, L.; Zhang, S.; Zhu, X.-J.
2018-03-01
The Parkes Pulsar Timing Array (Manchester et al. 2013) project monitors pulse times of arrival for 24 millisecond pulsars in the Galaxy on a fortnightly cadence using the multibeam receiver on the CSIRO 64-m Parkes Telescope.
NGSANE: a lightweight production informatics framework for high-throughput data analysis.
Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C
2014-05-15
The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components become outdated rapidly because of evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot-swappable modular components, as opposed to the more rigid program-call wrapping by higher-level languages implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for setup and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. NGSANE is implemented in bash and publicly available under the BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Contact: Denis.Bauer@csiro.au. Supplementary data are available at Bioinformatics online.
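NGSANE itself is written in bash; as a language-neutral illustration of the hot-swappable-module idea the abstract describes, a registry mapping stage names to interchangeable implementations might look like the sketch below (the module names are hypothetical, not NGSANE's actual modules):

```python
# Each pipeline stage is a named, swappable module; a registry maps stage
# names to implementations, so upgrading an aligner means changing one entry.
from typing import Callable, Dict, List

Stage = Callable[[str], str]

def trim_v1(data: str) -> str:
    return data.strip()

def align_bwa(data: str) -> str:       # hypothetical module
    return f"aligned({data})"

def align_bowtie(data: str) -> str:    # hypothetical replacement module
    return f"bowtie({data})"

REGISTRY: Dict[str, Stage] = {"trim": trim_v1, "align": align_bwa}

def run_pipeline(raw: str, stages: List[str]) -> str:
    out = raw
    for name in stages:
        out = REGISTRY[name](out)      # modules are resolved at run time
    return out

print(run_pipeline("  reads  ", ["trim", "align"]))  # aligned(reads)
# Hot-swap the aligner without touching the pipeline definition:
REGISTRY["align"] = align_bowtie
print(run_pipeline("  reads  ", ["trim", "align"]))  # bowtie(reads)
```

The pipeline definition never names a concrete tool, which is what keeps individual components replaceable as methods evolve.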
The HEPEX Seasonal Streamflow Forecast Intercomparison Project
NASA Astrophysics Data System (ADS)
Wood, A. W.; Schepen, A.; Bennett, J.; Mendoza, P. A.; Ramos, M. H.; Wetterhall, F.; Pechlivanidis, I.
2016-12-01
The Hydrologic Ensemble Prediction Experiment (HEPEX; www.hepex.org) has launched an international Seasonal Streamflow Forecast Intercomparison Project (SSFIP) with the goal of broadening community knowledge about the strengths and weaknesses of various operational approaches being developed around the world. While some of these approaches have existed for decades (e.g., Ensemble Streamflow Prediction (ESP) in the United States and elsewhere), recent years have seen the proliferation of new operational and experimental streamflow forecasting approaches. These have largely been developed independently in each country, so it is difficult to assess whether the approaches employed in some centers offer more promise for development than others. This motivates us to establish a forecasting testbed to facilitate a diagnostic evaluation of a range of different streamflow forecasting approaches and their components over a common set of catchments, using a common set of validation methods. Rather than prescribing a set of scientific questions from the outset, we are letting the hindcast results and notable differences in methodologies on a watershed-specific basis motivate more targeted analyses and sub-experiments that may provide useful insights. The initial pilot of the testbed involved two approaches - CSIRO's Bayesian joint probability (BJP) and NCAR's sequential regression - for two catchments, each designated by one of the teams (the Murray River, Australia, and the Hungry Horse reservoir drainage area, USA). Additional catchments and approaches are being added to the testbed. To support this, CSIRO and NCAR have developed data and analysis tools, data standards, and protocols to formalize the experiment. These include requirements for cross-validation, verification, reference climatologies, and common predictands. This presentation describes the SSFIP experiments, pilot basin results, and scientific findings to date.
The Future of the "Research" Library in an Age of Information Abundance and Lifelong Learning
ERIC Educational Resources Information Center
Wainwright, Eric
2005-01-01
Traditionally, several types of library in Australia have been perceived as having a "research" role, notably the National Library, the state libraries, and the university libraries, together with a few special libraries serving research-producing organisations such as the CSIRO. This paper provides some speculations on how some…
NASA Astrophysics Data System (ADS)
Showstack, Randy
After global fears of computer snafus prompted billions of dollars of remedial action, the Y2K bug appears to have vanished with barely a trace. But on January 1, taxonomists with the entomology division of Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) reported the discovery of an insect whose scientific and common names will be the "millennium bug."
Effect of tree-growth rate on papermaking fibre properties
J. Y. Zhu; D. W. Vahey; C. T. Scott; G. C. Myers
2008-01-01
Measurements of wood density and anatomical properties of wood disks were conducted by SilviScan (CSIRO Australia) and a new imaging technique. The disks included red pine (Pinus resinosa Ait.) obtained from a never-thinned experimental forest with five different plantation densities and Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.) Franco) and lodgepole...
Reducing uncertainty in the climatic interpretations of speleothem δ18O
NASA Astrophysics Data System (ADS)
Jex, C. N.; Phipps, S. J.; Baker, A.; Bradley, C.
2013-05-01
We explore two principal areas of uncertainty associated with paleoclimate reconstructions from speleothem δ18O (δ18Ospel): potential non-stationarity in relationships between local climate and larger-scale atmospheric circulation, and routing of water through the karst aquifer. Using a δ18Ospel record from Turkey, the CSIRO Mk3L climate system model and the KarstFOR karst hydrology model, we confirm the stationarity of relationships between cool season precipitation and regional circulation dynamics associated with the North Sea-Caspian pattern since 1 ka. Stalagmite δ18O is predicted for the last 500 years, using precipitation and temperature output from the CSIRO Mk3L model and synthetic δ18O of precipitation as inputs for the KarstFOR model. Interannual variability in the δ18Ospel record is captured by KarstFOR, but we cannot reproduce the isotopically lighter conditions of the sixteenth to seventeenth centuries. We argue that forward models of paleoclimate proxies (such as KarstFOR) embedded within isotope-enabled general circulation models are now required.
NASA Astrophysics Data System (ADS)
Devaraju, Anusuriya; Klump, Jens; Tey, Victor; Fraser, Ryan
2016-04-01
Physical samples such as minerals, soil, rocks, water, air and plants are important observational units for understanding the complexity of our environment and its resources. They are usually collected and curated by different entities, e.g., individual researchers, laboratories, state agencies, or museums. Persistent identifiers may facilitate access to physical samples that are scattered across various repositories. They are essential for locating samples unambiguously and for sharing their associated metadata and data systematically across the Web. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying physical samples. The IGSNs of physical samples are registered by end-users (e.g., individual researchers, data centers and projects) through allocating agents. Allocating agents are the institutions acting on behalf of the implementing organization (IGSN e.V.). The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is one of the allocating agents in Australia. To implement IGSN in our organisation, we developed a RESTful service and a metadata model. The web service enables a client to register sub-namespaces and multiple samples, and to retrieve samples' metadata programmatically. The metadata model provides a framework in which different types of samples may be represented. It is generic and extensible, so it may be applied in the context of multi-disciplinary projects. The metadata model has been implemented as an XML schema and a PostgreSQL database. The schema is used to handle sample registration requests and to disseminate their metadata, whereas the relational database is used to preserve the metadata records. The metadata schema leverages existing controlled vocabularies to minimize the scope for error and incorporates some simplifications to reduce the complexity of the schema implementation.
The solutions developed have been applied and tested in the context of two sample repositories in CSIRO, the Capricorn Distal Footprints project and the Rock Store.
Red mud flocculation process in alumina production
NASA Astrophysics Data System (ADS)
Fedorova, E. R.; Firsov, A. Yu
2018-05-01
The thickening and washing of red mud is a bottleneck of alumina production. Existing automated control systems for the thickening process stabilize the parameters of the primary technological circuits of the thickener. A current direction of research is the creation and improvement of model-based control systems for the thickening process, but known models do not fully account for perturbing effects, in particular the particle size distribution in the feed and the floccule size distribution after aggregation in the feed barrel. The article covers the basic concepts and terms used in formulating the population balance algorithm. The population balance model is implemented in the MATLAB environment. The result of the simulation is the particle size distribution after flocculation; the model predicts the floccule size distribution after aggregation of red mud in the feed barrel. Mud from Jamaican bauxite served as the industrial red mud sample, and a Cytec Industries HX-3000 series flocculant at a concentration of 0.5% served as the flocculant. Model constants obtained in a tubular tank in the laboratories of CSIRO (Australia) were used in the simulation.
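A minimal discrete population balance for aggregation can illustrate the idea: floc birth by coalescence of two smaller flocs, and floc death by coalescence with anything. The sketch below uses a constant collision kernel and explicit Euler stepping, far simpler than the CSIRO-calibrated model the article describes:

```python
import numpy as np

def step_population(n, K=1.0, dt=1e-3):
    """One explicit Euler step of the discrete Smoluchowski balance.
    n[s-1] is the number density of flocs built from s primary particles."""
    m = len(n)
    birth = np.zeros(m)
    for s in range(2, m + 1):
        # Two flocs of sizes a and s-a coalesce into one floc of size s.
        birth[s - 1] = 0.5 * K * sum(n[a - 1] * n[s - a - 1] for a in range(1, s))
    death = K * n * n.sum()  # loss of each size class by coalescing with anything
    return n + dt * (birth - death)

# Start from a monodisperse feed of primary particles (all size 1).
n = np.zeros(8)
n[0] = 1.0
for _ in range(500):
    n = step_population(n)
# The total floc count drops as aggregates form; for a constant kernel the
# analytic count at t = 0.5 is 1/(1 + Kt/2) = 0.8, which the scheme approaches.
print(round(float(n.sum()), 2))
```

Size classes beyond the truncation (here, eight primary particles) are dropped, which is acceptable only while large flocs remain rare; a production model would use many more classes and a physically motivated kernel.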
USDA-ARS's Scientific Manuscript database
Impacts of climate change on hydrology, soil erosion, and wheat production during 2010-2039 at El Reno in central Oklahoma, USA, were simulated using the Water Erosion Prediction Project (WEPP) model. Projections from four GCMs (CCSR/NIES, CGCM2, CSIRO-Mk2, and HadCM3) under three emissions scenari...
Effect of tree-growth rate on papermaking fiber properties
Junyong Zhu; David W. Vahey; C. Tim Scott; Gary C. Myers
2007-01-01
Measurements of wood density and anatomical properties of wood disks were conducted by SilviScan (CSIRO Australia) and a new imaging technique. The disks included red pine obtained from a never-thinned experimental forest with five different plantation densities and Douglas-fir and lodgepole pine (one normal growth and the other suppressed growth) both supplied by a...
Equipment for testing automotive lead/acid batteries under SAE J240a conditions
NASA Astrophysics Data System (ADS)
Hamilton, J. A.; Rand, D. A. J.
Battery cycling equipment has been designed and constructed to test lead/acid batteries according to the Society of Automotive Engineers (SAE) J240a standard. This life test simulates automotive service, where the battery operates in a voltage-regulated charging system. The CSIRO design uses a master/slave concept to reduce both construction time and cost.
ERIC Educational Resources Information Center
Dawson, Vaille; Moore, Leah
2011-01-01
In 2007, a new upper secondary course, Earth and Environmental Science (EES) was introduced in Western Australia. The development and implementation of the course was supported by Earth Science Western Australia (ESWA), a consortium of universities, the CSIRO and other organisations. The role of ESWA is to support the teaching of earth science in…
Shrestha, Sangam; Chapagain, Ranju; Babel, Mukand S
2017-12-01
Northeast Thailand makes a significant contribution to the fragrant, high-quality rice consumed within Thailand and exported to other countries. The majority of rice is produced under rainfed conditions, while irrigation water is supplied to rice growers in the dry season. This paper quantifies the potential impact of climate change on the water footprint of rice production using the DSSAT (CERES-Rice) crop growth model for the Nam Oon Irrigation Project located in Northeast Thailand. Crop phenology data were obtained from field experiments and used to set up and validate the CERES-Rice model. The present and future water footprint of rice, the amount of water evaporated during the growing period, was calculated under current and future climatic conditions for the irrigation project area. The outputs of three regional climate models (ACCESS-CSIRO-CCAM, CNRM-CM5-CSIRO-CCAM, and MPI-ESM-LR-CSIRO-CCAM) for scenarios RCP 4.5 and RCP 8.5 were downscaled using the quantile mapping method. Simulation results show a considerably high increase in the water footprint of the KDML-105 and RD-6 rice varieties, ranging from 56.5 to 92.2% and 27.5 to 29.7%, respectively, for the future period under RCP 4.5, and 71.4 to 76.5% and 27.9 to 37.6%, respectively, under RCP 8.5, relative to the simulated baseline water footprint for the period 1976-2005. Conversely, the ChaiNat-1 variety shows a decrease in projected water footprint of 42.1 to 39.4% under RCP 4.5 and 38.5 to 31.7% under RCP 8.5. The results also indicate a large increase in the future blue water footprint, which will consequently cause a high increment in the irrigation water requirement needed to meet the crop's evaporative demand. The research outcome highlights the importance of proper adaptation strategies to reduce or maintain acceptable water footprints under future climate conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
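Empirical quantile mapping of the kind used to downscale the regional climate model outputs replaces each raw model value with the observed value at the same quantile of the model's historical distribution. A sketch on synthetic data (not the study's series; the gamma parameters and bias factor are illustrative):

```python
import numpy as np

def quantile_map(obs, model_hist, model_fut):
    """Empirical quantile mapping: each future model value is replaced by
    the observed value at the same quantile of the historical model CDF."""
    model_hist = np.sort(np.asarray(model_hist))
    obs = np.sort(np.asarray(obs))
    # Quantile of each future value within the historical model distribution.
    q = np.searchsorted(model_hist, model_fut) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs, q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 1000)            # "observed" rainfall-like series
hist = obs * 0.7                           # model with a systematic dry bias
fut = rng.gamma(2.0, 3.0, 1000) * 0.7      # biased future projection
corrected = quantile_map(obs, hist, fut)
print(corrected.mean() > fut.mean())       # True: the dry bias is removed
```

Because the mapping is built from historical quantiles, it corrects the full distribution (wet and dry extremes) rather than only the mean, which matters for water-footprint projections.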
NASA Astrophysics Data System (ADS)
Rayson, Matthew D.; Ivey, Gregory N.; Jones, Nicole L.; Fringer, Oliver B.
2018-02-01
We apply the unstructured grid hydrodynamic model SUNTANS to investigate the internal wave dynamics around Scott Reef, Western Australia, an isolated coral reef atoll located on the edge of the continental shelf in water depths of 500 m and more. The atoll is subject to strong semi-diurnal tidal forcing and consists of two relatively shallow lagoons separated by a 500 m deep, 2 km wide and 15 km long channel. We focus on the dynamics in this channel, as the internal tide-driven flow and resulting mixing is thought to be a key mechanism controlling heat and nutrient fluxes into the reef lagoons. We use an unstructured grid to discretise the domain and capture both the complex topography and the range of internal wave length scales in the channel flow. The modelled internal wave field shows super-tidal frequency lee waves generated by the combination of the steep channel topography and strong tidal flow. We evaluate the model performance using observations of velocity and temperature from two through-water-column moorings in the channel separating the two reefs. Three different global ocean state estimate datasets (global HYCOM, CSIRO Bluelink, CSIRO climatology atlas) were used to provide the model initial and boundary conditions, and the model outputs from each were evaluated against the field observations. The scenario incorporating the CSIRO Bluelink data performed best in terms of through-water-column Murphy skill scores of water temperature and eastward velocity variability in the channel. The model captures the observed vertical structure of the tidal (M2) and super-tidal (M4) frequency temperature and velocity oscillations. The model also predicts the direction and magnitude of the M2 internal tide energy flux. An energy analysis reveals a net convergence of the M2 energy flux and a divergence of the M4 energy flux in the channel, indicating the channel is a region of either energy transfer to higher frequencies or energy loss to dissipation.
This conclusion is supported by the mooring observations that reveal high frequency lee waves breaking on the turning phase of the tide.
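The Murphy skill score used above to rank the boundary-condition scenarios compares a model's mean squared error against that of a reference prediction (often climatology): 1 is perfect, 0 matches the reference, and negative values are worse than the reference. A minimal implementation with illustrative numbers (not the study's data):

```python
import numpy as np

def murphy_skill(model, obs, reference):
    """Murphy skill score: 1 - MSE(model) / MSE(reference)."""
    obs = np.asarray(obs, dtype=float)
    mse = lambda pred: float(np.mean((np.asarray(pred, dtype=float) - obs) ** 2))
    return 1.0 - mse(model) / mse(reference)

obs = [20.1, 20.4, 21.0, 21.6, 21.2]   # e.g. observed water temperature (deg C)
model = [20.0, 20.5, 20.9, 21.5, 21.3]
climatology = [20.8] * 5               # reference: a constant mean state
print(round(murphy_skill(model, obs, climatology), 3))  # 0.966
```

Computing the score through the water column, as the study does, simply means evaluating it per depth level and comparing profiles between scenarios.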
2008-10-20
embedded intelligence and cultural adaptations to the onslaught of robots in society. This volume constitutes a key contribution to the body of... Robotics, CNRS/Toulouse University, France; Nathalie COLINEAU, Language & Multi-modality, CSIRO, Australia; Roberto CORDESCHI, Computation & Communication... Intelligence, SONY CSL Paris; Nik KASABOV, Computer and Information Sciences, Auckland University, New Zealand; Oussama KHATIB, Robotics & Artificial
Health Monitoring of Thermal Protection Systems - Preliminary Measurements and Design Specifications
NASA Technical Reports Server (NTRS)
Scott, D. A.; Price, D. C.
2007-01-01
The work reported here is the first stage of a project that aims to develop a health monitoring system for Thermal Protection Systems (TPS) that enables a vehicle to safely re-enter the Earth's atmosphere. The TPS health monitoring system is to be integrated into an existing acoustic emissions-based Concept Demonstrator, developed by CSIRO, which has been previously demonstrated for evaluating impact damage of aerospace systems.
Statistics of the MASIV 5 GHz VLA Scintillation Survey
2007-10-01
76, Epping, NSW, Australia, E-mail: david.jauncey@csiro.au. James Lovell: School of Mathematics & Physics, University of Tasmania, GPO Box 252... Technology, Pasadena CA 91125, E-mail: jpm@astro.caltech.edu. Hayley Bignall: Joint Institute for VLBI in Europe, Postbus 2, 7900 AA Dwingeloo, The... 369, 449. [7] Lovell, J. E. J., et al., First Results from MASIV: The Microarcsecond Scintillation-induced Variability Survey, 2003, AJ, 126, 1699
Finding, Weighting and Describing Venues: CSIRO at the 2012 TREC Contextual Suggestion Track
2012-11-01
commercial system (namely the Google Places API), and whether the current experimental setup encourages diversity. The remaining two submissions... baseline systems that rely on the Google Places API and the user reviews it provides, and two more complex systems that incorporate information... from the Foursquare API, and are sensitive to personal preference and time. The remainder of this paper is structured as follows. The next section
Hendrie, Gilly A; Baird, Danielle; Golley, Rebecca K; Noakes, Manny
2017-01-09
There are few dietary assessment tools that are scientifically developed and freely available online. The Commonwealth Scientific and Industrial Research Organisation (CSIRO) Healthy Diet Score survey asks questions about the quantity, quality, and variety of foods consumed. On completion, individuals receive a personalised Diet Score reflecting their overall compliance with the Australian Dietary Guidelines. Over 145,000 Australians have completed the survey since it was launched in May 2015. The average Diet Score was 58.8 out of a possible 100 (SD = 12.9). Women scored higher than men; older adults higher than younger adults; and normal-weight adults higher than obese adults. It was most common to receive feedback about discretionary foods (73.8% of the sample), followed by dairy foods (55.5%) and healthy fats (47.0%). Results suggest that Australians' diets are not consistent with the recommendations in the guidelines. The combination of using technology and providing the tool free of charge has attracted substantial traffic to the website, providing valuable insights into what Australians report eating. The use of technology has also enhanced the user experience, with individuals receiving immediate and personalised feedback. This survey tool will be useful for monitoring population diet quality and understanding the degree to which Australians' diets comply with dietary guidelines.
EffectorP: predicting fungal effector proteins from secretomes using machine learning.
Sperschneider, Jana; Gardiner, Donald M; Dodds, Peter N; Tini, Francesco; Covarelli, Lorenzo; Singh, Karam B; Manners, John M; Taylor, Jennifer M
2016-04-01
Eukaryotic filamentous plant pathogens secrete effector proteins that modulate the host cell to facilitate infection. Computational effector candidate identification and subsequent functional characterization deliver valuable insights into plant-pathogen interactions. However, effector prediction in fungi has been challenging due to a lack of unifying sequence features such as conserved N-terminal sequence motifs. Fungal effectors are commonly predicted from secretomes based on criteria such as small size and cysteine richness, an approach that suffers from poor accuracy. We present EffectorP, which pioneers the application of machine learning to fungal effector prediction. EffectorP improves fungal effector prediction from secretomes based on a robust signal of sequence-derived properties, achieving sensitivity and specificity of over 80%. Features that discriminate fungal effectors from secreted non-effectors are predominantly sequence length, molecular weight and protein net charge, as well as cysteine, serine and tryptophan content. We demonstrate that EffectorP is powerful when combined with in planta expression data for predicting high-priority effector candidates. EffectorP is the first prediction program for fungal effectors based on machine learning. Our findings will facilitate functional fungal effector studies and improve our understanding of effectors in plant-pathogen interactions. EffectorP is available at http://effectorp.csiro.au. © 2015 CSIRO. New Phytologist © 2015 New Phytologist Trust.
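The discriminative features the abstract names (sequence length, molecular weight, net charge, and cysteine/serine/tryptophan content) are all computable directly from an amino acid sequence. A sketch follows; the residue masses are rounded averages, the net-charge rule is a crude pH-neutral count, and nothing here is EffectorP's trained classifier:

```python
# Approximate average residue masses (Da); rounded, for illustration only.
MASS = {"A": 71.1, "C": 103.1, "D": 115.1, "E": 129.1, "F": 147.2,
        "G": 57.1, "H": 137.1, "I": 113.2, "K": 128.2, "L": 113.2,
        "M": 131.2, "N": 114.1, "P": 97.1, "Q": 128.1, "R": 156.2,
        "S": 87.1, "T": 101.1, "V": 99.1, "W": 186.2, "Y": 163.2}

def features(seq: str) -> dict:
    """Sequence-derived properties of the kind used to separate effectors
    from secreted non-effectors."""
    n = len(seq)
    # Crude net charge near pH 7: basic residues minus acidic residues.
    charge = sum(seq.count(r) for r in "KR") - sum(seq.count(r) for r in "DE")
    return {
        "length": n,
        "mol_weight": sum(MASS[r] for r in seq) + 18.0,  # + one water
        "net_charge": charge,
        "cys_frac": seq.count("C") / n,
        "ser_frac": seq.count("S") / n,
        "trp_frac": seq.count("W") / n,
    }

f = features("MKCSSCWKSC")  # a made-up toy peptide
print(f["length"], f["net_charge"], round(f["cys_frac"], 1))  # 10 2 0.3
```

In a real pipeline these feature vectors would be fed to a trained classifier; the point of the sketch is only that the inputs are cheap, alignment-free properties of the sequence itself.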
Iron Cycling at Corroding Carbon Steel Surfaces
2013-01-01
product corrosion was examined using ESEM. Samples were also sent to CSIRO (Floreat Park, WA, Australia) for selected area electron diffraction (SAED... penetration and RMS roughness values <2.0 μm. Discussion: Corrosion product mineralogy can be used to interpret the role of microorganisms in MIC (McNeil & Odom... investigate corrosion using defined mixed cultures of FeOB and FeRB. Different combinations of organisms and marine media were chosen to provide a
A case of remission from pre-diabetes following intermittent hypoxic training.
Fuller, Nicholas R; Courtney, Rosalba
2016-01-01
A female patient (49 years of age) with obesity (body mass index: 35.3 kg/m2) and diagnosed with pre-diabetes presented to the clinic of one of the authors (RC) with recent weight gain (approximately 10 kg) over the preceding 12 months, despite several unsuccessful attempts at weight loss. She reported being short of breath performing light activities and feeling fatigued the majority of the time. Treatment consisted of a five-week run-in period following the Commonwealth Scientific and Industrial Research Organisation (CSIRO) diet, followed by four weeks of the CSIRO diet plus intermittent hypoxic training (IHT) using the GO2® altitude training device. Anthropometric measures, blood tests and questionnaires were completed before treatment (week 0), at the end of the diet phase (week 5), and at the end of the diet-plus-IHT phase (week 9). At the end of week five, the patient had lost some weight and had an improvement in glycaemic control. However, there was a clinically greater improvement in weight loss and glycaemic control from week five to nine following the IHT, resulting in remission from pre-diabetes. This case study shows that incorporating IHT has benefits beyond a standard dietary approach, helping to achieve remission from pre-diabetes back to a normal fasting glucose state. Copyright © 2016 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
LIGO optics manufacture: figuring transmission core optics for best performance
NASA Astrophysics Data System (ADS)
Leistner, Achim J.; Farrant, David I.; Oreb, Bozenko F.; Pavlovic, Edita; Seckold, Jeffrey A.; Walsh, Christopher J.
1999-11-01
The Laser Interferometer Gravitational-wave Observatory (LIGO) is a long-baseline Michelson interferometer with arms up to 4 km in length, each containing a Fabry-Perot cavity. CSIRO has manufactured 32 core optical components for the LIGO interferometer, consisting of five different groups of optical elements. Long radii of curvature (7 km - 15 km) and tolerances on the order of plus or minus 200 m in the radius are specified. Although the components are made of hyper-pure fused silica, there are some residual inhomogeneities in the material. The optics used in transmission must be figured so that the influence of these material inhomogeneities on the transmitted wavefront is compensated for. This was done by correcting the surface figure on side 2 of the optics. The approach we took to manufacturing the transmission optics was to calculate the quadratic component of the refractive index gradient Δn of the substrate from measurements of the transmitted wavefront and the surface profiles of the two substrate surfaces, determine what shape had to be produced on side 2 of the substrates to compensate for this gradient, and then produce it by optical polishing. The surfaces were polished on rigid solid laps of Zerodur coated with a thin layer of Teflon as the polishing matrix, a technique developed by CSIRO for super-polishing very flat surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Jiafu; Phipps, S.J.; Pitman, A.J.
The CSIRO Mk3L climate system model, a reduced-resolution coupled general circulation model, has previously been described in this journal. The model is configured for millennium-scale or multiple-century-scale simulations. This paper reports the impact of replacing the relatively simple land surface scheme that is the default parameterisation in Mk3L with a sophisticated land surface model that simulates the terrestrial energy, water and carbon balance in a physically and biologically consistent way. An evaluation of the new model's near-surface climatology highlights strengths and weaknesses, but overall the atmospheric variables, including the near-surface air temperature and precipitation, are simulated well. The impact of the more sophisticated land surface model on existing variables is relatively small, but generally positive. More significantly, the new land surface scheme allows an examination of surface carbon-related quantities including net primary productivity, which adds significantly to the capacity of Mk3L. Overall, results demonstrate that this reduced-resolution climate model is a good foundation for exploring long-time-scale phenomena. The addition of the more sophisticated land surface model enables an exploration of important Earth system questions including land cover change and abrupt changes in terrestrial carbon storage.
Steele, L. P. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia; Krummel, P. B. [Commonwealth Scientific and Industrial Research Organization (CSIRO),; Langenfelds, R. L. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia
2008-01-01
Individual measurements have been obtained from flask air samples returned to the CSIRO GASLAB. Typical sample storage times range from days to weeks for some sites (e.g. Cape Grim, and aircraft over Tasmania and Bass Strait) to as much as one year for Macquarie Island and the Antarctic sites. Experiments carried out to test for changes in sample CO2 mixing ratio during storage have shown significant drifts in some flask types over test periods of several months to years (Cooper et al., 1999). Corrections derived from the test results are applied to network data according to flask type. These measurements indicate a rise in annual average atmospheric CO2 concentration from 357.72 parts per million by volume (ppmv) in 1992 to 383.05 ppmv in 2006, or an increase in annual average of about 1.81 ppmv/year. These flask data may be compared with other flask measurements from the Scripps Institution of Oceanography, available through 2004 in TRENDS; both indicate an annual average increase of 1.72 ppmv/year through 2004. Differences may be attributed to different sampling times or days, different numbers of samples, and different curve-fitting techniques used to obtain monthly and annual average numbers from flask data. Measurement error in flask data is believed to be small (Masarie et al., 2001).
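The growth rate quoted above follows directly from the endpoint annual averages in the record; a quick arithmetic check:

```python
# Annual-average CO2 values quoted in the record (ppmv)
co2_1992 = 357.72
co2_2006 = 383.05

# Average increase per year over the 14-year span
rate = (co2_2006 - co2_1992) / (2006 - 1992)
print(round(rate, 2))  # 1.81 ppmv/year
```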
Using the FAIMS Mobile App for field data recording
NASA Astrophysics Data System (ADS)
Ballsun-Stanton, Brian; Klump, Jens; Ross, Shawn
2016-04-01
Multiple people creating data in the field poses a hard technical problem: our ``web 2.0'' environment presumes constant connectivity, places data ``authority'' on centralised servers, and sees mobile devices as tools for presentation rather than origination. A particular design challenge is the remoteness of the sampling locations, hundreds of kilometres away from network access. The alternative, however, is hand collection with a lengthy, error-prone, and expensive digitisation process. This poster will present a field-tested open-source solution to field data recording. This solution, originally created by a community of archaeologists, needed to accommodate diverse recording methodologies. The community could not agree on standard vocabularies, workflows, attributes, or methodologies, but most agreed that an app to ``record data in the field'' was desirable. As a result, the app is generalised for field data collection; not only can it record a range of data types, but it is deeply customisable. The NeCTAR / ARC funded FAIMS Project therefore created an app which allows for arbitrary data collection in the field. In order to accomplish this ambitious goal, FAIMS relied heavily on OSS projects including: spatialite and gdal (for GIS support), sqlite (for a lightweight key-attribute-value datastore), Javarosa and Beanshell (for UI and scripting), Ruby, and Linux. Only by standing on the shoulders of giants was FAIMS able to make a flexible and highly generalisable field data collection system that CSIRO geoscientists were able to customise to suit most of their completely unanticipated needs.
FAIMS shows the utility of OSS software development and provides geoscientists a way forward for edge-case field data collection. Moreover, as the data are completely open and exports are scriptable, federation with other data services is both possible and encouraged. This poster will describe the internal architecture of the FAIMS app, show how it was used by CSIRO in the field, and display a graph of its OSS heritage. The app is available from Google Play, the recording module can be found at https://github.com/FAIMS/CSIRO-Water-Samples, and the exporter we used can be found at https://github.com/FAIMS/shapefileExport. You can make your own data-collection modules for free via the documentation at https://www.fedarch.org/support/#2. See the chapter by Sobotkova et al. in {Mobilizing the Past}, forthcoming 2016. Ross, S., et al. (2013) Creating eResearch tools for archaeologists: The federated archaeological information management systems project [online]. {Australian Archaeology}. Ross, S., et al. (2015) Building the bazaar: enhancing archaeological field recording through an open source approach. In Wilson, A. T., & Edwards, B. (Eds.), {Open Source Archaeology: Ethics and Practice}. Reid, N., et al. (2015) {A mobile app for geochemical field data acquisition}. Poster presented at AGU Fall Meeting 2015, San Francisco.
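The ``lightweight key-attribute-value datastore'' role that the abstract assigns to sqlite can be illustrated with a minimal sketch. The table and column names below are invented for illustration, not the actual FAIMS schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE record_value (
        record_id TEXT,
        attribute TEXT,
        value     TEXT
    )
""")
# Any recording methodology reduces to rows, so adding a new attribute
# in the field requires no schema migration.
conn.executemany(
    "INSERT INTO record_value VALUES (?, ?, ?)",
    [
        ("sample-001", "lithology", "granite"),
        ("sample-001", "strike", "042"),
        ("sample-001", "collector", "field-team-A"),
    ],
)
lithology = conn.execute(
    "SELECT value FROM record_value WHERE record_id = ? AND attribute = ?",
    ("sample-001", "lithology"),
).fetchone()[0]
print(lithology)  # granite
```

This is why such a store suits methodologies the community could not standardise: the schema never constrains which attributes a project records.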
Bernard Yarnton Mills AC FAA. 8 August 1920 - 25 April 2011
NASA Astrophysics Data System (ADS)
Frater, R. H.; Goss, W. M.; Wendt, H. W.
2013-12-01
Bernie Mills is remembered globally as an influential pioneer in the evolving field of radio astronomy. His contributions with the 'Mills Cross' at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Division of Radiophysics, and later at the University of Sydney's School of Physics, together with the development of the Molonglo Observatory Synthesis Telescope (MOST), were widely recognized as astronomy evolved in the years 1948-85 and radio astronomy changed the astronomer's viewpoint as a host of new objects were discovered.
Anticipating Installation Natural Resource Climate Change Concerns: The Data
2013-10-15
period of development (1 to 2 decades) include: 1. CM2.1 (GFDL model, NOAA Princeton) 2. E-H and E-R (NASA GISS) 3. HadGEM1 (Hadley UKMO) 4. CGCM3 ... sixth GCM, the Australian CSIRO model, to increase the sample. Thus the adopted GCMs include: 1. GFDL model (NOAA Princeton) 6. GISS Model E (NASA ... Sciences Laboratory (USDA 2012) created data that would be useful to the related threshold project. These US Forest Service data were similar to those of
Australian polymer banknote: a review
NASA Astrophysics Data System (ADS)
Wilson, Gerard J.
1998-04-01
In 1996 Australia became the first country in the world to have an all-polymer currency in general circulation. Australia's first polymer note was a commemorative note issued in January 1988 to celebrate the bicentenary of European settlement. That note was the culmination of almost twenty years' collaboration between the Reserve Bank of Australia and the Commonwealth Scientific and Industrial Research Organisation. This paper traces the development of the Bicentennial banknote from its conception at a brainstorming meeting between RBA and CSIRO scientists in 1968 through to its release in 1988.
Contribution of the AN/TPS-3 Radar Antenna to Australian radio astronomy
NASA Astrophysics Data System (ADS)
Wendt, Harry; Orchiston, Wayne
2018-04-01
The CSIRO Division of Radiophysics used the WWII surplus AN/TPS-3 radar dishes for their early solar radio astronomy research and eclipse observations. These aerials were also used in a spaced (Michelson) interferometer configuration in the late 1940s to investigate solar limb brightening at 600 MHz. This work paralleled early solar observations at Cambridge. None of the Australian research results using the spaced interferometry technique appeared in publications, and the invention of the solar grating array in 1950 made further use of the method redundant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kay, John; Stanislowski, Joshua; Tolbert, Scott
Utilities continue to investigate ways to decrease their carbon footprint. Carbon capture and storage (CCS) can enable existing power generation facilities to maintain operations and address carbon reduction. Subtask 2.1 – Pathway to Low-Carbon Lignite Utilization focused on several research areas in an effort to find ways to decrease the cost of capture across both precombustion and postcombustion platforms. Two postcombustion capture solvents were tested, one from CO2 Solutions Inc. and one from ARCTECH, Inc. The CO2 Solutions solvent had been evaluated previously, and the company had incorporated the concept of a rotating packed bed (RPB) to replace the traditional packed columns typically used. In the limited testing performed at the Energy & Environmental Research Center (EERC), no CO2 reduction benefit was seen from the RPB; however, if the technology could be scaled up, it may introduce some savings in capital expense and overall system footprint. Rudimentary tests were conducted with the ARCTECH solvent to evaluate whether it could be utilized in a spray tower configuration contactor and capture CO2, SO2, and NOx. After loading, this solvent can be processed into an additional product for filtering wastewater, providing a second-tier usable product. Modeling of the RPB process for scaling to a 550-MW power system was also conducted. The reduced cost of RPB systems, combined with a smaller footprint, highlights the potential for reducing the cost of capturing CO2; however, more extensive testing is needed to truly evaluate their potential for use at full scale. Hydrogen separation membranes from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) were evaluated through precombustion testing. These had also been previously tested and were improved by CSIRO for this test campaign. They are composed of a vanadium alloy, which is less expensive than the palladium alloys that are typically used.
Their performance was good, and they may be good candidates for medium-pressure gasifiers, but much more scale-up work is needed. Next-generation power cycles are currently being developed and show promise for high efficiency, and the utilization of supercritical CO2 to drive a turbine could significantly increase cycle efficiency over traditional steam cycles. The EERC evaluated pressurized oxy-combustion technology from the standpoint of CO2 purification. If impurities can be removed, the costs for CO2 capture can be lowered significantly compared with postcombustion capture systems. Impurity removal consisted of a simple water scrubber referred to as the DeSNOx process. The process worked well, but corrosion management is crucial to its success. A model of this process was constructed. Finally, an integrated gasification combined-cycle (IGCC) system model, developed by the Massachusetts Institute of Technology (MIT), was modified to allow for the modeling of membrane systems in the IGCC process. This modified model was used to provide an assessment of the costs of membrane use at full scale. An economic estimation indicated a 14% reduction in cost for CO2 separation over the SELEXOL™ process. This subtask was funded through the EERC–DOE Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FE0024233. Nonfederal sponsors for this project were the North Dakota Industrial Commission, Basin Electric Power Cooperative, and Allete, Inc. (including BNI Coal and Minnesota Power).
Optical properties of Southern Hemisphere aerosols: Report of the joint CSIRO/NASA study
NASA Technical Reports Server (NTRS)
Gras, John L.; Platt, C. Martin; Huffaker, R. Milton; Jones, William D.; Kavaya, Michael J.; Gras, John L.
1988-01-01
This study was made in support of the LAWS and GLOBE programs, which aim to design a suitable Doppler lidar system for measuring global winds from a satellite. Observations were taken from 5 deg S to 45 deg S along and off the E and SE Australian coast, thus obtaining representative samples over a large latitude range. Observations were made between 0 and 6 km altitude of aerosol physical and chemical properties in situ from the CSIRO F-27 aircraft; of lidar backscatter coefficients at 10.6 micron wavelength from the F-27 aircraft; of lidar backscatter profiles at 0.694 microns at Sale, SE Australia; and of lidar backscatter profiles at 0.532 microns at Cowley Beach, NE Australia. Both calculations and observations in the free troposphere gave a backscatter coefficient of 1-2 x 10^-11 /m/sr at 10.6 microns, although the accuracies of the instruments were marginal at this level. Equivalent figures were 2-8 x 10^-9 /m/sr (aerosol) and 9 x 10^-9 to 2 x 10^-8 /m/sr (lidar) at 0.694 microns wavelength at Sale; and 3.7 x 10^-9 /m/sr (aerosol) and 10^-8 to 10^-7 /m/sr (lidar) at 0.532 microns wavelength at Cowley Beach. The measured backscatter coefficients at 0.694 and 0.532 microns were consistently higher than the values calculated from aerosol size distributions, by factors of typically 2 to 10.
NASA Astrophysics Data System (ADS)
Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier
2015-03-01
With the advances of PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-Amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.
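The reported accuracy metric can be made concrete. A sketch of a mean absolute percentage error computation follows; the SUVR-like values are invented for illustration and are not from the study:

```python
def mape(reference, estimate):
    """Mean absolute percentage error of estimate against reference."""
    errors = [abs(e - r) / abs(r) * 100.0
              for r, e in zip(reference, estimate)]
    return sum(errors) / len(errors)

# Hypothetical MR-based vs PET-only quantification values
mr_based = [1.20, 1.55, 2.10, 1.05]
pet_only = [1.18, 1.60, 2.05, 1.08]
print(round(mape(mr_based, pet_only), 2))
```

On these invented values the error is about 2.5 %, which is the sense in which the paper's sub-5 % figure should be read: an average of per-case percentage deviations from the MR-based reference.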
Chesser, R. Terry
2009-01-01
Systematists argue that the importance of our work lies not only in the elucidation of evolutionary relationships, but also in the incorporation of evolutionary information into classifications and the use of these classifications by government agencies, nongovernmental organizations, professional scientists, and others interested in biodiversity. If this is true, and I think that it is, then synthetic publications that make our findings accessible to a wide audience, such as Christidis and Boles' new Systematics and Taxonomy of Australian Birds, may be among the most significant works that we publish.
Marketable transport fuels made from Julia Creek shale oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-03-01
CSR Limited and the CSIRO Division of Energy Chemistry have been working on the problem of producing refined products from the Julia Creek deposit in Queensland, Australia. Two samples of shale oil, retorted at different temperatures from Julia Creek oil shale, were found to differ markedly in aromaticity. Using conventional hydrotreating technology, high quality jet and diesel fuels could be made from the less aromatic oil. Naphtha suitable for isomerization and reforming to gasoline could be produced from both oils. This paper discusses oil properties, stabilization of topped crudes, second stage hydrotreatment, and naphtha hydrotreating. 1 figure, 4 tables.
Linking earth science informatics resources into uninterrupted digital value chains
NASA Astrophysics Data System (ADS)
Woodcock, Robert; Angreani, Rini; Cox, Simon; Fraser, Ryan; Golodoniuc, Pavel; Klump, Jens; Rankine, Terry; Robertson, Jess; Vote, Josh
2015-04-01
The CSIRO Mineral Resources Flagship was established to tackle medium- to long-term challenges facing the Australian mineral industry across the value chain from exploration and mining through mineral processing within the framework of an economically, environmentally and socially sustainable minerals industry. This broad portfolio demands collaboration and data exchange with a broad range of participants and data providers across government, research and industry. It is an ideal environment to link geoscience informatics platforms to application across the resource extraction industry and to unlock the value of data integration between traditionally discrete parts of the minerals digital value chain. Despite the potential benefits, data integration remains an elusive goal within research and industry. Many projects use only a subset of available data types in an integrated manner, often maintaining the traditional discipline-based data 'silos'. Integrating data across the entire minerals digital value chain is an expensive proposition involving multiple disciplines and, significantly, multiple data sources both internal and external to any single organisation. Differing vocabularies and data formats, along with access regimes to appropriate analysis software and equipment all hamper the sharing and exchange of information. AuScope has addressed the challenge of data exchange across organisations nationally, and established a national geosciences information infrastructure using open standards-based web services. Federated across a wide variety of organisations, the resulting infrastructure contains a wide variety of live and updated data types. The community data standards and infrastructure platforms that underpin AuScope provide important new datasets and multi-agency links independent of software and hardware differences. 
AuScope has thus created an infrastructure, a platform of technologies and the opportunity for new ways of working with and integrating disparate data at much lower cost. An early example of this approach is the value generated by combining geological and metallurgical data sets as part of the rapidly growing field of geometallurgy. This not only provides a far better understanding of the impact of geological variability on ore processing but also leads to new thinking on the types and characteristics of data sets collected at various stages of the exploration and mining process. The Mineral Resources Flagship is linking its research activities to the AuScope infrastructure, exploiting the technology internally to create a platform for integrated research across the minerals value chain and improved interaction with industry. Referred to as the 'Early Access Virtual Lab', the system will be fully interoperable with AuScope and international infrastructures using open standards like GeosciML. Secured access is provided to allow confidential collaboration with industry when required. This presentation will discuss how the CSIRO Mineral Resources Flagship is building on the AuScope infrastructure to transform the way that data and data products are identified, shared, integrated, and reused, to unlock the benefits of true integration of research efforts across the minerals digital value chain.
NASA Astrophysics Data System (ADS)
Bastrakova, I.; Klump, J. F.; McInnes, B.; Wyborn, L. A.; Brown, A.
2015-12-01
The International Geo-Sample Number (IGSN) provides a globally unique identifier for physical samples used to generate analytical data. This unique identifier makes it possible to link each physical sample to any analytical data generated from that sample, as well as to any publications derived from those data. IGSN is particularly important for geochemical and geochronological data, where numerous analytical techniques can be applied at multiple analytical facilities, not only to the parent rock sample itself but also to derived sample splits and mineral separates. Australia now has three agencies implementing IGSN: Geoscience Australia, CSIRO and Curtin University. All three have now combined into a single project, funded by the Australian Research Data Services program, to better coordinate the implementation of IGSN in Australia, in particular how these agencies allocate IGSN identifiers. The project will register samples from pilot applications in each agency, including the CSIRO National Collection of Mineral Spectra database, the Geoscience Australia sample collection, and the Digital Mineral Library of the John de Laeter Centre for Isotope Research at Curtin University. These local agency catalogues will then be aggregated into an Australian portal, which will ultimately be expanded to cover all geoscience specimens. The development of this portal will also involve developing a common core metadata schema for the description of Australian geoscience specimens, as well as formulating agreed governance models for registering Australian samples. These developments aim to enable a common approach across Australian academic and research organisations and government agencies for the unique identification of geoscience specimens and any analytical data and/or publications derived from them. The emerging pattern of governance and technical collaboration established in Australia may also serve as a blueprint for similar collaborations internationally.
Sample Identification at Scale - Implementing IGSN in a Research Agency
NASA Astrophysics Data System (ADS)
Klump, J. F.; Golodoniuc, P.; Wyborn, L. A.; Devaraju, A.; Fraser, R.
2015-12-01
Earth sciences are largely observational and rely on natural samples, the types of which vary significantly between science disciplines. Sharing and referencing samples in the scientific literature and across the Web requires globally unique identifiers, which are essential for disambiguation. This practice is very common in other fields, e.g. ISBNs in publishing and DOIs in the scientific literature. In the Earth sciences, however, this is still often done in an ad hoc manner without the use of unique identifiers. The International Geo Sample Number (IGSN) system provides a persistent, globally unique label for identifying environmental samples. As an IGSN allocating agency, CSIRO implements the IGSN registration service at the organisational scale with contributions from multiple research groups. The Capricorn Distal Footprints project is one of the first pioneers and early adopters of the technology in Australia. For this project, IGSN provides a mechanism for the identification of new and legacy samples, as well as derived sub-samples. It will ensure transparency and reproducibility in various geochemical sampling campaigns involving a diversity of sampling methods. Hence, diverse geochemical and isotopic results can be linked back to the parent sample, particularly where multiple children of that sample have also been analysed. The IGSN integration for this project is still in its early stages and requires further consultation on the governance mechanisms needed to allow efficient collaboration within CSIRO and with partner organisations, including naming conventions, service interfaces, etc. In this work, we present the results of the initial implementation of IGSN in the context of the Capricorn Distal Footprints project. This study has so far demonstrated the effectiveness of the proposed approach, while maintaining the flexibility to adapt to various media types, which is critical in the context of a multi-disciplinary project.
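The parent/sub-sample linkage described above can be sketched as a small in-memory registry. The identifiers below are invented placeholders, not real registered IGSNs, and this is an illustration of the linking pattern rather than the actual IGSN service:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sample:
    """A physical sample identified by a (hypothetical) IGSN."""
    igsn: str
    parent: Optional["Sample"] = None
    children: List["Sample"] = field(default_factory=list)

    def derive(self, igsn: str) -> "Sample":
        """Register a sub-sample, e.g. a split or a mineral separate."""
        child = Sample(igsn=igsn, parent=self)
        self.children.append(child)
        return child

    def root(self) -> "Sample":
        """Walk back to the originally collected parent sample."""
        return self if self.parent is None else self.parent.root()

rock = Sample("XXAA000001")
separate = rock.derive("XXAA000002").derive("XXAA000003")
print(separate.root().igsn)  # XXAA000001
```

The point of the pattern is that analytical results attached to any child can always be traced to the originally collected sample, even across several generations of splits.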
Wyld, Belinda; Harrison, Adam; Noakes, Manny
2010-12-01
The CSIRO Total Wellbeing Diet (TWD) publication is an evidence-based weight management strategy utilising a structured higher-protein diet as part of a nutritionally balanced lifestyle programme. Despite its popularity, the impact of TWD on the weight status, weight loss and food choices of Australians was unknown. An independent representative survey was conducted in 2006. Sociodemographic differences in awareness and use of TWD, and the impact on weight status and well-being, were investigated via computer-aided telephone interviews and web-based surveys. Australia. A total of 5026 men and women aged 18-60 years. Consumers were highly aware of TWD (66 %), with personal use reported by 7·5 % of the total sample (n 5026). An additional 2·5 % (126 people) were members of a household that used TWD. In all, 80 % of TWD purchasers actively used the eating plan, with approximately 3·8 % reporting an average weight loss of 5·7 kg (sd = 1·72 kg; range = 1-13 kg). Results showed that awareness was greatest among women (73·79 % v. 58·27 %), those over 50 years of age (69·39 % v. 62·88 %), those with no children in the household (69·00 % v. 64·88 %), tertiary-educated people (72·58 % v. 63·22 %) and those with more previous weight loss attempts (79·66 % v. 70·24 %). Logistic regression was unable to predict an identifiable sociodemographic profile of TWD users. The present study shows widespread uptake of TWD in Australia with few sociodemographic differences. Self-reported increased awareness of nutrition and well-being, as well as weight loss, indicates that TWD has been a successful delivery mechanism for lifestyle advice.
CSIRO GASLAB Network: Individual Flask Measurements of Atmospheric Trace Gases (April 2003)
Steele, L. P. [Commonwealth Scientific and Industrial Research Organisation (CSIRO), Aspendale, Victoria, Australia; Krummel, P. R. [Commonwealth Scientific and Industrial Research Organisation (CSIRO), Aspendale, Victoria, Australia; Langenfelds, R. L. [Commonwealth Scientific and Industrial Research Organisation (CSIRO), Aspendale, Victoria, Australia
2003-04-01
Data are available for four atmospheric trace gases at nine stationary sites and one moving platform (aircraft over Cape Grim, Tasmania, and Bass Strait, between the Australian continent and Tasmania). The trace gases are carbon dioxide (CO2), methane (CH4), carbon monoxide (CO), and hydrogen (H2). Measurements of δ13C from CO2 are also included in this database. The nine stationary sites are, from north to south: Alert, Canada; Shetland Islands, Scotland; Estevan Point, Canada; Mauna Loa, Hawaii; Cape Ferguson, Australia; Cape Grim, Australia (Tasmania); Macquarie Island, Australia; Mawson, Antarctica; and the South Pole station, Antarctica.
The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C. G.; School of Physics, University of Melbourne, Parkville VIC; CODES Centre of Excellence, University of Tasmania, Hobart TAS
2010-04-06
Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.
Future climate effects on suitability for growth of oil palms in Malaysia and Indonesia
Paterson, R. Russell M.; Kumar, Lalit; Taylor, Subhashni; Lima, Nelson
2015-01-01
The production of palm oil (PO) is highly profitable. The economies of the principal producers, Malaysia and Indonesia, and others, benefit considerably. Climate change (CC) will most likely have an impact on the distribution of oil palms (OP) (Elaeis guineensis). Here we present modelled CC projections with respect to the suitability of growing OP in Malaysia and Indonesia. A process-oriented niche model of OP was developed using CLIMEX to estimate its potential distribution under current and future climate scenarios. Two Global Climate Models (GCMs), CSIRO-Mk3.0 and MIROC-H, were used to explore the impacts of CC under the A1B and A2 scenarios for 2030, 2070 and 2100. Decreases in climatic suitability for OP in the region were gradual by 2030 but became more pronounced by 2100. These projections imply that OP growth will be affected severely by CC, with obvious implications for the economies of (a) Indonesia and Malaysia and (b) the PO industry, but with potential benefits towards reducing CC. A possible remedial action is to concentrate research on development of new varieties of OP that are less vulnerable to CC. PMID:26399638
Ronald N. Bracewell: An Appreciation
NASA Astrophysics Data System (ADS)
Thompson, A. Richard; Frater, Robert H.
2010-11-01
Ronald Newbold Bracewell (1921-2007) made fundamental contributions to the development of radio astronomy in the areas of interferometry, signal processing, and imaging, and also to tomography, various areas of data analysis, and the understanding of Fourier transforms. He was born in Sydney, Australia, and received a B.Sc. degree in mathematics and physics, and B.E. and M.E. degrees in electrical engineering, from the University of Sydney, and his Ph.D. from the University of Cambridge, U.K., for research on the ionosphere. In 1949 he joined the Radiophysics Laboratory of CSIRO, where he became interested in radio astronomy. In 1955 he moved to Stanford University, California, where he became Lewis M. Terman Professor of Electrical Engineering. He retired from teaching in 1991, but continued to be active in radio astronomy and other applications of imaging techniques. During his career he published ten books and more than 250 papers. Honors that he received include the Duddell Premium of the Institution of Electrical Engineers, London, the Hertz Medal of the IEEE, and the Order of Australia. For his work on imaging in tomography he was elected to Associate Membership of the Institute of Medicine of the U.S. National Academy of Sciences.
Technologies and practices for maintaining and publishing earth science vocabularies
NASA Astrophysics Data System (ADS)
Cox, Simon; Yu, Jonathan; Williams, Megan; Giabardo, Fabrizio; Lowe, Dominic
2015-04-01
Shared vocabularies are a key element in geoscience data interoperability. Many organizations curate vocabularies, with most Geologic Surveys having a long history of development of lexicons and authority tables. However, their mode of publication is heterogeneous, ranging from PDFs and HTML web pages, spreadsheets and CSV, through various user-interfaces, and public and private APIs. Content maintenance ranges from tightly-governed and externally opaque, through various community processes, all the way to crowd-sourcing ('folksonomies'). Meanwhile, there is an increasing expectation of greater harmonization and vocabulary re-use, which creates requirements for standardized content formalization and APIs, along with transparent content maintenance and versioning. We have been trialling a combination of processes and software dealing with vocabulary formalization, registration, search and linking. We use the Simple Knowledge Organization System (SKOS) to provide a generic interface to content. SKOS is an RDF technology for multi-lingual, hierarchical vocabularies, oriented around 'concepts' denoted by URIs, and thus consistent with Linked Open Data. SKOS may be mixed in with classes and properties from specialized ontologies which provide a more specific interface when required. We have developed a suite of practices and techniques for conversion of content from the source technologies and styles into SKOS, largely based on spreadsheet manipulation before RDF conversion, and SPARQL afterwards. The workflow for each vocabulary must be adapted to match the specific inputs. In linked data applications, two requirements are paramount for user confidence: (i) the URI that denotes a vocabulary item is persistent, and should be dereferenceable indefinitely; (ii) the history and status of the resource denoted by a URI must be available.
This is implemented by the Linked Data Registry (LDR), originally developed for the World Meteorological Organization and the UK Environment Agency, and now adapted and enhanced for deployment by CSIRO and the Australian Bureau of Meteorology. The LDR applies a standard content registration paradigm to RDF data, also including a delegation mode that enables a system to register (endorse) externally managed content. The locally managed RDF is exposed on a SPARQL endpoint. The registry implementation enables a flexible interaction pattern to support various specific content publication workflows, with the key feature of making the content externally accessible through a standard interface alongside its history, previous versions, and status. SPARQL is the standard low-level API for RDF including SKOS. On top of this we have developed SISSvoc, a SKOS-based RESTful API. It has been used to deploy a number of vocabularies on behalf of the IUGS, ICS, NERC, OGC, the Australian Government, and CSIRO projects. Applications like SISSvoc Search provide a simple search UI on top of one or more SISSvoc sources. Together, these components provide a powerful and flexible system for providing earth science vocabularies for the community, consistent with semantic web and linked-data principles.
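The registration paradigm described for the Linked Data Registry can be sketched in miniature: every update to a registered URI is retained alongside a status, so the history and previous versions of a resource stay retrievable. This is an illustrative toy under assumed names (the class and method names below are invented for the sketch, not the LDR's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class RegisterItem:
    """One registered resource: current status plus its full version history."""
    uri: str
    status: str = "submitted"          # e.g. submitted -> stable -> superseded
    versions: list = field(default_factory=list)

class ToyRegistry:
    """Minimal content registry: updates never overwrite, they append,
    so history and status remain available for every registered URI."""
    def __init__(self):
        self._items = {}

    def register(self, uri, content, status="submitted"):
        # Create the item on first registration, then append each new version.
        item = self._items.setdefault(uri, RegisterItem(uri))
        item.versions.append(content)
        item.status = status

    def current(self, uri):
        return self._items[uri].versions[-1]

    def history(self, uri):
        return list(self._items[uri].versions)

    def status(self, uri):
        return self._items[uri].status
```

A real registry would of course store RDF behind a SPARQL endpoint rather than Python objects; the point here is only the append-and-track pattern.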
Srinivasa Rao, Mathukumalli; Swathi, Pettem; Rama Rao, Chitiprolu Anantha; Rao, K. V.; Raju, B. M. K.; Srinivas, Karlapudi; Manimanjari, Dammu; Maheswari, Mandapaka
2015-01-01
The present study features the estimation of the number of generations of the tobacco caterpillar, Spodoptera litura Fab., on the peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of daily maximum (T.max) and minimum (T.min) air temperatures from six models, viz. BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, for three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four climate periods, viz. Baseline-1975, Near future (NF)-2020, Distant future (DF)-2050 and Very distant future (VDF)-2080. It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1–2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18–22% over baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%). The remaining 14% of the variation was explained by interactions. The increased number of generations and reduced generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase with the projected rise in temperatures in future climate change periods. PMID:25671564
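The growing degree days approach that underlies these generation predictions can be illustrated with a minimal sketch: daily heat accumulation above a base temperature is summed over the season and divided by the thermal constant per generation. The base temperature and thermal constant used below are placeholder values for illustration, not the study's fitted parameters for S. litura:

```python
def degree_days(t_min, t_max, base_temp):
    """Daily growing degree days by the simple averaging method;
    days averaging below the base temperature contribute zero."""
    return max(0.0, (t_min + t_max) / 2.0 - base_temp)

def generations(daily_temps, base_temp, dd_per_generation):
    """Number of complete generations supported by a season of
    (t_min, t_max) pairs, given a thermal constant per generation."""
    total = sum(degree_days(lo, hi, base_temp) for lo, hi in daily_temps)
    return int(total // dd_per_generation)
```

Under a warmer GCM projection the daily degree-day increments grow, so the same season length supports more complete generations, which is the mechanism behind the predicted 1-2 extra generations.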
It Takes A 'Village of Partnerships' To Raise A 'Big Data Facility' In A 'Big Data World'.
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Wyborn, L. A.
2015-12-01
The National Computational Infrastructure (NCI) at the Australian National University (ANU) has collocated a priority set of national and international data assets that span a wide range of domains from climate, oceans, geophysics, environment, astronomy, bioinformatics and the social sciences. The data are located on a 10 PB High Performance Data (HPD) Node that is integrated with a High Performance Computing (HPC) facility to enable a new style of data-intensive in-situ analysis. Investigators can log in for direct access to the data collections, or use modern standards-based web services. The NCI integrated HPD/HPC facility is supported by a 'village' of partnerships. NCI itself operates as a formal partnership between the ANU and three major national scientific agencies: CSIRO, the Bureau of Meteorology (BoM) and Geoscience Australia (GA). These same agencies are also the custodians of many of the national data collections hosted at NCI, and in partnership with other collaborating national and overseas organisations have agreed to work together to develop a shared data environment and use standards that enable interoperability between the collections, rather than isolating their collections as separate entities that each agency runs independently. To effectively analyse these complex and large-volume data sets, NCI has entered into a series of partnerships with national and international agencies to provide world-class digital analytical environments that allow computational analyses to be conducted and shared. The ability for government and research to work in partnership at the NCI has been well established over the last decade, mainly with BoM, CSIRO and GA. New industry linkages are now being encouraged by revised government agendas, and these promise to foster a new series of partnerships that will increase uptake of this major government-funded infrastructure and encourage further collaboration and innovation.
Génier, François; Davis, Adrian L V
2017-01-19
At risk of committing entomological heresy, we question the identity of a dung-burying beetle species that originates from Africa and has been introduced first into Hawaii and subsequently to Australasia, North America, and South America (Fincher 1986; Edwards 2007; Noriega et al. 2010) for pasture improvement and biological control of dung-breeding flies (Waterhouse 1974; Bornemissza 1979). Under the name Onthophagus gazella (Fabricius 1787), it was the first species selected for introduction into Australia by the CSIRO Dung Beetle Project (Bornemissza 1976; Edwards 2007). Firstly, in 1968, a "tropical strain" was introduced from Hawaii where it had become established after introduction from Zimbabwe in 1957 (Markin & Yoshioka 1998). Later, after establishment of the CSIRO Dung Beetle Research Unit in Pretoria in 1970, a "cold" or "even rainfall strain" was introduced into Australia directly from South Africa (Bornemissza 1976) (even rainfall region = south coast of Eastern Cape). The species was subsequently introduced into the southern continental United States of America (Victoria County, Texas) from Hawaii (Montes de Oca & Halffter 1998) then elsewhere into southeastern and southwestern states from Hawaii and breeding colonies from Australia (Anderson & Loomis 1978). It has since expanded its range through Mexico, Central America, and the Caribbean to coastal Colombia (Kohlmann 1994; Noriega 2002; Noriega et al. 2006, 2011). Expansion of its range within central southern South America (Noriega et al. 2010) has been assisted by introductions into Brazil from the United States of America since the 1980s (Bianchin et al. 1998), and others into Venezuela and Chile (Vidaurre et al. 2008). More recently, it has been introduced into quarantine and field trials in New Zealand (Forgie et al. 2013) using individuals originating from the south coast of the Eastern Cape and Northwest Province of South Africa (S. Forgie, personal communication).
Managing Livestock Species under Climate Change in Australia
Seo, S. Niggol; McCarl, Bruce
2011-01-01
Simple Summary World communities are concerned about the impacts of a hotter and drier climate on future agriculture. By examining Australian regional livestock data on sheep, beef cattle, dairy cattle, and pigs, the authors find that livestock production will expand under such conditions. Livestock revenue per farm is expected to increase by more than 47% by 2060 under the UKMO, the GISS, and a high-warming CSIRO scenario. The existence of a threshold temperature for these species is not evident. Abstract This paper examines the vulnerabilities of the major livestock species raised in Australia to climate change using a regional livestock profile of Australia covering around 1,400 regions. The number of each species owned, the number of each species sold, and the aggregate livestock revenue across all species are examined. The four major species analyzed are sheep, beef cattle, dairy cattle, and pigs. The analysis also includes livestock products such as wool and milk. These livestock production statistics are regressed against climate, geophysical, market and household characteristics. In contrast to crop studies, the analysis finds that livestock species are resilient to a hotter and more arid climate. Under the CSIRO climate scenario in which temperature increases by 3.4 °C, livestock revenue per farm increases significantly while the number of each species owned increases by large percentages, except for dairy cattle. The precipitation reduction of about 8% in 2060 also increases the numbers of livestock species per farm household. Under both the UKMO and GISS scenarios, livestock revenue is expected to increase by around 47% while the livestock population increases by a large percentage. Livestock management may play a key role in adapting to a hot and arid climate in Australia. However, critical values of the climatic variables for the species analyzed in this paper are not obvious from the regional data. PMID:26486620
Evaluation of trapping-web designs
Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.
2005-01-01
The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study is performed to explore performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping web performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps. Trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. © CSIRO 2005.
NASA Astrophysics Data System (ADS)
Orchiston, Wayne; Slee, Bruce
During the period 1946-1961 Australia was one of the world's leading nations in radio astronomy and played a key role in its development. Much of the research was carried out at a number of different field stations and associated remote sites situated in or near Sydney which were maintained by the Commonwealth Scientific and Industrial Research Organisation's Division of Radiophysics. The best-known of these were Dover Heights, Dapto, Fleurs, Hornsby Valley and Potts Hill. At these and other field stations a succession of innovative radio telescopes was erected, and these were used by a band of young scientists—mainly men with engineering qualifications—to address a wide range of research issues, often with outstanding success.
Polarimetry of 600 pulsars from observations at 1.4 GHz with the Parkes radio telescope
NASA Astrophysics Data System (ADS)
Johnston, Simon; Kerr, Matthew
2018-03-01
Over the past 13 yr, the Parkes radio telescope has observed a large number of pulsars using digital filter bank backends with high time and frequency resolution and the capability for Stokes recording. Here, we use archival data to present polarimetry data at an observing frequency of 1.4 GHz for 600 pulsars with spin-periods ranging from 0.036 to 8.5 s. We comment briefly on some of the statistical implications from the data and highlight the differences between pulsars with high and low spin-down energy. The data set, images and table of properties for all 600 pulsars are made available in a public data archive maintained by the CSIRO.
Pearce, Mark A
2015-08-01
EBSDinterp is a graphical user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
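The neighbour-prioritised, BC-gated fill described above can be sketched as follows. This is a simplified pure-Python illustration of the idea (modal 4-neighbour fill, pixels below the BC threshold never filled, most-constrained pixels filled first), not EBSDinterp's MATLAB implementation, and it omits the later iterations that lower the BC threshold:

```python
def interpolate(orientation, band_contrast, bc_threshold, max_neighbours=4):
    """Fill non-indexed pixels (orientation None) with the most common
    orientation among indexed 4-neighbours. Pixels whose band contrast
    is below bc_threshold are excluded from interpolation. The required
    neighbour count starts high and is relaxed round by round, so the
    best-constrained pixels (grain interiors) are filled first."""
    rows, cols = len(orientation), len(orientation[0])
    grid = [row[:] for row in orientation]
    for required in range(max_neighbours, 0, -1):
        changed = True
        while changed:                      # repeat until this round converges
            changed = False
            for r in range(rows):
                for c in range(cols):
                    if grid[r][c] is not None or band_contrast[r][c] < bc_threshold:
                        continue            # already indexed, or low-BC: skip
                    neigh = [grid[nr][nc]
                             for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                             if 0 <= nr < rows and 0 <= nc < cols
                             and grid[nr][nc] is not None]
                    if len(neigh) >= required:
                        # adopt the modal neighbouring orientation
                        grid[r][c] = max(set(neigh), key=neigh.count)
                        changed = True
    return grid
```

For example, a non-indexed pixel surrounded by one grain is filled with that grain's orientation, while a non-indexed pixel with low band contrast is left empty, mimicking the "grains cannot grow into low-BC areas" constraint.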
NASA Astrophysics Data System (ADS)
Jex, C.; Phipps, S. J.; Baker, A.; Bradley, C.; Scholz, D.
2012-12-01
Speleothem δ18O (δ18Ospel) is arguably one of the best proxies for understanding seasonal groundwater recharge dynamics on all timescales, and therefore for inferring past changes in regional hydroclimate. Statistical relationships between δ18Ospel and the amount of seasonally effective precipitation or its isotopic composition may be demonstrated at cave sites where there is a reliable seasonally distinct composition of δ18O of precipitation (δ18Opptn). This is often the case where recharge is driven by spring snow-melt, seasonal soil moisture excess, or in monsoonal regimes with distinct changes in moisture source. We suggest that there are also three main areas of uncertainty that need to be addressed with any individual record of δ18Ospel. Here we present the results of a multi-model-proxy comparison using a published record of δ18Ospel from Turkey that has grown over the last 500 years in order to quantify these three main areas of uncertainty. First, we assess the stability of previously observed relationships between local climate parameters and regional circulation dynamics over the last 1ka using the CSIRO Mk3L climate system model [Phipps et al., 2011] in order to estimate the variability of δ18Opptn that could be explained by internal climate variability alone. Second, we estimate the variability in δ18Odw that could be explained by storage and routing of water in the karst aquifer over the last 1 ka using the temperature and precipitation output of a three-member ensemble of transient simulations and synthetic δ18Opptn for this location, to drive the KarstFor karst systems model [Baker et al., 2012]. Finally, we estimate the variability in δ18Ospel that may be attributed to kinetic fractionation processes associated with non-equilibrium CaCO3 formation for this cave system [Scholz et al., 2009]. Baker, A., C. Bradley, S. J. Phipps, M. Fischer, I. J. Fairchild, L. Fuller, C. Spötl, and C. 
Azcurra (2012), Millennial-length forward models and pseudoproxies of stalagmite δ18O: an example from NW Scotland, Clim. Past Discuss, 8, 869-907. Phipps, S. J., L. D. Rotstayn, H. B. Gordon, J. L. Roberts, A. C. Hirst, and W. F. Budd (2011), The CSIRO Mk3L climate system model version 1.0 - Part 1: Description and evaluation, Geoscientific Model Development, 4, 483-509. Scholz, D., C. Mühlinghaus, and A. Mangini (2009), Modelling δ13C and δ18O in the solution layer on stalagmite surfaces, Geochimica et Cosmochimica Acta, 73(9), 2592-2602.
NASA Astrophysics Data System (ADS)
Gao, Xiang; Du, Jia; Zhang, Ting; Jay Guo, Y.; Foley, Cathy P.
2017-11-01
This paper presents a systematic investigation of a broadband thin-film antenna-coupled high-temperature superconducting (HTS) terahertz (THz) harmonic mixer at relatively high operating temperatures from 40 to 77 K. The mixer device chip was fabricated using CSIRO's established step-edge YBa2Cu3O7-x (YBCO) Josephson junction technology, packaged in a well-designed module and cooled in a temperature-adjustable cryocooler. Detailed experimental characterizations were carried out for the broadband HTS mixer at both the 200 and 600 GHz bands in harmonic mixing mode. The DC current-voltage characteristics (IVCs), bias current condition, local oscillator (LO) power requirement, frequency response, and conversion efficiency under different bath temperatures were thoroughly investigated to demonstrate the frequency down-conversion performance.
Vocabulary services to support scientific data interoperability
NASA Astrophysics Data System (ADS)
Cox, Simon; Mills, Katie; Tan, Florence
2013-04-01
Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces: * the vocabulary as a whole - at an ontology URI corresponding to a vocabulary document * each item in the vocabulary - at the item URI * summaries, subsets, and resources derived by transformation * through the standard RDF web API - i.e. a SPARQL endpoint * through a query form for human users. However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 [1] accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc). SISSvoc3 thus provides a RESTful SKOS API to query a vocabulary while hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA) [2], which connects to a SPARQL endpoint. By using LDA, we also get content-negotiation, alternative views, paging, metadata and other functionality provided in a standard way.
A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM) and their collaborators using SISSvoc3, including: * geologic timescale (multiple versions) * soils classification * definitions from OGC standards * geosciml vocabularies * mining commodities * hyperspectral scalars Several other agencies in Australia have adopted SISSvoc3 for their vocabularies. SISSvoc3 differs from other SKOS-based vocabulary-access APIs such as GEMET [3] and NVS [4] in that (a) the service is decoupled from the content store, and (b) the service URI is independent of the content URIs. This means that a SISSvoc3 interface can be deployed over any SKOS vocabulary which is available at a SPARQL endpoint. As an example, a SISSvoc3 query and presentation interface has been deployed over the NERC vocabulary service hosted by the BODC, providing a search interface which is not available natively. We use vocabulary services to populate menus in user interfaces, to support data validation, and to configure data conversion routines. Related services built on LDA have also been used as a generic registry interface, and extended for serving gazetteer information. ACKNOWLEDGEMENTS The CSIRO SISSvoc3 implementation is built using the Epimorphics ELDA platform http://code.google.com/p/elda/. We thank Jacqui Githaiga and Terry Rankine for their contributions to SISSvoc design and implementation. REFERENCES 1. SISSvoc3 Specification https://www.seegrid.csiro.au/wiki/Siss/SISSvoc30Specification 2. Linked Data API http://code.google.com/p/linked-data-api/wiki/Specification 3. GEMET https://svn.eionet.europa.eu/projects/Zope/wiki/GEMETWebServiceAPI 4. NVS 2.0 http://vocab.nerc.ac.uk/
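The SKOS data model that SISSvoc3-style services expose can be illustrated with a minimal in-memory sketch: concepts keyed by URI, carrying skos:prefLabel and skos:altLabel, linked by skos:broader, with skos:narrower answered by inverting the broader links. The URIs and labels below are invented examples, and a real deployment would query a SPARQL endpoint rather than a Python dict:

```python
# Tiny SKOS-like concept scheme (illustrative example.org URIs, not a real service).
concepts = {
    "http://example.org/era/mesozoic": {
        "prefLabel": "Mesozoic",
        "altLabel": ["Age of Reptiles"],
        "broader": [],
    },
    "http://example.org/era/jurassic": {
        "prefLabel": "Jurassic",
        "altLabel": [],
        "broader": ["http://example.org/era/mesozoic"],
    },
}

def narrower(uri):
    """Answer a skos:narrower query by inverting the skos:broader links."""
    return [u for u, props in concepts.items() if uri in props["broader"]]

def search(label):
    """Case-insensitive label lookup over prefLabel and altLabel, in the
    spirit of a SISSvoc label-selection URI template."""
    label = label.lower()
    return [u for u, props in concepts.items()
            if props["prefLabel"].lower() == label
            or label in (a.lower() for a in props["altLabel"])]
```

The design point this mirrors is SISSvoc3's decoupling: the query functions only assume SKOS semantics, so they work over any conforming vocabulary regardless of where the content is stored.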
NASA Astrophysics Data System (ADS)
Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.
2012-03-01
Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs with a dynamic rather than static climatological maximum based on the previous 100 years, for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
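A degree-heating accumulation of the kind used for the DHM estimates can be sketched as below. The four-month window and the accumulation rule are assumptions patterned on NOAA-style degree-heating metrics, not necessarily this study's exact formulation; `clim_max` stands for the climatological maximum and `threshold` for either the fixed 1°C value or a variability-based 2σ value:

```python
def degree_heating_months(monthly_sst, clim_max, threshold, window=4):
    """Accumulated thermal stress over the most recent `window` months:
    sum the SST anomalies above the climatological maximum, counting only
    months whose anomaly reaches the chosen stress threshold."""
    recent = monthly_sst[-window:]
    return sum(t - clim_max for t in recent if t - clim_max >= threshold)
```

The two thresholds the study compares plug straight into the same accumulator: a fixed 1°C, or a per-cell 2σ of the climatology, which is why reefs in low-variability regions (small 2σ) flag as stressed sooner under the variability-based choice.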
Presence in Video-Mediated Interactions: Case Studies at CSIRO
NASA Astrophysics Data System (ADS)
Alem, Leila
Although telepresence and a sense of connectedness with others are frequently mentioned in media space studies, as far as we know, none of these studies report attempts at assessing this critical aspect of user experience. While some attempts have been made to measure presence in virtual reality or augmented reality (a comprehensive review of existing measures is available in Baren and Ijsselsteijn [2004]), very little work has been reported in measuring presence in video-mediated collaboration systems. Traditional studies of video-mediated collaboration have mostly focused their evaluation on measures of task performance and user satisfaction. Videoconferencing systems can be seen as a type of media space; they rely on technologies of audio, video, and computing put together to create an environment extending the embodied mind. This chapter reports on a set of video-mediated collaboration studies conducted at CSIRO in which different aspects of presence are investigated. The first study reports the sense of physical presence a specialist doctor experiences when engaged in a remote consultation of a patient using the virtual critical care unit (Alem et al., 2006). The Viccu system is an "always-on" system connecting two hospitals (Li et al., 2006). The presence measure focuses on the extent to which users of videoconferencing systems feel physically present in the remote location. The second study reports the sense of social presence users experience when playing a game of charades with remote partners using a video conference link (Kougianous et al., 2006). In this study the presence measure focuses on the extent to which users feel connected with their remote partners. The third study reports the sense of copresence users experience when collaboratively building a Lego model (Melo and Alem, 2007). The sense of copresence is the extent to which users feel present with their remote partner.
In this final study the sense of copresence is investigated by looking at the words used by users when referring to the physical objects they are manipulating during their interaction, as well as when referring to locations in the collaborative workspace. We believe that such efforts provide a solid stepping stone for evaluating and analyzing future media spaces.
The atmospheric boundary layer in the CSIRO global climate model: simulations versus observations
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Rotstayn, L. D.; Krummel, P. B.
2002-07-01
A 5-year simulation of the atmospheric boundary layer in the CSIRO global climate model (GCM) is compared with detailed boundary-layer observations at six locations, two over the ocean and four over land. Field observations, in the form of surface fluxes and vertical profiles of wind, temperature and humidity, are generally available for each hour over periods of one month or more in a single year. GCM simulations are for specific months corresponding to the field observations, for each of five years. At three of the four land sites (two in Australia, one in south-eastern France), modelled rainfall was close to the observed climatological values, but was significantly in deficit at the fourth (Kansas, USA). Observed rainfall during the field expeditions was close to climatology at all four sites. At the Kansas site, modelled screen temperatures (Tsc), diurnal temperature amplitude and sensible heat flux (H) were significantly higher than observed, with modelled evaporation (E) much lower. At the other three land sites, there is excellent correspondence between the diurnal amplitude and phase and absolute values of each variable (Tsc, H, E). Mean monthly vertical profiles for specific times of the day show strong similarities: over land and ocean in vertical shape and absolute values of variables, and in the mixed-layer and nocturnal-inversion depths (over land) and the height of the elevated inversion or height of the cloud layer (over the sea). Of special interest is the climatological presence of early-morning humidity inversions related to dewfall and of nocturnal low-level jets; such features are found in the GCM simulations. The observed day-to-day variability in vertical structure is captured well in the model for most sites, including, over a whole month, the temperature range at all levels in the boundary layer, and the mix of shallow and deep mixed layers.
Weaknesses or unrealistic structure include the following: (a) unrealistic model mixed-layer temperature profiles over land in clear skies, related to the use of a simple local first-order turbulence closure, and (b) a tendency to overpredict cloud liquid water near the surface.
Enabling Open Research Data Discovery through a Recommender System
NASA Astrophysics Data System (ADS)
Devaraju, Anusuriya; Jayasinghe, Gaya; Klump, Jens; Hogan, Dominic
2017-04-01
Government agencies, universities, research and nonprofit organizations are increasingly publishing their datasets to promote transparency, induce new research and generate economic value through the development of new products or services. The datasets may be downloaded from various data portals (data repositories) which are general or domain-specific. The Registry of Research Data Repositories (re3data.org) lists more than 2500 such data repositories from around the globe. Data portals allow keyword search and faceted navigation to facilitate discovery of research datasets. However, the volume and variety of datasets have made finding relevant datasets more difficult. Common dataset search mechanisms may be time-consuming, may produce irrelevant results and are primarily suitable for users who are familiar with the general structure and contents of the respective database. Therefore, we need new approaches to support research data discovery. Recommender systems offer new possibilities for users to find datasets that are relevant to their research interests. This study presents a recommender system developed for the CSIRO Data Access Portal (DAP, http://data.csiro.au). The datasets hosted on the portal are diverse, published by researchers from 13 business units in the organisation. The goal of the study is not to replace the current search mechanisms on the data portal, but rather to extend data discovery through an exploratory search, in this case by building a recommender system. We adopted a hybrid recommendation approach, comprising content-based filtering and item-item collaborative filtering. The content-based filtering computes similarities between datasets based on metadata such as title, keywords, descriptions, fields of research, location, contributors, etc. The collaborative filtering utilizes user search behaviour and download patterns derived from the server logs to determine similar datasets.
The similarities above are then combined with different degrees of importance (weights) to determine the overall dataset similarity. We determined the similarity weights based on a survey involving 150 users of the portal. The recommender results for a given dataset are accessible programmatically via a RESTful web service. An offline evaluation involving data users demonstrates the ability of the recommender system to discover relevant and 'novel' datasets.
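The weighted blend of content-based and collaborative similarity can be sketched as follows. The cosine measure and the 0.6/0.4 weights below are illustrative assumptions (the study derived its actual weights from the user survey), and the vectors stand in for metadata features and co-access counts respectively:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_similarity(content_vecs, usage_vecs, a, b,
                      w_content=0.6, w_usage=0.4):
    """Hybrid dataset similarity: weighted blend of content-based
    similarity (metadata feature vectors) and collaborative similarity
    (co-access vectors from server logs) for datasets a and b."""
    return (w_content * cosine(content_vecs[a], content_vecs[b])
            + w_usage * cosine(usage_vecs[a], usage_vecs[b]))
```

Ranking all candidate datasets by `hybrid_similarity` against a seed dataset then yields the recommendation list; the blend lets log-derived signals surface "novel" items that pure metadata matching would miss.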
PULSE@Parkes, Engaging Students through Hands-On Radio Astronomy
NASA Astrophysics Data System (ADS)
Hollow, Robert; Hobbs, George; Shannon, Ryan M.; Kerr, Matthew
2015-08-01
PULSE@Parkes is an innovative, free educational program run by CSIRO Astronomy and Space Science (CASS) in which high school students use the 64m Parkes radio telescope remotely in real time to observe pulsars and then analyse their data. The program caters for a range of student abilities and introduces students to hands-on observing and radio astronomy. Students are guided by professional astronomers, educators and PhD students during an observing session. They have ample time to interact with the scientists and discuss astronomy, careers and general scientific questions. Students use a web-based module to analyse pulsar properties. All data from the program are streamed via a web browser and are freely available from the online archive and may be used for open-ended student investigations. The data are also used by the team for ongoing pulsar studies, with two scientific papers published to date. Over 100 sessions have been held so far. Most sessions are held at CASS headquarters in Sydney, Australia, but other sessions are regularly held in other states with partner institutions. The flexibility of the program means that it is also possible to run sessions in other countries. This aspect of the program is useful for demonstrating capability, engaging students in diverse settings and fostering collaborations. The use of Twitter (@pulseatparkes) during sessions allows followers worldwide to participate and ask questions. Two tours of Japan plus sessions in the UK, Netherlands and Canada have reached a wide audience. Plans for collaborations in China are well underway, with the possibility of sessions in other countries also being explored. The program has also been successfully used in helping to train international graduate students via the International Pulsar Timing Array Schools.
We have identified strong demand and need for programs such as this for training undergraduate students in Asia and North America in observing and data analysis techniques, so one area of planned development is teaching materials and a package for students at this level. The program has also been used to inform the development of educational programs for new telescopes such as the Australian SKA Pathfinder (ASKAP) and the SKA. http://pulseatparkes.atnf.csiro.au/
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed for integration with global climate models, are of very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km². The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting.
This paper introduces this newly developed river system model describing the conceptual hydrological framework, methods used for representing different hydrological processes in the model and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
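As a minimal illustration of the node-link architecture described above, each river reach can be treated as a linear storage routed in series. The storage constants and series topology are illustrative assumptions; the model's irrigation, floodplain inundation and groundwater components are not shown.

```python
# Minimal node-link routing sketch: each reach is a linear reservoir whose
# outflow is storage / k. Values of k are illustrative only.

def route_reach(inflows, k, dt=1.0, s0=0.0):
    """Route a time series of inflows through one linear-storage reach."""
    s, outflows = s0, []
    for q_in in inflows:
        q_out = s / k                     # linear storage-outflow relation
        s += (q_in - q_out) * dt          # update reach storage
        outflows.append(q_out)
    return outflows

def route_network(reach_constants, headwater_inflow):
    """Route flow through reaches connected in series, upstream to downstream."""
    q = headwater_inflow
    for k in reach_constants:
        q = route_reach(q, k)
    return q
```

A real node-link model would attach extraction, inflow and loss terms at each node; an optimiser such as Shuffled Complex Evolution would then adjust parameters like `k` against observed flows.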
Communications During Critical Mission Operations: Preparing for InSight's Landing on Mars
NASA Technical Reports Server (NTRS)
Asmar, Sami; Oudrhiri, Kamal; Kurtik, Susan; Weinstein-Weiss, Stacy
2014-01-01
Radio communications with deep space missions are often taken for granted because of their impressively successful record: for decades, technology and infrastructure have been developed for ground and flight systems to optimize telemetry and commanding. During mission-critical events such as the entry, descent, and landing of a spacecraft on the surface of Mars, the signal's level and frequency dynamics vary significantly and typically exceed the threshold of the budgeted links. The challenge is increased when spacecraft shed antennas with heat shields and other hardware during those risky few minutes. We have in the past successfully received signals on Earth during critical events, even ones not intended for ground reception. These included the UHF signal transmitted by Curiosity to Mars-orbiting assets. Since NASA's Deep Space Network does not operate in the UHF band, large radio telescopes around the world are utilized. The Australian CSIRO Parkes Radio Telescope supported the Curiosity UHF signal reception, and DSN receivers, tools, and expertise were used in the process. In preparation for the InSight mission's landing on Mars in 2016, work is underway to support the UHF communications. This paper presents communication scenarios with radio telescopes, and the DSN receiver and tools. It also discusses the usefulness of the real-time information content for better response time by the mission team towards successful mission operations.
Metabolic engineering of plant oils and waxes for use as industrial feedstocks.
Vanhercke, Thomas; Wood, Craig C; Stymne, Sten; Singh, Surinder P; Green, Allan G
2013-02-01
Society has come to rely heavily on mineral oil for both energy and petrochemical needs. Plant lipids are uniquely suited to serve as a renewable source of high-value fatty acids for use as chemical feedstocks and as a substitute for current petrochemicals. Despite the broad variety of acyl structures encountered in nature and the cloning of many genes involved in their biosynthesis, attempts at engineering economic levels of specialty industrial fatty acids in major oilseed crops have so far met with only limited success. Much of the progress has been hampered by an incomplete knowledge of the fatty acid biosynthesis and accumulation pathways. This review covers new insights based on metabolic flux and reverse engineering studies that have changed our view of plant oil synthesis from a mostly linear process to instead an intricate network with acyl fluxes differing between plant species. These insights are leading to new strategies for high-level production of industrial fatty acids and waxes. Furthermore, progress in increasing the levels of oil and wax structures in storage and vegetative tissues has the potential to yield novel lipid production platforms. The challenge and opportunity for the next decade will be to marry these technologies when engineering current and new crops for the sustainable production of oil and wax feedstocks. © 2012 CSIRO Plant Biotechnology Journal © 2012 Society for Experimental Biology, Association of Applied Biologists and Blackwell Publishing Ltd.
Vanhercke, Thomas; El Tahchy, Anna; Liu, Qing; Zhou, Xue-Rong; Shrestha, Pushkar; Divi, Uday K; Ral, Jean-Philippe; Mansour, Maged P; Nichols, Peter D; James, Christopher N; Horn, Patrick J; Chapman, Kent D; Beaudoin, Frederic; Ruiz-López, Noemi; Larkin, Philip J; de Feyter, Robert C; Singh, Surinder P; Petrie, James R
2014-02-01
High biomass crops have recently attracted significant attention as an alternative platform for the renewable production of high energy storage lipids such as triacylglycerol (TAG). While TAG typically accumulates in seeds as storage compounds fuelling subsequent germination, levels in vegetative tissues are generally low. Here, we report the accumulation of more than 15% TAG (17.7% total lipids) by dry weight in Nicotiana tabacum (tobacco) leaves by the co-expression of three genes involved in different aspects of TAG production without severely impacting plant development. These yields far exceed the levels found in wild-type leaf tissue as well as previously reported engineered TAG yields in vegetative tissues of Arabidopsis thaliana and N. tabacum. When translated to a high biomass crop, the current levels would translate to an oil yield per hectare that exceeds those of most cultivated oilseed crops. Confocal fluorescence microscopy and mass spectrometry imaging confirmed the accumulation of TAG within leaf mesophyll cells. In addition, we explored the applicability of several existing oil-processing methods using fresh leaf tissue. Our results demonstrate the technical feasibility of a vegetative plant oil production platform and provide for a step change in the bioenergy landscape, opening new prospects for sustainable food, high energy forage, biofuel and biomaterial applications. © 2013 CSIRO. Plant Biotechnology Journal published by Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
Breakthrough Listen - A New Search for Life in the Universe
NASA Astrophysics Data System (ADS)
Worden, Pete
On July 20, 2015 Yuri Milner and Stephen Hawking announced a new set of scientific initiatives - a SETI search called Breakthrough Listen and a contest to devise potential messages in response to a detection entitled Breakthrough Message. These are the first of several privately-funded Breakthrough Initiatives, designed to answer the fundamental science questions surrounding the origin, extent and nature of life in the universe. The initiatives are managed by the Breakthrough Prize Foundation. With Breakthrough Listen, Radio SETI observations have begun at the Green Bank Radio Telescope (GBT) and optical SETI at the Lick Observatory Automated Planet Finder (APF). Observations will soon commence at the CSIRO Parkes Radio Telescope. Other SETI instruments and observations are under consideration. In addition, several other initiatives are under development including an expanded search for life in the universe.
Cen A Radio Optical Gamma Composite
2017-12-08
NASA release April 1, 2010 It takes the addition of radio data (orange) to fully appreciate the scale of Cen A's giant radio-emitting lobes, which stretch more than 1.4 million light-years. Gamma-rays from Fermi's Large Area Telescope (purple) and an image of the galaxy in visible light are also included in this composite. Credit: NASA/DOE/Fermi LAT Collaboration, Capella Observatory, and Ilana Feain, Tim Cornwell, and Ron Ekers (CSIRO/ATNF), R. Morganti (ASTRON), and N. Junkes (MPIfR) To learn more about these images go to: www.nasa.gov/mission_pages/GLAST/news/smokestack-plumes.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C.G.; De Geronimo, G.; Kirkham, R.
2009-11-13
The fundamental parameter method for quantitative SXRF and PIXE analysis and imaging using the dynamic analysis method is extended to model the changing X-ray yields and detector sensitivity with angle across large detector arrays. The method is implemented in the GeoPIXE software and applied to cope with the large solid-angle of the new Maia 384 detector array and its 96 detector prototype developed by CSIRO and BNL for SXRF imaging applications at the Australian and NSLS synchrotrons. Peak-to-background is controlled by mitigating charge-sharing between detectors through careful optimization of a patterned molybdenum absorber mask. A geological application demonstrates the capability of the method to produce high definition elemental images up to ~100 M pixels in size.
Astronomers Discover Clue to Origin of Milky Way Gas Clouds
NASA Astrophysics Data System (ADS)
2010-05-01
A surprising discovery that hydrogen gas clouds found in abundance in and above our Milky Way Galaxy have preferred locations has given astronomers a key clue about the origin of such clouds, which play an important part in galaxy evolution. "We've concluded that these clouds are gas that has been blown away from the Galaxy's plane by supernova explosions and the fierce winds from young stars in areas of intense star formation," said H. Alyson Ford of the University of Michigan, whose Ph.D. thesis research from Swinburne University formed the basis for this result. The team, consisting of Ford and collaborators Felix J. Lockman, of the National Radio Astronomy Observatory (NRAO), and Naomi McClure-Griffiths of CSIRO Astronomy and Space Science, presented their findings to the American Astronomical Society's meeting in Miami, Florida. The astronomers studied gas clouds in two distinct regions of the Galaxy. The clouds they studied are between 400 and 15,000 light-years outside the disk-like plane of the Galaxy. The disk contains most of the Galaxy's stars and gas, and is surrounded by a "halo" of gas more distant than the clouds the astronomers studied. "These clouds were first detected with the National Science Foundation's Robert C. Byrd Green Bank Telescope, and are quite puzzling. They are in a transitional area between the disk and the halo, and their origin has been uncertain," Lockman explained. The research team used data from the Galactic All-Sky Survey, made with CSIRO's Parkes radio telescope in Australia. When the astronomers compared the observations of the two regions, they saw that one region contained three times as many hydrogen clouds as the other. In addition, that region's clouds are, on average, twice as far above the Galaxy's plane. The dramatic difference, they believe, is because the region with more clouds lies near the tip of the Galaxy's central "bar," where the bar merges with a major spiral arm.
This is an area of intense star formation, containing many young stars whose strong winds can propel gas away from the region. The most massive stars also will explode as supernovae, blasting material outward. In the other region they studied, star formation activity is more sparse. "The properties of these clouds show clearly that they originated as part of the Milky Way's disk, and are a major component of our Galaxy. Understanding these clouds is important in understanding how material moves between the Galaxy's disk and its halo, a critical process in the evolution of galaxies," Lockman said. The clouds consist of neutral hydrogen gas, with an average mass equal to that of about 700 Suns. Their sizes vary greatly, but most are about 200 light-years across. The astronomers studied about 650 such clouds in the two widely-separated regions of the Galaxy.
Smith, W.K.
1982-01-01
The mathematical method of determining in-situ stresses by overcoring, using either the U.S. Bureau of Mines Borehole Deformation Gage or the Commonwealth Scientific and Industrial Research Organisation Hollow Inclusion Stress Cell, is summarized, and data reduction programs for each type of instrument, written in BASIC, are presented. The BASIC programs offer several advantages over previously available FORTRAN programs. They can be executed on a desk-top microcomputer at or near the field site, allowing the investigator to assess the quality of the data and make decisions on the need for additional testing while the crew is still in the field. Also, data input is much simpler than with currently available FORTRAN programs; either English or SI units can be used; and standard deviations of the principal stresses are computed as well as those of the geographic components.
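The final step of such a data reduction program, recovering the principal stresses and their directions from the computed in-situ stress tensor, can be sketched in Python rather than BASIC. The tensor below is an illustrative example, not field data; the preceding strain-to-tensor reduction specific to each gage is omitted.

```python
# Sketch of the last stage of overcoring data reduction: eigendecomposition
# of a symmetric 3x3 in-situ stress tensor into principal stresses.
import numpy as np

def principal_stresses(sigma):
    """Return principal stresses (descending) and their direction vectors."""
    vals, vecs = np.linalg.eigh(np.asarray(sigma, dtype=float))
    order = np.argsort(vals)[::-1]        # sigma_1 >= sigma_2 >= sigma_3
    return vals[order], vecs[:, order]
```

The standard deviations reported by the BASIC programs would come from propagating the measurement uncertainties through this reduction, which is not shown here.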
Spatiotemporal Interpolation for Environmental Modelling
Susanto, Ferry; de Souza, Paulo; He, Jing
2016-01-01
A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently from the spatial dimensions, is proposed in this paper. We reviewed and compared three widely-used spatial interpolation techniques: ordinary kriging, inverse distance weighting (IDW) and the triangular irregular network. We also proposed a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of output from Tasmania's South Esk hydrology model, developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI. However, the proposed DDW provides little benefit compared to the conventional IDW method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
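As a point of reference for the methods compared above, conventional inverse distance weighting can be sketched in a few lines. The power parameter p = 2 is a common default, not necessarily the value used in the study.

```python
# Minimal inverse distance weighting (IDW) spatial interpolation sketch.
import math

def idw(x, y, samples, p=2):
    """Interpolate a value at (x, y) from [(xi, yi, vi), ...] sample points."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return vi                     # exact hit on a sample point
        w = 1.0 / d ** p                  # weight decays with distance
        num += w * vi
        den += w
    return num / den
```

In the reduction approach described, a scheme like this would be applied per time step, with the temporal dimension handled separately.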
Atmospheric CO2 Concentrations from Aircraft for 1972-1981, CSIRO Monitoring Program
Beardsmore, David J. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Victoria, Australia; Pearman, Graeme I. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Victoria, Australia
2012-01-01
From 1972 through 1981, air samples were collected in glass flasks from aircraft at a variety of latitudes and altitudes over Australia, New Zealand, and Antarctica. The samples were analyzed for CO2 concentrations with nondispersive infrared gas analysis. The resulting data contain the sampling dates, type of aircraft, flight number, flask identification number, sampling time, geographic sector, distance in kilometers from the listed distance measuring equipment (DME) station, station number of the radio navigation distance measuring equipment, altitude of the aircraft above mean sea level, sample analysis date, flask pressure, tertiary standards used for the analysis, analyzer used, and CO2 concentration. These data represent the first published record of CO2 concentrations in the Southern Hemisphere expressed in the WMO 1981 CO2 Calibration Scale and provide a precise record of atmospheric CO2 concentrations in the troposphere and lower stratosphere over Australia and New Zealand.
NASA Astrophysics Data System (ADS)
Early on September 28, 1993 our friend and colleague, Ian Moore passed away after a brief but courageous fight with cancer. Ian was born in Melbourne, Australia. He obtained his Bachelor's degree in Civil Engineering (with honors) in 1973 and his Master of Engineering Science in Civil Engineering in 1975, both from Monash University. After completing his Ph.D. in Agricultural Engineering at the University of Minnesota in 1979, he joined the Department of Agricultural Engineering at the University of Kentucky, Lexington, as an Assistant Professor. In 1983 he returned with his family to Australia to work as a Senior Research Scientist in the Canberra Laboratory of the then CSIRO Division of Water and Land Resources as a hydrologist in the Physical Hydrology and Water Quality Program. He left the Canberra Laboratory in 1986 for an appointment as an Assistant Professor in the Department of Agricultural Engineering at the University of Minnesota, where he was promoted to Associate Professor in 1989.
Best-Ever Snapshot of a Black Hole's Jets
2017-12-08
NASA image release May 20, 2011 To see a really cool video related to this image go here: www.flickr.com/photos/gsfc/5740451675/in/photostream The giant elliptical galaxy NGC 5128 is the radio source known as Centaurus A. Vast radio-emitting lobes (shown as orange in this optical/radio composite) extend nearly a million light-years from the galaxy. Credit: Capella Observatory (optical), with radio data from Ilana Feain, Tim Cornwell, and Ron Ekers (CSIRO/ATNF), R. Morganti (ASTRON), and N. Junkes (MPIfR). To read more go to: www.nasa.gov/topics/universe/features/radio-particle-jets... NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.
Oceanography and Yacht Racing - A Handful of Competitors, Millions of Spectators
NASA Astrophysics Data System (ADS)
Griffin, D.; Cresswell, G.; Badcock, K.; Cahill, M.; Rathbone, C.; Turner, P.
2006-07-01
Satellite altimeter measurements of sea level have proven to be far more accurate, and useful, than was hoped for when the missions were designed, especially when data from several instruments are combined. In this regard, the experimental missions (ERS-1 and 2, TOPEX/Poseidon, Jason-1 and GFO) have all been a resounding success. Why, then, are there not plans already in place to continue and improve on the recent missions? One reason is surely that end-user uptake of the mission products has not yet convincingly justified the costs of future missions. At CSIRO we sought to maximise awareness, amongst all marine sectors, that mapping ocean currents with sufficient accuracy and detail for operational use is indeed possible, so that the societal benefits of the system would become clear as quickly as possible. We did this using a well-known marketing tool - sport.
The Australia Telescope search for cosmic microwave background anisotropy
NASA Astrophysics Data System (ADS)
Subrahmanyan, Ravi; Kesteven, Michael J.; Ekers, Ronald D.; Sinclair, Malcolm; Silk, Joseph
1998-08-01
In an attempt to detect cosmic microwave background (CMB) anisotropy on arcmin scales, we have made an 8.7-GHz image of a sky region with a resolution of 2 arcmin and high surface brightness sensitivity using the Australia Telescope Compact Array (ATCA) in an ultracompact configuration. The foreground discrete-source confusion was estimated from observations with higher resolution at the same frequency and in a scaled array at a lower frequency. Following the subtraction of the foreground confusion, the field shows no features in excess of the instrument noise. This limits the CMB anisotropy flat-band power to Q_flat < 23.6 μK with 95 per cent confidence; the ATCA filter function F_l (available at www.atnf.csiro.au/Research/cmbr/cmbr_atca.html) in multipole l-space peaks at l_eff = 4700 and has half-maximum values at l = 3350 and 6050.
Fast-neutron/gamma-ray radiography scanner for the detection of contraband in air cargo containers
NASA Astrophysics Data System (ADS)
Eberhardt, J.; Liu, Y.; Rainey, S.; Roach, G.; Sowerby, B.; Stevens, R.; Tickner, J.
2006-05-01
There is a worldwide need for efficient inspection of cargo containers at airports, seaports and road border crossings. The main objectives are the detection of contraband such as illicit drugs, explosives and weapons. Due to the large volume of cargo passing through Australia's airports every day, it is critical that any scanning system should be capable of working on unpacked or consolidated cargo, taking at most 1-2 minutes per container. CSIRO has developed a fast-neutron/gamma-ray radiography (FNGR) method for the rapid screening of air freight. By combining radiographs obtained using 14 MeV neutrons and 60Co gamma-rays, high resolution images showing both density and material composition are obtained. A near full-scale prototype scanner has been successfully tested in the laboratory. With the support of the Australian Customs Service, a full-scale scanner has recently been installed and commissioned at Brisbane International Airport.
Assessing climate change beliefs: Response effects of question wording and response alternatives.
Greenhill, Murni; Leviston, Zoe; Leonard, Rosemary; Walker, Iain
2014-11-01
To date, there is no 'gold standard' on how to best measure public climate change beliefs. We report a study (N = 897) testing four measures of climate change causation beliefs, drawn from four sources: the CSIRO, Griffith University, the Gallup poll, and the Newspoll. We found that question wording influences the outcome of beliefs reported. Questions that did not allow respondents to choose the option of believing in an equal mix of natural and anthropogenic climate change obtained different results to those that included the option. Age and belief groups were found to be important predictors of how consistent people were in reporting their beliefs. Response consistency gave some support to past findings suggesting climate change beliefs reflect something deeper in the individual belief system. Each belief question was assessed against five criterion variables commonly used in climate change literature. Implications for future studies are discussed. © The Author(s) 2013.
The impact of climate change on hailstorms in southeastern Australia
NASA Astrophysics Data System (ADS)
Niall, Stephanie; Walsh, Kevin
2005-11-01
Data from a number of locations around southeastern Australia were analysed to determine the influence of climate change on the frequency and intensity of hail events in this region. The relationship between Convective Available Potential Energy (CAPE), frequently used as a measure of atmospheric instability, and hailstorms was investigated using both NCEP/NCAR reanalysis data (a data set comprising a blend of observations and model simulations) and also direct sounding data obtained from the Australian National Climate Centre. Two locations were chosen in southeastern Australia, Mount Gambier and Melbourne, over the months August to October for the period 1980-2001. A statistically significant relationship between hail incidence and CAPE values was established for both NCEP/NCAR and sounding data at both study sites. A stronger relationship was found between hail incidence and the CAPE calculated using NCEP/NCAR data than between hail and the CAPE from the actual sounding data. A similar analysis was also conducted at both sites using the totals-totals index (TT index), which is an alternative measure of atmospheric instability. The CSIRO Mk3 Climate System Model was used to simulate values of CAPE for Mount Gambier in an environment containing double the pre-industrial concentrations of equivalent CO2. The results showed a significant decrease in CAPE values in the future. From this, assuming the relationship between CAPE and hail remains unchanged under enhanced greenhouse conditions, it is possible that there will be a decrease in the frequency of hail in southeastern Australia if current rates of CO2 emission are sustained. The severity of future hail events was investigated using crop-loss data from insurance companies. Strongest correlations were found between the crop-loss ratio (value of crop lost to hail damage over the total insured value of crop) and the number of days in a crop season with a TT index greater than 55.
Results from the CSIRO Mk3 Climate System Model revealed that there was no significant difference between the number of days with a TT index over 55 for the simulation using current CO2 levels and that based on doubled equivalent pre-industrial CO2 concentrations (roughly equivalent to 2050 in the chosen emissions scenario). This implies that, for southeastern Australia, crop losses due to hail damage would not significantly increase under enhanced greenhouse conditions.
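For reference, the totals-totals (TT) index used above is computed from standard sounding levels as TT = T850 + Td850 - 2*T500 (temperatures in degrees Celsius), with the study counting days exceeding a threshold of 55:

```python
# Totals-totals instability index from 850 hPa temperature and dew point
# and 500 hPa temperature, all in deg C.

def totals_totals(t850_c, td850_c, t500_c):
    """TT index: vertical totals (T850 - T500) plus cross totals (Td850 - T500)."""
    return t850_c + td850_c - 2.0 * t500_c
```

Counting crop-season days with `totals_totals(...) > 55` reproduces the severity metric correlated with crop-loss ratios in the study.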
Indian Ocean Dipolelike Variability in the CSIRO Mark 3 Coupled Climate Model.
NASA Astrophysics Data System (ADS)
Cai, Wenju; Hendon, Harry H.; Meyers, Gary
2005-05-01
Coupled ocean-atmosphere variability in the tropical Indian Ocean is explored with a multicentury integration of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mark 3 climate model, which runs without flux adjustment. Despite the presence of some common deficiencies in this type of coupled model, zonal dipolelike variability is produced. During July through November, the dominant mode of variability of sea surface temperature resembles the observed zonal dipole and has out-of-phase rainfall variations across the Indian Ocean basin, which are as large as those associated with the model El Niño-Southern Oscillation (ENSO). In the positive dipole phase, cold SST anomaly and suppressed rainfall south of the equator on the Sumatra-Java coast drives an anticyclonic circulation anomaly that is consistent with the steady response (Gill model) to a heat sink displaced south of the equator. The northwest-southeast tilting Sumatra-Java coast results in cold sea surface temperature (SST) centered south of the equator, which forces anticyclonic winds that are southeasterly along the coast, which thus produces local upwelling, cool SSTs, and promotes more anticyclonic winds; on the equator, the easterlies raise the thermocline to the east via upwelling Kelvin waves and deepen the off-equatorial thermocline to the west via off-equatorial downwelling Rossby waves. The model dipole mode exhibits little contemporaneous relationship with the model ENSO; however, this does not imply that it is independent of ENSO. The model dipole often (but not always) develops in the year following El Niño. It is triggered by an unrealistic transmission of the model's ENSO discharge phase through the Indonesian passages. In the model, the ENSO discharge Rossby waves arrive at the Sumatra-Java coast some 6 to 9 months after an El Niño peaks, causing the majority of model dipole events to peak in the year after an ENSO warm event.
In the observed ENSO discharge, Rossby waves arrive at the Australian northwest coast. Thus the model Indian Ocean dipolelike variability is triggered by an unrealistic mechanism. The result highlights the importance of properly representing the transmission of Pacific Rossby waves and Indonesian throughflow in the complex topography of the Indonesian region in coupled climate models.
VRLA Ultrabattery for high-rate partial-state-of-charge operation
NASA Astrophysics Data System (ADS)
Lam, L. T.; Louey, R.; Haigh, N. P.; Lim, O. V.; Vella, D. G.; Phyland, C. G.; Vu, L. H.; Furukawa, J.; Takada, T.; Monma, D.; Kano, T.
The objective of this study is to produce and test the hybrid valve-regulated Ultrabattery designed specifically for hybrid-electric vehicle duty, i.e., high-rate partial-state-of-charge operation. The Ultrabattery developed by CSIRO Energy Technology is a hybrid energy-storage device, which combines an asymmetric supercapacitor and a lead-acid battery in one unit cell, taking the best from both technologies without the need for extra, expensive electronic controls. The capacitor will enhance the power and lifespan of the lead-acid battery as it acts as a buffer during high-rate discharging and charging. Consequently, this hybrid technology is able to provide and absorb charge rapidly during vehicle acceleration and braking. The work programme of this study is divided into two main parts, namely, field trial of prototype Ultrabatteries in a Honda Insight HEV and laboratory tests of prototype batteries. In this paper, the performance of prototype Ultrabatteries under different laboratory tests is reported. The evaluation of Ultrabatteries in terms of initial performance and cycling performance has been conducted at both CSIRO and Furukawa laboratories. The initial performance of prototype Ultrabatteries, such as capacity, power, cold cranking and self-discharge, has been evaluated based upon the US FreedomCAR Battery Test Manual (DOE/ID-11069, October 2003). Results show that the Ultrabatteries meet, or exceed, respective targets of power, available energy, cold cranking and self-discharge set for both minimum and maximum power-assist HEVs. The cycling performance of prototype Ultrabatteries has been evaluated using: (i) a simplified discharge and charge profile to simulate the driving conditions of a micro-HEV; (ii) a 42-V profile to simulate the driving conditions of a mild-HEV and (iii) EUCAR and RHOLAB profiles to simulate the driving conditions of a medium-HEV.
For comparison purposes, nickel-metal-hydride (Ni-MH) cells, which are presently used in the Honda Insight HEV, have also been subjected to some of the above profiles (i.e., the simplified discharge and charge profile and the EUCAR profile). Although the Ultrabattery and a Ni-MH cell are still cycling under the EUCAR test profile, the outcomes to date show that the cycle life of these batteries and cells has been at least four times longer than that of state-of-the-art lead-acid cells or batteries. Excitingly, the performance of the Ultrabatteries has proven comparable with that of the Ni-MH cells.
Estimating plant available water content from remotely sensed evapotranspiration
NASA Astrophysics Data System (ADS)
van Dijk, A. I. J. M.; Warren, G.; Doody, T.
2012-04-01
Plant available water content (PAWC) is an emergent soil property that is a critical variable in hydrological modelling. PAWC determines the active soil water storage and, in water-limited environments, is the main cause of different ecohydrological behaviour between (deep-rooted) perennial vegetation and (shallow-rooted) seasonal vegetation. Conventionally, PAWC is estimated for a combination of soil and vegetation from three variables: maximum rooting depth and the volumetric water content at field capacity and permanent wilting point, respectively. Without elaborate local field observation, large uncertainties in PAWC occur due to the assumptions associated with each of the three variables. We developed an alternative, observation-based method to estimate PAWC from precipitation observations and CSIRO MODIS Reflectance-based Evapotranspiration (CMRSET) estimates. Processing steps include (1) removing residual systematic bias in the CMRSET estimates, (2) making spatially appropriate assumptions about local water inputs and surface runoff losses, (3) using mean seasonal patterns in precipitation and CMRSET to estimate the seasonal pattern in soil water storage changes, (4) from these, calculating the mean seasonal storage range, which can be treated as an estimate of PAWC. We evaluate the resulting PAWC estimates against those determined in field experiments for 180 sites across Australia. We show that the method produces better estimates of PAWC than conventional techniques. In addition, the method provides detailed information with full continental coverage at moderate (250 m) resolution. The resulting maps can be used to identify likely groundwater dependent ecosystems and to derive PAWC distributions for each combination of soil and vegetation type.
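At their core, steps (3) and (4) of the method reduce to water-balance bookkeeping over the mean seasonal cycle. The sketch below illustrates that idea only, under idealised assumptions (monthly climatologies, no explicit runoff or drainage terms); the function name and numbers are hypothetical and this is not the CMRSET processing itself:

```python
import numpy as np

def pawc_from_seasonal_fluxes(precip_mm, et_mm):
    """Estimate plant available water content (mm) from mean monthly
    precipitation and evapotranspiration climatologies.

    Sketch of steps (3)-(4): the seasonal pattern of soil water storage
    change is the running sum of (P - ET), and PAWC is approximated by
    the mean seasonal storage range."""
    precip_mm = np.asarray(precip_mm, dtype=float)
    et_mm = np.asarray(et_mm, dtype=float)
    # Monthly storage change; negative where ET draws on stored water
    delta_s = precip_mm - et_mm
    # Cumulative storage over the mean seasonal cycle
    storage = np.cumsum(delta_s)
    # Mean seasonal storage range as the PAWC estimate
    return float(storage.max() - storage.min())

# Hypothetical 12-month climatology (mm/month) for illustration only
p = [10, 15, 20, 40, 80, 120, 130, 110, 60, 30, 15, 10]
et = [70, 65, 55, 45, 40, 35, 35, 40, 50, 60, 70, 75]
print(pawc_from_seasonal_fluxes(p, et))  # → 300.0
```

Deep-rooted perennial vegetation, with higher dry-season ET, yields a larger storage range than shallow-rooted seasonal vegetation on the same soil, which is the contrast the method exploits.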
NASA Astrophysics Data System (ADS)
Rotstayn, L. D.; Jeffrey, S. J.; Collier, M. A.; Dravitzki, S. M.; Hirst, A. C.; Syktus, J. I.; Wong, K. K.
2012-02-01
We use a coupled atmosphere-ocean global climate model (CSIRO-Mk3.6) to investigate the roles of different forcing agents as drivers of summer rainfall trends in the Australasian region. Our results suggest that anthropogenic aerosols have contributed to the observed multi-decadal rainfall increase over north-western Australia. As part of the Coupled Model Intercomparison Project Phase 5 (CMIP5), we performed multiple 10-member ensembles of historical climate change, which are analysed for the period 1951-2010. The historical runs include ensembles driven by "all forcings" (HIST), all forcings except anthropogenic aerosols (NO_AA) and forcing only from long-lived greenhouse gases (GHGAS). Anthropogenic aerosol-induced effects in a warming climate are calculated from the difference of HIST minus NO_AA. We also compare a 10-member 21st century ensemble driven by Representative Concentration Pathway 4.5 (RCP4.5). Simulated aerosol-induced rainfall trends over the Indo-Pacific region for austral summer and boreal summer show a distinct contrast. In boreal summer, there is a southward shift of equatorial rainfall, consistent with the idea that anthropogenic aerosols have suppressed Asian monsoonal rainfall, and caused a southward shift of the local Hadley circulation. In austral summer, the aerosol-induced response more closely resembles a westward shift and strengthening of the upward branch of the Walker circulation, rather than a coherent southward shift of regional tropical rainfall. Thus the mechanism by which anthropogenic aerosols may affect Australian summer rainfall is unclear. Focusing on summer rainfall trends over north-western Australia (NWA), we find that CSIRO-Mk3.6 simulates a strong rainfall decrease in RCP4.5, whereas simulated trends in HIST are weak and insignificant during 1951-2010. 
The weak rainfall trends in HIST are due to compensating effects of different forcing agents: there is a significant decrease in GHGAS, offset by an aerosol-induced increase in HIST minus NO_AA. However, the magnitude of the observed NWA rainfall trend is not captured by the ensemble mean of HIST minus NO_AA, or by 440 unforced 60-yr trends calculated from a 500-yr pre-industrial control run. This suggests that the observed trend includes both a forced and unforced component. We investigate the mechanism of simulated and observed NWA rainfall changes by exploring changes in circulation over the Indo-Pacific region. The key circulation feature associated with the rainfall increase is a lower-tropospheric cyclonic circulation trend off the coast of NWA. In the model, it induces moisture convergence and upward motion over NWA. The cyclonic anomaly is present in trends calculated from HIST minus NO_AA and from reanalyses. Further analysis suggests that the cyclonic circulation trend in HIST minus NO_AA may be initiated as a Rossby wave response to positive convective heating anomalies south of the equator during November, when the aerosol-induced response of the model over the Indian Ocean still resembles that in boreal summer (i.e. a southward shift of equatorial rainfall). The aerosol-induced enhancement of the cyclonic circulation and associated monsoonal rainfall becomes progressively stronger from December to March, suggesting that there is a positive feedback between the source of latent heat (the Australian monsoon) and the cyclonic circulation. CSIRO-Mk3.6 indicates that anthropogenic aerosols may have masked greenhouse gas-induced changes in rainfall over NWA and in circulation over the wider Indo-Pacific region: simulated trends in RCP4.5 resemble a stronger version of those in GHGAS, and are very different from those in HIST. Further research is needed to better understand the mechanisms and the extent to which these findings are model-dependent.
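The attribution arithmetic described above (the aerosol-induced signal as HIST minus NO_AA, compared against the greenhouse-gas-only trend) amounts to simple ensemble-mean differencing. A minimal sketch with hypothetical array names and synthetic data, not the CMIP5 post-processing actually used:

```python
import numpy as np

def aerosol_effect(hist, no_aa):
    """Estimate the anthropogenic-aerosol-induced signal as the difference
    of ensemble means, HIST minus NO_AA. Inputs are arrays of shape
    (n_members, n_years), e.g. NWA summer rainfall for 1951-2010;
    names and shapes are illustrative."""
    return np.mean(hist, axis=0) - np.mean(no_aa, axis=0)

def linear_trend_per_decade(series):
    """Least-squares linear trend of an annual series, per decade."""
    years = np.arange(series.size)
    slope = np.polyfit(years, series, 1)[0]  # units per year
    return 10.0 * slope

# Synthetic illustration: aerosols add a slow upward drift to rainfall
rng = np.random.default_rng(1)
years = np.arange(60)
no_aa = 300.0 + 5.0 * rng.standard_normal((10, 60))   # mm, 10 members
hist = no_aa + 0.5 * years                            # aerosol-induced increase
print(linear_trend_per_decade(aerosol_effect(hist, no_aa)))  # ≈ 5.0 mm/decade
```

Because internal variability differs between members, a single forced trend must be compared against the spread of unforced trends (as the abstract does with 440 overlapping 60-yr segments of the control run) before it can be called significant.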
Early Pulsar Observations in Australia
NASA Astrophysics Data System (ADS)
Wielebinski, R.
2012-12-01
The news of the discovery of the pulsar CP1919 reached Australia soon after the Hewish et al. publication in Nature came out at the end of February 1968. The Parkes radio telescope was immediately switched from scheduled observations to observations of this exciting new object. Since pulsars have steep spectra, low-frequency receivers were needed, which were not supported by the Radiophysics Division of the CSIRO. As a result I, a staff member of the School of Electrical Engineering, Sydney University, was asked to bring my low-frequency receivers to Parkes and join in the first observations. Later the Molonglo Mills Cross radio telescope proved well suited to pulsar work and became involved in a number of important discoveries. Additional equipment for the reception of pulsed signals had to be constructed in a hurry. In my talk I will cover the period 1968 to 1970, when I left Sydney for the Max-Planck-Institute in Bonn with its 100-m radio telescope.
Dr Elizabeth Alexander: First Female Radio Astronomer
NASA Astrophysics Data System (ADS)
Orchiston, Wayne
2005-01-01
During March-April 1945, solar radio emission was detected at 200 MHz by operators of a Royal New Zealand Air Force radar unit located on Norfolk Island. Initially dubbed the `Norfolk Island Effect', this anomalous radiation was investigated throughout 1945 by British-born Elizabeth Alexander, head of the Operational Research Section of the Radio Development Laboratory in New Zealand. Alexander prepared a number of reports on this work, and in early 1946 she published a short paper in the newly-launched journal, Radio & Electronics. A geologist by training, Elizabeth Alexander happened to be in the right place at the right time, and unwittingly became the first woman in the world to work in the field that would later become known as radio astronomy. Her research also led to further solar radio astronomy projects in New Zealand in the immediate post-war years, and was in part responsible for the launch of the radio astronomy program at the Division of Radiophysics, CSIRO, in Sydney.
Effect of experimental manipulation on survival and recruitment of feral pigs
Hanson, L.B.; Mitchell, M.S.; Grand, J.B.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2009-01-01
Lethal removal is commonly used to reduce the density of invasive-species populations, presuming it reduces population growth rate; the actual effect of lethal removal on the vital rates contributing to population growth, however, is rarely tested. We implemented a manipulative experiment on feral pig (Sus scrofa) populations at Fort Benning, Georgia, USA, to assess the demographic effects of harvest intensity. Using mark-recapture data, we estimated annual survival, recruitment, and population growth rates of populations in a moderately harvested area and a heavily harvested area for 2004-06. Population growth rates did not differ between the populations. The top-ranked model for survival included a harvest intensity effect; model-averaged survival was lower for the heavily harvested population than for the moderately harvested population. Increased immigration and reproduction likely compensated for the increased mortality in the heavily harvested population. We conclude that compensatory responses in feral pig recruitment can limit the success of lethal control efforts. © 2009 CSIRO.
NASA Technical Reports Server (NTRS)
Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail;
2010-01-01
This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.
Integrated water resource assessment for the Adelaide region, South Australia
NASA Astrophysics Data System (ADS)
Cox, James W.; Akeroyd, Michele; Oliver, Danielle P.
2016-10-01
South Australia is the driest state in the driest inhabited country in the world, Australia. Consequently, water is one of South Australia's highest priorities. Focus on water research and sources of water in the state became more critical during the Millennium Drought that occurred between 1997 and 2011. In response to increased concern about water sources, the South Australian government established the Goyder Institute for Water Research - a partnership between the South Australian State Government, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Flinders University, the University of Adelaide and the University of South Australia. The Goyder Institute undertakes cutting-edge science to inform the development of innovative integrated water management strategies to ensure South Australia's ongoing water security and enhance the South Australian Government's capacity to develop and deliver science-based policy solutions in water management. This paper focuses on the integrated water resource assessment of the northern Adelaide region, including the key research investments in water and climate, and how this information is being utilised by decision makers in the region.
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
LOCALIZER: subcellular localization prediction of both plant and effector proteins in the plant cell
Sperschneider, Jana; Catanzariti, Ann-Maree; DeBoer, Kathleen; Petre, Benjamin; Gardiner, Donald M.; Singh, Karam B.; Dodds, Peter N.; Taylor, Jennifer M.
2017-01-01
Pathogens secrete effector proteins and many operate inside plant cells to enable infection. Some effectors have been found to enter subcellular compartments by mimicking host targeting sequences. Although many computational methods exist to predict plant protein subcellular localization, they perform poorly for effectors. We introduce LOCALIZER for predicting plant and effector protein localization to chloroplasts, mitochondria, and nuclei. LOCALIZER shows greater prediction accuracy for chloroplast and mitochondrial targeting compared to other methods for 652 plant proteins. For 107 eukaryotic effectors, LOCALIZER outperforms other methods and predicts a previously unrecognized chloroplast transit peptide for the ToxA effector, which we show translocates into tobacco chloroplasts. Secretome-wide predictions and confocal microscopy reveal that rust fungi might have evolved multiple effectors that target chloroplasts or nuclei. LOCALIZER is the first method for predicting effector localization in plants and is a valuable tool for prioritizing effector candidates for functional investigations. LOCALIZER is available at http://localizer.csiro.au/. PMID:28300209
Bradford, DanaKai; Hansen, David; Karunanithi, Mohan
2015-01-01
Cardiovascular disease is a major health problem for all Australians and is the leading cause of death in Aboriginal and Torres Strait Islanders. In 2010, more than 50% of all heart attack deaths were due to repeated events. Cardiac rehabilitation programs have been proven to be effective in preventing the recurrence of cardiac events and readmission to hospitals. There are, however, many barriers to the use of these programs. To address these barriers, CSIRO developed an IT-enabled cardiac rehabilitation program delivered by mobile phone through a smartphone app and successfully trialed it in an urban general population. If these results can be replicated in Indigenous populations, the program has the potential to significantly improve life expectancy and help close the gap in health outcomes. The challenge described in this paper is customizing the existing cardiac health program to make it culturally relevant and suitable for Indigenous Australians living in urban and remote communities.
Engagement with dietary fibre and receptiveness to resistant starch in Australia.
Mohr, Philip; Quinn, Sinéad; Morell, Matthew; Topping, David
2010-11-01
To investigate community engagement with the health benefits of dietary fibre (DF) and its potential as a framework for the promotion of increased consumption of resistant starch (RS). A nationwide postal Food and Health Survey conducted in Australia by CSIRO Human Nutrition. Adults aged 18 years and above, selected at random from the Australian Electoral Roll (n 849). A cross-sectional design was employed to analyse ratings of (i) the importance of various RS health and functional claims and (ii) receptiveness to different foods as RS delivery vehicles, according to the respondents' level of fibre engagement as classified under the Precaution Adoption Process Model (PAPM) of Health Behaviour. There was a high level of recognition (89·5 %) of DF as being important for health. Significant gender differences were found for ratings of RS attributes and RS delivery options. Women were both more fibre-engaged than men and more receptive than men to RS and its potential benefits. Ratings of the acceptability of several foods as means of delivering RS revealed a general preference for healthy staples over indulgences, with the margin between acceptability of staples and indulgences increasing markedly with increased fibre engagement. Application of the PAPM to awareness of DF reveals a ready-made target group for health messages about RS and pockets of differential potential receptiveness. The findings support the promotion of RS as providing health benefits of DF with the added reduction of risk of serious disease, its delivery through healthy staples and the targeting of messages at both fibre-engaged individuals and women in general.
Searching for Planets Around Pulsars
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2015-09-01
Did you know that the very first exoplanets ever confirmed were found around a pulsar? The precise timing measurements of pulsar PSR 1257+12 were what made the discovery of its planetary companions possible. Yet surprisingly, though we've discovered thousands of exoplanets since then, only one other planet has ever been confirmed around a pulsar. Now, a team of CSIRO Astronomy and Space Science researchers is trying to figure out why.

Formation Challenges
The lack of detected pulsar planets may simply reflect the fact that getting a pulsar-planet system is challenging! There are three main pathways:
(1) The planet formed before the host star became a pulsar, which means it somehow survived its star going supernova (yikes!).
(2) The planet formed elsewhere and was captured by the pulsar.
(3) The planet formed out of the debris of the supernova explosion.
The first two options, if even possible, are likely to be rare occurrences, but the third option shows some promise. In this scenario, after the supernova explosion, a small fraction of the material falls back toward the stellar remnant and is recaptured, forming what is known as a supernova fallback disk. According to this model, planets could potentially form out of this disk.

Disk Implications
Led by Matthew Kerr, the CSIRO astronomers set out to systematically look for these potential planets that might have formed in situ around pulsars. They searched a sample of 151 young, energetic pulsars, scouring seven years of pulse time-of-arrival data for periodic variation that could signal the presence of planetary companions. Their methods for mitigating pulsar timing noise and modelling realistic orbits gave them good sensitivity to low-mass planets. The results? They found no conclusive evidence that any of these pulsars have planets. This outcome carries with it some significant implications. The pulsar sample spans 2 Myr in age, in which planets should have had enough time to form in debris disks.
The fact that none were detected suggests that long-lived supernova fallback disks may actually be much rarer than thought, or that they exist only in conditions that aren't compatible with planet formation. So if that's the case, what about the planets found around PSR 1257+12? This pulsar may actually be somewhat unique, in that it was born with an unusually weak magnetic field. This birth defect might have allowed it to form a fallback disk, and subsequently planets, where the sample of energetic pulsars studied here could not.
Citation: M. Kerr et al. 2015, ApJ, 809, L11. doi:10.1088/2041-8205/809/1/L11
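The core of such a search is looking for periodic variation in irregularly sampled time-of-arrival residuals. A minimal sketch using a Lomb-Scargle periodogram on fake data; this illustrates the idea only, not the Kerr et al. pipeline, which additionally models pulsar timing noise and full Keplerian orbits:

```python
import numpy as np
from scipy.signal import lombscargle

def search_timing_residuals(times_d, residuals_s, periods_d):
    """Scan pulse time-of-arrival residuals (s) at observation epochs
    (days) for a periodic signal over a grid of trial periods (days),
    returning the best-fit period and the periodogram."""
    periods_d = np.asarray(periods_d, dtype=float)
    ang_freqs = 2.0 * np.pi / periods_d                # rad/day
    power = lombscargle(times_d, residuals_s - residuals_s.mean(), ang_freqs)
    return float(periods_d[int(np.argmax(power))]), power

# Illustrative fake data: a 100-day sinusoid buried in the residuals
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 7 * 365.0, 300))          # ~7 yr of ToAs
resid = 1e-3 * np.sin(2 * np.pi * t / 100.0) + 1e-4 * rng.standard_normal(300)
periods = np.linspace(20.0, 400.0, 4000)
best_period, _ = search_timing_residuals(t, resid, periods)
print(best_period)  # close to 100 days
```

A planetary companion would imprint such a periodicity at its orbital period; the absence of significant peaks across 151 pulsars is what drives the paper's conclusion about fallback disks.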
Kerestes, Rebecca; Phal, Pramit M; Steward, Chris; Moffat, Bradford A; Salinas, Simon; Cox, Kay L; Ellis, Kathryn A; Cyarto, Elizabeth V; Ames, David; Martins, Ralph N; Masters, Colin L; Rowe, Christopher C; Sharman, Matthew J; Salvado, Olivier; Szoeke, Cassandra; Lai, Michelle; Lautenschlager, Nicola T; Desmond, Patricia M
2015-10-01
Recent evidence suggests that exercise plays a role in cognition and that the posterior cingulate cortex (PCC) can be divided into dorsal and ventral subregions based on distinct connectivity patterns. To examine the effect of physical activity and division of the PCC on brain functional connectivity measures in subjective memory complainers (SMC) carrying the epsilon 4 allele of apolipoprotein E (APOE ε4). Participants were 22 SMC carrying the APOE ε4 allele (ε4+; mean age 72.18 years) and 58 SMC non-carriers (ε4-; mean age 72.79 years). Connectivity of four dorsal and ventral seeds was examined. Relationships between PCC connectivity and physical activity measures were explored. ε4+ individuals showed increased connectivity between the dorsal PCC and dorsolateral prefrontal cortex, and the ventral PCC and supplementary motor area (SMA). Greater levels of physical activity correlated with the magnitude of ventral PCC-SMA connectivity. The results provide the first evidence that ε4+ individuals at increased risk of cognitive decline show distinct alterations in dorsal and ventral PCC functional connectivity. D.A. has served on scientific advisory boards for Novartis, Eli Lilly, Janssen, Prana and Pfizer, and as Editor-in-Chief for International Psychogeriatrics; received speaker honoraria from Pfizer and Lundbeck, and research support from Eli Lilly, GlaxoSmithKline, Forest Laboratories, Novartis, and CSIRO. C.L.M. has received consulting fees from Eli Lilly and Prana Biotechnology, and has stock ownership in Prana Biotechnology. C.C.R. has received consultancy payments from Roche and Piramal, and research support from Avid Radiopharmaceuticals, Eli Lilly, GE Healthcare, Piramal and Navidea for amyloid imaging. C.S.
has provided clinical consultancy and been on scientific advisory committees for the Australian CSIRO, Alzheimer's Australia, University of Melbourne and other relationships, which are subject to confidentiality clauses; she has been a named Chief Investigator on investigator-driven collaborative research projects in partnership with Pfizer, Merck, Piramal, Bayer and GE Healthcare. Her research programme has received support from the National Health and Medical Research Council Alzheimer's Association, Collier Trust, Scobie and Claire McKinnon Foundation, JO and JR Wicking Trust, Shepherd Foundation, Brain Foundation, Mason Foundation, Ramaciotti Foundation, Alzheimer's Australia and the Royal Australian College of Physicians. © The Royal College of Psychiatrists 2015. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence.
Phal, Pramit M.; Steward, Chris; Moffat, Bradford A.; Salinas, Simon; Cox, Kay L.; Ellis, Kathryn A.; Cyarto, Elizabeth V.; Ames, David; Martins, Ralph N.; Masters, Colin L.; Rowe, Christopher C.; Sharman, Matthew J.; Salvado, Olivier; Szoeke, Cassandra; Lai, Michelle; Lautenschlager, Nicola T.; Desmond, Patricia M.
2015-01-01
Background Recent evidence suggests that exercise plays a role in cognition and that the posterior cingulate cortex (PCC) can be divided into dorsal and ventral subregions based on distinct connectivity patterns. Aims To examine the effect of physical activity and division of the PCC on brain functional connectivity measures in subjective memory complainers (SMC) carrying the epsilon 4 allele of apolipoprotein E (APOE ε4). Method Participants were 22 SMC carrying the APOE ε4 allele (ε4+; mean age 72.18 years) and 58 SMC non-carriers (ε4–; mean age 72.79 years). Connectivity of four dorsal and ventral seeds was examined. Relationships between PCC connectivity and physical activity measures were explored. Results ε4+ individuals showed increased connectivity between the dorsal PCC and dorsolateral prefrontal cortex, and the ventral PCC and supplementary motor area (SMA). Greater levels of physical activity correlated with the magnitude of ventral PCC–SMA connectivity. Conclusions The results provide the first evidence that ε4+ individuals at increased risk of cognitive decline show distinct alterations in dorsal and ventral PCC functional connectivity. Declaration of interest D.A. has served on scientific advisory boards for Novartis, Eli Lilly, Janssen, Prana and Pfizer, and as Editor-in-Chief for International Psychogeriatrics; received speaker honoraria from Pfizer and Lundbeck, and research support from Eli Lilly, GlaxoSmithKline, Forest Laboratories, Novartis, and CSIRO. C.L.M. has received consulting fees from Eli Lilly and Prana Biotechnology, and has stock ownership in Prana Biotechnology. C.C.R. has received consultancy payments from Roche and Piramal, and research support from Avid Radiopharmaceuticals, Eli Lilly, GE Healthcare, Piramal and Navidea for amyloid imaging. C.S.
has provided clinical consultancy and been on scientific advisory committees for the Australian CSIRO, Alzheimer's Australia, University of Melbourne and other relationships, which are subject to confidentiality clauses; she has been a named Chief Investigator on investigator-driven collaborative research projects in partnership with Pfizer, Merck, Piramal, Bayer and GE Healthcare. Her research programme has received support from the National Health and Medical Research Council Alzheimer's Association, Collier Trust, Scobie and Claire McKinnon Foundation, JO and JR Wicking Trust, Shepherd Foundation, Brain Foundation, Mason Foundation, Ramaciotti Foundation, Alzheimer's Australia and the Royal Australian College of Physicians. Copyright and usage © The Royal College of Psychiatrists 2015. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence. PMID:27703739
NASA Astrophysics Data System (ADS)
Barnuud, Nyamdorj N.; Zerihun, Ayalsew; Mpelasoka, Freddie; Gibberd, Mark; Bates, Bryson
2014-08-01
More than a century of observations has established that climate influences grape berry composition. Accordingly, the projected global climate change is expected to impact on grape berry composition, although the magnitude and direction of the impact at regional and subregional scales are not fully known. The aim of this study was to assess potential impacts of climate change on levels of berry anthocyanin and titratable acidity (TA) of the major grapevine varieties grown across all of the Western Australian (WA) wine regions. Grape berry anthocyanin and TA responses across all WA wine regions were projected for 2030, 2050 and 2070 by utilising empirical models that link these berry attributes to climate data downscaled (to ~5 km resolution) from the csiro_mk3_5 and miroc3_2_medres global climate model outputs under the IPCC SRES A2 emissions scenario. Due to the dependence of berry composition on maturity, climate impacts on anthocyanin and TA levels were assessed at a common maturity of 22 °Brix total soluble solids (TSS), which necessitated determining when this maturity will be reached for each variety, region, warming scenario and future period. The results indicate that both anthocyanin and TA levels will be affected negatively by a warming climate, but the magnitude of the impacts will differ between varieties and wine regions. Compared to 1990 levels, median anthocyanin concentrations are projected to decrease, depending on the global climate model, by up to 3-12 % and 9-33 % for the northern wine regions by 2030 and 2070, respectively, while 2-18 % reductions are projected in the southern wine regions for the same time periods. Patterns of reduction in median Shiraz berry anthocyanin concentrations are similar to those of Cabernet Sauvignon; however, the magnitude is lower (up to 9-18 % in the southern and northern wine regions, respectively, by 2070). Similarly, uneven declines in TA levels are projected across the study regions.
The largest reductions in median TA are likely to occur in the present-day warmer wine regions: up to 40 % for Chardonnay, followed by 15 % and 12 % for Shiraz and Cabernet Sauvignon, respectively, by 2070 under the high warming projection (csiro_mk3_5). It is concluded that, under existing management practices, some of the key grape attributes that are integral to premium wine production will be affected negatively by a warming climate, but the magnitude of the impact varies with wine region, variety, degree of warming and future period considered.
The simulated climate of the Last Glacial Maximum and insights into the global carbon cycle.
NASA Astrophysics Data System (ADS)
Buchanan, P. J.; Matear, R.; Lenton, A.; Phipps, S. J.; Chase, Z.; Etheridge, D. M.
2016-12-01
The ocean's ability to store large quantities of carbon, combined with the millennial longevity over which this reservoir is overturned, has implicated the ocean as a key driver of glacial-interglacial climates. However, the combination of processes that cause an accumulation of carbon within the ocean during glacial periods is still under debate. Here we present simulations of the Last Glacial Maximum (LGM) using the CSIRO Mk3L-COAL Earth System Model to test the contribution of key biogeochemical processes to ocean carbon storage. For the coupled LGM simulation, we find that significant cooling (3.2 °C), expanded minimum (Northern Hemisphere: 105 %; Southern Hemisphere: 225 %) and maximum (Northern Hemisphere: 145 %; Southern Hemisphere: 120 %) sea ice cover, and a reorganisation of the overturning circulation caused significant changes in ocean biogeochemical fields. The coupled LGM simulation stores an additional 322 Pg C in the deep ocean relative to the Pre-Industrial (PI) simulation. However, 839 Pg C is lost from the upper ocean via equilibration with a lower atmospheric CO2 concentration, causing a net loss of 517 Pg C relative to the PI simulation. The LGM deep ocean also experiences oxygenation (>100 mmol O2 m-3) and a deepening of the aragonite saturation depth (>2,000 m deeper), at odds with proxy reconstructions. Hence, these physical changes cannot in isolation produce plausible biogeochemistry or the required drawdown of atmospheric CO2 of 80-100 ppm at the LGM. With modifications to key biogeochemical processes, which include an increased export of organic matter due to a simulated release from iron limitation, a deepening of remineralisation and decreased inorganic carbon export driven by cooler temperatures, we find that the carbon content of the glacial oceanic reservoir can be increased (by 326 Pg C) to a level sufficient to explain the reduction in atmospheric and terrestrial carbon at the LGM (520 ± 400 Pg C).
These modifications also go some way to reconcile simulated export production, aragonite saturation state and oxygen fields with those that have been reconstructed by proxy measurements, thereby implicating past changes in ocean biogeochemistry as an essential driver of the climate system.
NASA Astrophysics Data System (ADS)
Golodoniuc, P.; Davis, A. C.; Klump, J. F.
2017-12-01
Electromagnetic exploration techniques are extensively used for remote detection and measurement of subsurface electrical conductivity structures for a variety of geophysical applications such as mineral exploration and groundwater detection. The Electromagnetic Applications group in the Mineral Resources business unit of CSIRO relies heavily on airborne electromagnetic (AEM) data for the development of new exploration methods. AEM data, which are often originally acquired for green- or brown-fields mineral exploration, can be re-used for groundwater resource detection in the near-surface. This makes AEM data potentially useful beyond their initial purpose for decades into the future. Increasingly, AEM data are also used as a primary mapping tool for groundwater resources. With surveys ranging from under 1000 km to tens of thousands of km in total length, AEM data are spatially and temporally dense. Sounding stations are often sampled every 0.2 seconds, with about 30-50 measurements taken at each site, resulting in a spacing of measurements along the flight lines of approximately 20-50 metres. This means that typical AEM surveys can easily have on the order of millions of individual stations, with tens of millions of measurements. AEM data need to be examined for quality before they can be inverted into conductivity-depth information. The data, gathered in survey transects or lines, are examined along the line, in plan view, and for the transient decay of the electromagnetic signal at individual stations before noise artefacts can be removed. The complexity, size and dimensionality of the data require efficient tools that support interactive visual analysis and allow easy navigation through the dataset. A suite of numerical algorithms for data quality assurance facilitates this process through efficient visualisations and data quality metrics.
The extensible architecture of the toolkit allows application of custom algorithms on-demand through a web-based user interface and seamlessly connects data processing workflow to geophysical inversion codes. The toolkit architecture has a small client-side footprint and runs on a standard workstation, delegating all computationally intensive tasks to the accompanying Cloud-based processing unit.
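The survey sizes quoted above (stations every 20-50 m, 30-50 measurements per station) can be sanity-checked with a back-of-envelope sketch. The spacing and gate count below are illustrative values chosen from within the quoted ranges, not parameters of any particular survey:

```python
# Back-of-envelope estimate of AEM survey data volume. Defaults are
# illustrative assumptions within the 20-50 m and 30-50 ranges above.
def aem_survey_size(line_km, station_spacing_m=25.0, gates_per_station=40):
    """Return (station count, total measurement count) for a survey."""
    stations = int(line_km * 1000 / station_spacing_m)
    measurements = stations * gates_per_station
    return stations, measurements

for line_km in (1_000, 50_000):  # a small and a large survey
    stations, measurements = aem_survey_size(line_km)
    print(f"{line_km:>6,} line-km -> {stations:,} stations, {measurements:,} measurements")
```

For the large end of the range this reproduces the abstract's "millions of individual stations, with tens of millions of measurements".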
NASA Astrophysics Data System (ADS)
Sangpenchan, R.
2011-12-01
This research explores the vulnerability of Thai rice production to simultaneous exposure to climate and socioeconomic change -- so-called "double exposure." Both processes influence Thailand's rice production system, but the vulnerabilities associated with their interactions are unknown. To understand this double exposure, I adopt a mixed-method, qualitative-quantitative analytical approach consisting of three phases of analysis involving a Vulnerability Scoping Diagram, a Principal Component Analysis, and the EPIC crop model, using proxy datasets collected from secondary data sources at provincial scales. The first and second phases identify key variables representing each of the three dimensions of vulnerability -- exposure, sensitivity, and adaptive capacity -- indicating that the greatest vulnerability in the rice production system occurs in households and areas with high exposure to climate change, high sensitivity to climate and socioeconomic stress, and low adaptive capacity. In the third phase, the EPIC crop model simulates rice yields associated with future climate change projected by the CSIRO and MIROC climate models. Climate-change-only scenarios project that yields will decrease from current productivity by 10% during 2016-2025 and by 30% during 2045-2054. Scenarios applying both climate change and improved technology and management practices show that a 50% increase in rice production is possible, but this requires strong collaboration between sectors to advance agricultural research and technology, and strong adaptive capacity in the rice production system characterized by well-developed social capital, social networks, financial capacity, infrastructure, and household mobility at the local scale. The vulnerability assessment and climate and crop adaptation simulations used here provide useful information to decision makers developing vulnerability reduction plans in the face of concurrent climate and socioeconomic change.
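The second-phase indicator reduction described above can be sketched with a minimal principal component analysis. The data here (20 provinces by 6 indicators) are synthetic stand-ins, not the study's proxy datasets, and the NumPy-only implementation is a generic sketch rather than the author's procedure:

```python
import numpy as np

# Minimal PCA sketch: reduce standardised provincial vulnerability
# indicators (exposure, sensitivity, adaptive-capacity proxies) to a few
# key components. Synthetic data stand in for the real indicator table.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))                 # 20 provinces x 6 indicators
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise each indicator
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # re-sort descending
explained = eigvals[order] / eigvals.sum()   # variance explained per PC
scores = X @ eigvecs[:, order[:2]]           # province scores on PC1, PC2
print("variance explained (PC1, PC2):", explained[:2].round(2))
```

The leading components' loadings would then identify which indicators dominate each vulnerability dimension.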
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunbar, Ricky B.; Duck, Benjamin C.; Moriarty, Tom E.; ...
2017-10-24
Perovskite materials have generated significant interest from academia and industry as a potential component in next-generation, high-efficiency, low-cost, photovoltaic (PV) devices. The record efficiency reported for perovskite solar cells has risen rapidly, and is now more than 22%. However, due to their complex dynamic behaviour, the process of measuring the efficiency of perovskite solar cells appears to be much more complicated than for other technologies. It has long been acknowledged that this is likely to greatly reduce the reliability of reported efficiency measurements, but the quantitative extent to which this occurs has not been determined. To investigate this, we conduct the first major inter-comparison of this PV technology. The participants included two labs accredited for PV performance measurement (CSIRO and NREL) and eight PV research laboratories. We find that the inter-laboratory measurement variability can be almost ten times larger for a slowly responding perovskite cell than for a control silicon cell. We show that for such a cell, the choice of measurement method, far more so than measurement hardware, is the single greatest cause of this undesirably large variability. We provide recommendations for identifying the most appropriate method for a given cell, depending on its stabilization and degradation behaviour. Moreover, the results of this study suggest that identifying a consensus technique for accurate and meaningful efficiency measurements of perovskite solar cells will lead to an immediate improvement in reliability. This, in turn, should assist device researchers to correctly evaluate promising new materials and fabrication methods, and further boost the development of this technology.
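The inter-laboratory variability comparison can be illustrated with a simple relative-spread calculation. The efficiency values below are invented for illustration only; they are not the study's measurements:

```python
import numpy as np

# Relative spread (sample std / mean) of efficiency measurements across
# labs, for a control silicon cell and a slowly responding perovskite
# cell. All efficiency values here are invented for illustration.
def relative_spread(efficiencies):
    eff = np.asarray(efficiencies, dtype=float)
    return float(eff.std(ddof=1) / eff.mean())

silicon = [20.10, 20.20, 20.00, 20.15, 20.10]   # tight inter-lab agreement
perovskite = [15.0, 17.5, 13.8, 18.9, 16.2]     # method-dependent results
print(f"Si:  {relative_spread(silicon):.2%}")
print(f"PSC: {relative_spread(perovskite):.2%}")
```

With numbers like these, the perovskite spread is an order of magnitude larger than the silicon one, mirroring the study's qualitative finding.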
Permafrost carbon cycles under multifactor global change: a modeling analysis
NASA Astrophysics Data System (ADS)
Li, J.; Natali, S.; Schaedel, C.; Schuur, E. A.; Luo, Y.
2012-12-01
Carbon dioxide (CO2) and methane (CH4) emissions from permafrost zones are projected to be elevated under global change scenarios, but the magnitude and spatiotemporal variation of these greenhouse gas sources are still highly uncertain. Here we implement and evaluate the integration of a methane model into the Community Atmosphere-Biosphere Land Exchange model (CABLE v1.5 of CSIRO, Australia) in order to explore carbon emissions under warming, elevated CO2 and altered precipitation. The weather data were obtained from a tundra site, Eight Mile Lake in Alaska, and data from the years 2004-2009 were used to tune and validate the model. First, the measured data were transformed into the input format required by the model. Second, model parameters for vegetation and soil were modified to accurately simulate the permafrost site. For example, we modified the soil resistivity in the model so that the modeled energy balance matched the observations. Currently, modeled NPP is higher, and modeled soil temperature lower, than the observations. Third, a new methane module is being integrated into the model. We simulate the methane production, oxidation and emission processes (ebullition, diffusion and plant-aided transport). We test new functions for soil pH and redox potential that affect microbial methane production and oxidation in soils. We link water table position (WTP) with the amount of decomposable carbon available to methanogens, in combination with spatially explicit simulation of soil temperature. We also validate the model and resolve discrepancies between the model and observations. In this presentation, we will describe results of simulations to forecast CO2 and CH4 fluxes under climate change scenarios.
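The rate-modifier structure described above (temperature, pH and water-table controls on methane production) might be sketched as a product of scaling functions. The functional forms, constants and the Q10 formulation below are assumptions for illustration, not the CABLE implementation:

```python
import math

# Illustrative sketch (not the CABLE code) of multiplicative rate
# modifiers for methane production. All forms and constants are assumed.
def ch4_production(base_rate, soil_t_c, ph, wtp_m, q10=3.0, t_ref=10.0):
    f_t = q10 ** ((soil_t_c - t_ref) / 10.0)   # warmer soil -> faster production
    f_ph = math.exp(-((ph - 6.5) ** 2) / 2.0)  # assumed optimum near pH 6.5
    # wtp_m: water-table position relative to the surface (negative =
    # below surface); a deeper water table leaves less saturated,
    # anoxic soil available to methanogens.
    f_wtp = 1.0 if wtp_m >= 0.0 else math.exp(wtp_m)
    return base_rate * f_t * f_ph * f_wtp

# A 10 degree warming at optimal pH in saturated soil scales the rate by q10
print(ch4_production(1.0, 20.0, 6.5, 0.0))  # → 3.0
```

Oxidation and the three transport pathways (ebullition, diffusion, plant-aided transport) would be handled by analogous terms downstream of this production rate.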
Evaluation of CMIP5 twentieth century rainfall simulation over the equatorial East Africa
NASA Astrophysics Data System (ADS)
Ongoma, Victor; Chen, Haishan; Gao, Chujie
2018-02-01
This study assesses the performance of 22 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of rainfall over East Africa (EA) against reanalyzed datasets during 1951-2005. The datasets were sourced from the Global Precipitation Climatology Centre (GPCC) and the Climatic Research Unit (CRU). The metrics used to rank CMIP5 General Circulation Models (GCMs) based on their performance in reproducing the observed rainfall include the correlation coefficient, standard deviation, bias, percentage bias, root mean square error, and trend. Performances of individual models vary widely, and the overall performance of the models over EA is generally low. The models reproduce the observed bimodal rainfall over EA. However, the majority of them overestimate the October-December (OND) rainfall and underestimate the March-May (MAM) rainfall. The monthly (inter-annual) correlation between the models and the reanalyzed datasets is high (low). More than a third of the models show a positive bias in annual rainfall. High standard deviation in rainfall is recorded in the Lake Victoria Basin, central Kenya, and eastern Tanzania. A number of models reproduce the spatial standard deviation of rainfall better during the MAM season than during OND. The top eight models that reproduce rainfall over EA relatively well are as follows: CanESM2, CESM1-CAM5, CMCC-CESM, CNRM-CM5, CSIRO-Mk3-6-0, EC-EARTH, INMCM4, and MIROC5. Although these results form a fairly good basis for selecting GCMs for climate projections and downscaling over EA, there is still a need for critical improvement in rainfall-related processes in the models assessed. Therefore, climate users are advised to use rainfall projections from CMIP5 models over EA cautiously when making decisions on adaptation to or mitigation of climate change.
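The ranking metrics listed above can be sketched for a single model against a reference rainfall series. The definitions of bias and percentage bias below are common conventions assumed for illustration, and the series are synthetic:

```python
import numpy as np

# Skill metrics of the kind used above to rank models against a
# reference rainfall series. Definitions are common choices, assumed
# rather than taken from the paper.
def skill(model, ref):
    model, ref = np.asarray(model, float), np.asarray(ref, float)
    bias = model.mean() - ref.mean()
    return {
        "corr": float(np.corrcoef(model, ref)[0, 1]),  # correlation coefficient
        "bias": float(bias),
        "pbias": float(100.0 * bias / ref.mean()),     # percentage bias
        "rmse": float(np.sqrt(np.mean((model - ref) ** 2))),
        "std_ratio": float(model.std() / ref.std()),   # variability ratio
    }

# Rank two synthetic "models" by RMSE against a reference series
ref = [50.0, 120.0, 200.0, 90.0, 60.0]
models = {"A": [55.0, 110.0, 210.0, 95.0, 70.0],
          "B": [80.0, 160.0, 150.0, 40.0, 90.0]}
ranking = sorted(models, key=lambda m: skill(models[m], ref)["rmse"])
print(ranking)  # → ['A', 'B']
```

A full evaluation would compute these per grid cell or per season before aggregating into a single rank.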
Le Quere, C. [University of East Anglia, Norwich UK; Peters, G. P. [Univ. of Oslo (Norway); Andres, R. J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Andrew, R. M. [Univ. of Oslo (Norway); Boden, T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); al., et
2013-01-01
Global emissions of carbon dioxide from the combustion of fossil fuels will reach 36 billion tonnes for the year 2013. "This is a level unprecedented in human history," says CSIRO's Dr Pep Canadell, Executive-Director of the Global Carbon Project (GCP) and co-author of a new report. The GCP provides an annual report of carbon dioxide emissions, land and ocean sinks and accumulation in the atmosphere, incorporating data from multiple research institutes from around the world. The 2013 figures coincide with the global launch of the Global Carbon Atlas, an online platform to explore, visualise and interpret the emissions data at the global, regional and national scales (www.globalcarbonatlas.org). The full data and methods are published today in the journal Earth System Science Data Discussions, and data and other graphic materials can be found at: www.globalcarbonproject.org/carbonbudget. The Global Carbon Budget 2013 is a collaborative effort of the global carbon cycle science community coordinated by the Global Carbon Project.
Energy flux and hydrogeology of thermal anomalies in the Gulf of Mexico Basin: South Texas example
NASA Astrophysics Data System (ADS)
Sharp, J. M., Jr.
1994-03-01
This report covers the period from 1 September 1993 through 28 February 1994. The last Technical Progress Report was submitted in September 1993. In this period, we have focused our efforts on the following activities: (1) Finalizing collection of radiogenic heat production data; (2) Evaluating petrographic controls on thermal conductivity; (3) Modeling one-dimensional heat conduction with sources; (4) Completing the base geologic cross-section; (5) Acquiring pressure data to augment the database; (6) Putting map and well data into digital format for analysis; (7) Analyzing salinity, temperature and fluid potential data for the propensity of free convection; (8) Finalizing the preliminary investigation into depressurization of reservoirs; (9) Preparing presentations for the AAPG meeting in Denver; (10) Presenting results at the Geological Society of America Meeting in Boston (October 1993); (11) Collaborating with project members of the DOE-funded Global Basins Research Network who are working on a project in the Eugene Island Block, offshore Louisiana; and (12) Collaborating with others working on research in the Gulf of Mexico Basin in our Department and with CSIRO scientists in Adelaide, Australia.
Constraining the Turbulence Scale and Mixing of a Crushed Pulsar Wind Nebula
NASA Astrophysics Data System (ADS)
Ng, Chi Yung; Ma, Y. K.; Bucciantini, Niccolo; Slane, Patrick O.; Gaensler, Bryan M.; Temim, Tea
2016-04-01
Pulsar wind nebulae (PWNe) are synchrotron-emitting nebulae resulting from the interaction between pulsars' relativistic particle outflows and the ambient medium. The Snail PWN in supernova remnant G327.1-1.1 is a rare system that has recently been crushed by the supernova reverse shock. We carried out radio polarization observations with the Australia Telescope Compact Array and found a highly ordered magnetic field structure in the nebula. This result is surprising, given the turbulent environment expected from hydrodynamical simulations. We developed a toy model and compared simple simulations with observations to constrain the characteristic turbulence scale in the PWN and the mixing with supernova ejecta. We estimate a turbulence scale of about one-eighth to one-sixth of the nebula radius and a pulsar wind filling factor of 50-75%. The latter implies substantial mixing of the pulsar wind with the surrounding supernova ejecta. This work is supported by an ECS grant of the Hong Kong Government under HKU 709713P. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that efficiently supports large volumes of biological time series sensor data and enables data visualization and analysis in real time. Existing solutions for managing such data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous responses to user queries, and they do not support rapid data analysis and visualization to enable interactive experiments. In large-scale experiments, this behaviour slows research discovery and discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web-based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
Radio Observations of Elongated Pulsar Wind Nebulae
NASA Astrophysics Data System (ADS)
Ng, Stephen C.-Y.
2015-08-01
The majority of a pulsar's rotational energy is carried away by relativistic winds of energetic particles accelerated in the magnetosphere. The confinement of the winds by the ambient medium results in synchrotron bubbles with broad-band emission, which are commonly referred to as pulsar wind nebulae (PWNe). Due to the long synchrotron cooling time, a radio PWN reflects the integrated history of the system, complementing information obtained from the X-ray and higher energy bands. In addition, radio polarization measurements can offer a powerful probe of the PWN magnetic field structure. Altogether these can reveal the physical conditions and evolutionary history of a system. I report on preliminary results from high-resolution radio observations of PWNe associated with G327.1-1.1, PSRs J1015-5719, B1509-58, and J1549-4848 taken with the Australia Telescope Compact Array (ATCA). Their magnetic field structure and multiwavelength comparison with other observations are discussed. This work is supported by an ECS grant of the Hong Kong Government under HKU 709713P. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
NASA Astrophysics Data System (ADS)
Goebel, R.; Power, O.; Fletcher, N.; Stock, M.
2012-01-01
This report describes the results obtained from an NML (Ireland)-BIPM bilateral comparison of 1 Ω resistance standards in 2010. The comparison was carried out in the framework of the BIPM ongoing key comparison BIPM.EM-K13.a. Two BIPM 1 Ω travelling standards of CSIRO type were calibrated first at the BIPM, then at the NMLI, and again at the BIPM after their return. The stability of the transfer standards was such that the uncertainty associated with the transfer was significantly smaller than the uncertainty arising from the calibrations. The mean difference between the NMLI and the BIPM calibrations was found to be just within the expanded uncertainty (k = 2) of the comparison. The main text appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
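The reduction of such a bilateral comparison can be sketched as follows. All numerical values are invented for illustration (in μΩ/Ω), and the root-sum-square uncertainty combination is an assumed simplification of the report's actual uncertainty budget:

```python
import math

# Toy reduction of a bilateral comparison: the travelling standard is
# measured at the BIPM before and after the NMLI measurement, and the
# lab difference is tested against a combined expanded uncertainty
# (k = 2). All values below are invented, in microohm per ohm.
bipm_before, nmli, bipm_after = 0.12, 0.45, 0.18
u_bipm, u_nmli, u_transfer = 0.05, 0.14, 0.03  # assumed standard uncertainties
diff = nmli - 0.5 * (bipm_before + bipm_after)              # lab difference
U = 2.0 * math.sqrt(u_bipm**2 + u_nmli**2 + u_transfer**2)  # expanded, k = 2
in_agreement = abs(diff) <= U
print(f"NMLI - BIPM = {diff:.2f} ± {U:.2f} (k = 2)")
print("within expanded uncertainty:", in_agreement)
```

With these invented numbers the difference falls just inside the expanded uncertainty, the same qualitative outcome the abstract reports.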
Web Site on Marine Connectivity Around Australia
NASA Astrophysics Data System (ADS)
Condie, Scott
2005-06-01
The Commonwealth Scientific and Industrial Research Organisation (CSIRO), with support from the Western Australian Government, has developed an online tool for marine scientists and managers to investigate the large-scale patterns of spatial connectivity around Australia that are associated with ocean current transport (Figure 1). This tool, referred to as the Australian Connectivity Interface, or Aus-ConnIe, is expected to find applications in areas such as tracer dispersion studies (see the example by Ridgway and Condie [2004]), larval dispersion and recruitment, and the development of scenarios and preliminary risk assessments for contaminant dispersion in the marine environment. After selecting a region of interest, users can investigate where material carried into that region comes from, or where material originating in that region goes to, over a range of timescales (weeks to months). These connectivity statistics are based on large numbers of particle trajectories (one million at any given time) estimated from satellite altimeter data, coastal tide-gauge data, and winds from meteorological models. Users can save the results in a variety of formats (CSV, Excel, or XML) and, as an option, may save their sessions by first registering.
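The connectivity statistics described above amount to a source-to-destination probability matrix built from particle trajectories. A minimal sketch with random stand-in trajectories (Aus-ConnIe derives its real trajectories from altimeter, tide-gauge and wind data):

```python
import numpy as np

# Connectivity matrix C[i, j]: fraction of particles released in region
# i that end in region j over some timescale. Trajectories here are
# random stand-ins for the estimated particle tracks.
def connectivity_matrix(start_regions, end_regions, n_regions):
    C = np.zeros((n_regions, n_regions))
    for i, j in zip(start_regions, end_regions):
        C[i, j] += 1.0
    totals = C.sum(axis=1, keepdims=True)
    # normalise each row to destination probabilities; rows for regions
    # with no released particles stay all-zero
    return np.divide(C, totals, out=np.zeros_like(C), where=totals > 0)

rng = np.random.default_rng(1)
starts = rng.integers(0, 4, size=1_000)  # release region per particle
ends = rng.integers(0, 4, size=1_000)    # destination region per particle
print(connectivity_matrix(starts, ends, 4).round(2))
```

Reading a row answers "where does material from this region go"; reading a column answers "where does material arriving in this region come from".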
NASA Astrophysics Data System (ADS)
Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh
2017-08-01
The aim of this study was to compare and evaluate the capabilities of correlative and mechanistic modeling processes applied to projecting future distributions of date palm in novel environments, and to establish a method of minimizing uncertainty in the projections of differing techniques. The study area comprises the Middle Eastern countries. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm (Phoenix dactylifera L.). The CSIRO-Mk3.0 (CS) Global Climate Model (GCM), run under the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modeling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain the projection uncertainty levels of each technique. The common areas identified by all four modeling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation in Middle Eastern countries, for the present and for the year 2100. The four modeling approaches predict fairly different distributions. Projections from CL were more conservative than those from MX, while BRT and RF were the most conservative methods for current-time projections. The combination of the final CL and MX projections for the present and 2100 provides higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, hot, and wet stress, with differences on a regional basis, appear to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections resulting from the different techniques.
The assessment and interpretation of model projections require caution, especially for correlative models such as MX, BRT, and RF. Intersections between different techniques may decrease uncertainty in future distribution projections. However, readers should bear in mind that much of the uncertainty arises because future GHG emission scenarios cannot be known with sufficient precision. Suggestions on methodology and processing for improving projections are included.
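The common-area extraction across the four techniques can be sketched as a cell-wise intersection of binary suitability maps. The maps below are random stand-ins for the thresholded CL, MX, BRT and RF outputs:

```python
import numpy as np

# Cell-wise intersection of binary suitability maps: a cell counts as
# suitable only where all four techniques agree. Random boolean grids
# stand in for the thresholded model projections.
rng = np.random.default_rng(2)
maps = rng.random((4, 50, 50)) > 0.4  # four boolean suitability grids
agreement = maps.sum(axis=0)          # number of techniques agreeing (0-4)
consensus = maps.all(axis=0)          # suitable under every technique
print("cells suitable under all four models:", int(consensus.sum()))
```

The `agreement` grid also supports a softer view, mapping how many of the four techniques deem each cell suitable rather than demanding unanimity.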
The simulated climate of the Last Glacial Maximum and insights into the global marine carbon cycle
NASA Astrophysics Data System (ADS)
Buchanan, Pearse J.; Matear, Richard J.; Lenton, Andrew; Phipps, Steven J.; Chase, Zanna; Etheridge, David M.
2016-12-01
The ocean's ability to store large quantities of carbon, combined with the millennial longevity over which this reservoir is overturned, has implicated the ocean as a key driver of glacial-interglacial climates. However, the combination of processes that cause an accumulation of carbon within the ocean during glacial periods is still under debate. Here we present simulations of the Last Glacial Maximum (LGM) using the CSIRO Mk3L-COAL (Carbon-Ocean-Atmosphere-Land) earth system model to test the contribution of physical and biogeochemical processes to ocean carbon storage. For the LGM simulation, we find a significant global cooling of the surface ocean (3.2 °C) and the expansion of both minimum and maximum sea ice cover broadly consistent with proxy reconstructions. The glacial ocean stores an additional 267 Pg C in the deep ocean relative to the pre-industrial (PI) simulation due to stronger Antarctic Bottom Water formation. However, 889 Pg C is lost from the upper ocean via equilibration with a lower atmospheric CO2 concentration and a global decrease in export production, causing a net loss of carbon relative to the PI ocean. The LGM deep ocean also experiences an oxygenation ( > 100 mmol O2 m-3) and deepening of the calcite saturation horizon (exceeds the ocean bottom) at odds with proxy reconstructions. With modifications to key biogeochemical processes, which include an increased export of organic matter due to a simulated release from iron limitation, a deepening of remineralisation and decreased inorganic carbon export driven by cooler temperatures, we find that the carbon content of the glacial ocean can be sufficiently increased (317 Pg C) to explain the reduction in atmospheric and terrestrial carbon at the LGM (194 ± 2 and 330 ± 400 Pg C, respectively). Assuming an LGM-PI difference of 95 ppm pCO2, we find that 55 ppm can be attributed to the biological pump, 28 ppm to circulation changes and the remaining 12 ppm to solubility. 
The biogeochemical modifications also improve model-proxy agreement in export production, carbonate chemistry and dissolved oxygen fields. Thus, we find strong evidence that variations in the oceanic biological pump exert a primary control on the climate.
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Fraser, R.; Evans, B. J. K.; Friedrich, C.; Klump, J. F.; Lescinsky, D. T.
2017-12-01
Virtual Research Environments (VREs) are now part of academic infrastructures. Online research workflows can be orchestrated in which data are accessed from multiple external repositories, with processing taking place on public or private clouds and on centralised supercomputers, using a mixture of user codes and widely used community software and libraries. VREs enable distributed members of research teams to actively work together to share data, models, tools, software, workflows, best practices, infrastructures, etc. These environments and their components are increasingly able to support the needs of undergraduate teaching. External to the research sector, they can also be reused by citizen scientists and repurposed for industry users to help accelerate the diffusion, and hence enable the translation, of research innovations. The Virtual Geophysics Laboratory (VGL) in Australia was started in 2012, built through a collaboration between CSIRO, the National Computational Infrastructure (NCI) and Geoscience Australia, with funding support from the Australian Government Department of Education. VGL comprises three main modules that provide an interface enabling users first to select their required data, then to choose a tool to process those data, and finally to access compute infrastructure for execution. VGL was initially built to give a specific set of researchers in government agencies access to specific data sets and a limited number of tools. Over the years it has evolved into a multi-purpose Earth science platform with access to an increased variety of data (e.g., Natural Hazards, Geochemistry), a broader range of software packages, and an increasing diversity of compute infrastructures. This expansion has been possible because of the approach of loosely coupling data, tools and compute resources via interfaces that are built on international standards and accessed as network-enabled services wherever possible.
Although originally built for researchers who were not fussy about general usability, an increasing emphasis on User Interfaces (UIs) and stability will lead to increased uptake in the education and industry sectors. Simultaneously, improvements are being added to facilitate access to data and tools by experienced researchers who want direct access to both data and flexible workflows.
Virtual Geophysics Laboratory: Exploiting the Cloud and Empowering Geophysicists
NASA Astrophysics Data System (ADS)
Fraser, Ryan; Vote, Josh; Goh, Richard; Cox, Simon
2013-04-01
Over the last five decades geoscientists from Australian state and federal agencies have collected and assembled around 3 Petabytes of geoscience data sets under public funding. As a consequence of technological progress, data is now being acquired at exponential rates and in higher resolution than ever before. Effective use of these big data sets challenges the storage and computational infrastructure of most organizations. The Virtual Geophysics Laboratory (VGL) is a scientific workflow portal that addresses some of the resulting issues by providing Australian geophysicists with access to a Web 2.0, Rich Internet Application (RIA)-based integrated environment that exploits eResearch tools and Cloud computing technology, and promotes collaboration within the user community. VGL simplifies and automates large portions of what were previously manually intensive scientific workflow processes, allowing scientists to focus on the natural science problems rather than computer science and IT. A number of geophysical processing codes are incorporated to support multiple workflows. For example, a gravity inversion can be performed by combining the Escript/Finley codes (from the University of Queensland) with the gravity data registered in VGL. Likewise, tectonic processes can be modeled by combining the Underworld code (from Monash University) with one of the various 3D models available to VGL. Cloud services provide scalable and cost-effective compute resources. VGL is built on top of mature standards-compliant information services, many deployed using the Spatial Information Services Stack (SISS), which provides direct access to geophysical data. A large number of data sets from Geoscience Australia assist users in data discovery. GeoNetwork provides a metadata catalog to store workflow results for future use, discovery and provenance tracking.
VGL has been developed in collaboration with the research community using incremental software development practices and open source tools. While developed to provide the geophysics research community with a sustainable platform and scalable infrastructure, VGL has also produced a number of concepts, patterns and generic components that have been reused in cases beyond geophysics, including natural hazards, satellite processing and other areas requiring spatial data discovery and processing. Future plans for VGL include a number of improvements in both functional and non-functional areas in response to its user community's needs and advances in information technologies. In particular, research is underway in the following areas: (a) distributed and parallel workflow processing in the cloud, (b) seamless integration with various cloud providers, and (c) integration with virtual laboratories representing other science domains. Acknowledgements: VGL was developed by CSIRO in collaboration with Geoscience Australia, the National Computational Infrastructure, the Australian National University, Monash University and the University of Queensland, and has been supported by the Australian Government's Education Investment Fund through NeCTAR.
Global Carbon Project: the 2013 Global Carbon Budget (Version 2.3, issued June 2014)
Le Quere, C. [University of East Anglia, Norwich, UK]; Peters, G. P. [Univ. of Oslo (Norway)]; Andrew, R. J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Andrew, R. M. [Univ. of Oslo (Norway)]; Boden, T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]
2013-01-01
Global emissions of carbon dioxide from the combustion of fossil fuels will reach 36 billion tonnes for the year 2013. "This is a level unprecedented in human history," says CSIRO's Dr Pep Canadell, Executive-Director of the Global Carbon Project (GCP) and co-author of a new report. The GCP provides an annual report of carbon dioxide emissions, land and ocean sinks and accumulation in the atmosphere, incorporating data from multiple research institutes from around the world. The 2013 figures coincide with the global launch of the Global Carbon Atlas, an online platform to explore, visualise and interpret the emissions data at the global, regional and national scales (www.globalcarbonatlas.org). The full data and methods are published today in the journal Earth System Science Data Discussions, and data and other graphic materials can be found at: www.globalcarbonproject.org/carbonbudget. The Global Carbon Budget 2013 is a collaborative effort of the global carbon cycle science community coordinated by the Global Carbon Project. The landing page for this dataset includes links to V.1.1 (issued Nov 2013), V.1.3 (issued Dec 2013), and V.2.3 (issued June 2014) of the 2013 Global Carbon Budget.
The Beginnings of Australian Radio Astronomy
NASA Astrophysics Data System (ADS)
Sullivan, Woodruff T.
The early stages of Australian radio astronomy, especially the first decade after World War II, are described in detail. These include the transition of the CSIRO Radiophysics Laboratory, under the leadership of Joseph Pawsey and Taffy Bowen, from a wartime laboratory in 1945 to, by 1950, the largest and one of the two most important radio astronomy groups in the world (with the Cavendish Laboratory at Cambridge University). The initial solar investigations are described, including discovery of the hot corona and development of the sea-cliff interferometer. During this same period painstaking `radio star' observations by John Bolton and colleagues led to the first suggested optical identifications of Taurus-A (the Crab Nebula), Centaurus-A (NGC 5128), and Virgo-A (M87). The factors that led to the extraordinary early success of the Radiophysics Laboratory are analyzed in detail, followed by discussion of how the situation changed significantly in the second decade of 1955-1965. Finally, the development of major Australian instruments, from the Parkes Radio Telescope (1961) to the Australia Telescope (1988), is briefly presented. This chapter is a direct reprint of the following research paper: Sullivan, W., 2005. The beginnings of Australian radio astronomy. Journal of Astronomical History and Heritage, 8, 11-32.
THE AUSTRALIA TELESCOPE COMPACT ARRAY H I SURVEY OF THE GALACTIC CENTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClure-Griffiths, N. M.; Green, J. A.; Dickey, J. M.
2012-03-01
We present a survey of atomic hydrogen (H I) emission in the direction of the Galactic Center (GC) conducted with the CSIRO Australia Telescope Compact Array (ATCA). The survey covers the area −5° ≤ l ≤ +5°, −5° ≤ b ≤ +5° over the velocity range −309 km s⁻¹ ≤ v_LSR ≤ 349 km s⁻¹ with a velocity resolution of 1 km s⁻¹. The ATCA data are supplemented with data from the Parkes Radio Telescope for sensitivity to all angular scales larger than the 145″ angular resolution of the survey. The mean rms brightness temperature across the field is 0.7 K, except near (l, b) = (0°, 0°), where it increases to ~2 K. This survey complements the Southern Galactic Plane Survey to complete the continuous coverage of the inner Galactic plane in H I at ~2′ resolution. Here, we describe the observations and analysis of this GC survey and present the final data product. Features such as Bania's Clump 2, the far 3 kpc arm, and small high-velocity clumps are briefly described.
Head Movement Dynamics During Play and Perturbed Mother-Infant Interaction
Hammal, Zakia; Cohn, Jeffrey F; Messinger, Daniel S
2015-01-01
We investigated the dynamics of head movement in mothers and infants during an age-appropriate, well-validated emotion induction, the Still Face paradigm. In this paradigm, mothers and infants play normally for 2 minutes (Play), followed by 2 minutes in which the mothers remain unresponsive (Still Face), and then 2 minutes in which they resume normal behavior (Reunion). Participants were 42 ethnically diverse 4-month-old infants and their mothers. Mother and infant angular displacement and angular velocity were measured using the CSIRO head tracker. In male but not female infants, angular displacement increased from Play to Still Face and decreased from Still Face to Reunion. Infant angular velocity was higher during Still Face than Reunion, with no differences between male and female infants. Windowed cross-correlation suggested changes in how infant and mother head movements are associated, revealing dramatic changes in the direction of association. Coordination between mother and infant head movement velocity was greater during Play than during Reunion. Together, these findings suggest that angular displacement, angular velocity and their coordination between mothers and infants are strongly related to age-appropriate emotion challenge. Attention to head movement can deepen our understanding of emotion communication. PMID:26640622
NASA Astrophysics Data System (ADS)
Orchiston, Wayne; Robertson, Peter
2017-12-01
Initial post-war developments in non-solar radio astronomy were inspired by Hey, Parsons and Phillips' 1946 report of an intense source of radio emission in Cygnus. This so-called ‘radio star’ was unique, and questions immediately were raised about its true nature. But it did not remain unique for long. Observing from Sydney, John Bolton, Gordon Stanley and Bruce Slee followed up the Cygnus discovery with more radio star detections, beginning what would evolve into a long-term multi-faceted research program and one of the mainstays of the CSIRO’s Division of Radiophysics. But more than this, these early discoveries in England and in Sydney opened up a whole new field of investigation, extragalactic radio astronomy, which has remained a major area of investigation through to the present day. This paper focusses on the early years of this program when the observations were carried out at Dover Heights Field Station in Sydney, and the ways in which new developments in instrumentation that allowed a major expansion of the program eventually led to the closure of Dover Heights and the founding of the Fleurs Field Station.
NASA Astrophysics Data System (ADS)
Shabani, Farzin; Kumar, Lalit; Taylor, Subhashni
2014-11-01
This study set out to model potential date palm distribution under current and future climate scenarios using an emission scenario, in conjunction with two different global climate models (GCMs): CSIRO-Mk3.0 (CS) and MIROC-H (MR), and to refine results based on suitability under four nonclimatic parameters. Areas containing suitable physicochemical soil properties and suitable soil taxonomy, together with land slopes of less than 10° and suitable land uses for date palm (Phoenix dactylifera), were selected as appropriate refining tools to ensure the CLIMEX results were accurate and robust. Results showed that large regions of Iran are projected as likely to become climatically suitable for date palm cultivation based on the projected scenarios for the years 2030, 2050, 2070, and 2100. The study also showed CLIMEX outputs merit refinement by nonclimatic parameters and that the incremental introduction of each additional parameter decreased the disagreement between GCMs. Furthermore, the study indicated that the least amount of disagreement in terms of areas conducive to date palm cultivation resulted from CS and MR GCMs when the locations of suitable physicochemical soil properties and soil taxonomy were used as refinement tools.
Impacts of fire on forest age and runoff in mountain ash forests
Wood, S.A.; Beringer, J.; Hutley, L.B.; McGuire, A.D.; Van Dijk, A.; Kilinc, M.
2008-01-01
Runoff from mountain ash (Eucalyptus regnans F.Muell.) forested catchments has been shown to decline significantly in the few decades following fire, returning to pre-fire levels over the following centuries, owing to changes in ecosystem water use with stand age in a relationship known as Kuczera's model. We examined this relationship between catchment runoff and stand age by measuring whole-ecosystem exchanges of water using an eddy covariance system measuring forest evapotranspiration (ET), combined with sap-flow measurements of tree water use, across a chronosequence of three sites (24, 80 and 296 years since fire). At the 296-year-old site, eddy covariance systems were installed above the E. regnans overstorey and above the distinct rainforest understorey. Contrary to predictions from the Kuczera curve, we found that whole-forest ET decreased far less with stand age between 24 and 296 years. Although overstorey tree water use declined by 1.8 mm day-1 with increasing forest age (an annual decrease of 657 mm), the understorey contributed between 1.2 and 1.5 mm day-1 of ET, 45% of the total ET (3 mm day-1), at the old-growth forest. © CSIRO 2008.
NASA Technical Reports Server (NTRS)
Swap, R. J.; Annegarn, H. J.; Suttles, J. T.; Haywood, J.; Helmlinger, M. C.; Hely, C.; Hobbs, P. V.; Holben, B. N.; Ji, J.; King, M. D.
2002-01-01
The Southern African Regional Science Initiative (SAFARI 2000) is an international project investigating the earth-atmosphere-human system in southern Africa. The programme was conducted over a two-year period from March 1999 to March 2001. The dry season field campaign (August-September 2000), the most intensive activity, involved over 200 scientists from eighteen countries. The main objectives were to characterize and quantify biogenic, pyrogenic and anthropogenic aerosol and trace gas emissions and their transport and transformations in the atmosphere, and to validate NASA's Earth Observing System Terra satellite within a scientific context. Five aircraft -- two South African Weather Service Aerocommanders, the University of Washington's Convair-580, the U.K. Meteorological Office's C-130, and NASA's ER-2 -- with different altitude capabilities participated in the campaign. Additional airborne sampling of southern African air masses that had moved downwind of the subcontinent was conducted by the CSIRO over Australia. Multiple observations were made in various geographical sections under different synoptic conditions. Airborne missions were designed to optimize the value of synchronous over-flights of the Terra satellite above regional ground validation and science targets. Numerous smaller-scale ground validation activities took place throughout the subcontinent during the campaign period.
NASA Astrophysics Data System (ADS)
Barcikowska, M. J.; Knutson, T. R.; Zhang, R.
2016-12-01
This study investigates mechanisms and global-scale climate impacts of multidecadal climate variability. Here we show, using observations and the CSIRO-Mk3.6.0 model control run, that multidecadal variability of the Atlantic Meridional Overturning Circulation (AMOC) may have a profound impact on thermal and hydro-climatic changes over the Pacific region. In our model-based analysis we propose a mechanism comprising a coupled ocean-atmosphere teleconnection established through the atmospheric overturning circulation cell between the tropical North Atlantic and the tropical Pacific. For example, warming SSTs over the tropical North Atlantic intensify local convection and reinforce subsidence and low-level divergence in the eastern tropical Pacific. This is accompanied by an intensification of trade winds and by cooling and drying anomalies in the tropical central-east Pacific. The derived multidecadal changes associated with the AMOC contribute substantially to global temperature and precipitation variations, highlighting their potential predictive value. The results shown here suggest that: 1) the recently observed slowdown in global warming may partly originate from internal variability, and 2) the climate system may be undergoing a transition to a cold AMO phase, which could prolong the global slowdown.
NASA Astrophysics Data System (ADS)
Chapman, Thomas G.; Philip, J. R.
The names of Horton and Philip occur together so frequently in any discussion of infiltration that it is particularly appropriate that John R. Philip should be the recipient of the Robert E. Horton Medal. In 1931, Horton emphasized the need for ‘research to provide connective tissue between related problems,’ and Philip's achievements have gone far toward satisfying that need. John R. Philip was born at Ballarat, some 70 miles from Melbourne, and attended Melbourne University, graduating as bachelor of civil engineering in 1946. After a few years of engineering experience with Queensland's Irrigation Commission, he took up a position as research scientist in the Commonwealth Scientific and Industrial Research Organization (CSIRO) in 1951. He moved rapidly from analysis of the hydraulics of border irrigation to the general problems of infiltration and soil water movement, and found analytical solutions to a wide range of problems in homogeneous porous media. In 1957, Philip and his colleague D. A. de Vries were awarded the Horton prize of the AGU for their paper on moisture movement in porous materials under temperature gradients. In 1960, Melbourne University awarded John Philip his doctorate of science for a thesis, ‘Physical Contributions to Microhydrology,’ consisting of 19 published papers.
Building an Internet of Samples: The Australian Contribution
NASA Astrophysics Data System (ADS)
Wyborn, Lesley; Klump, Jens; Bastrakova, Irina; Devaraju, Anusuriya; McInnes, Brent; Cox, Simon; Karssies, Linda; Martin, Julia; Ross, Shawn; Morrissey, John; Fraser, Ryan
2017-04-01
Physical samples are often the ground truth to research reported in the scientific literature across multiple domains. They are collected by many different entities (individual researchers, laboratories, government agencies, mining companies, citizens, museums, etc.). Samples must be curated over the long term to ensure both that their existence is known, and to allow any data derived from them through laboratory and field tests to be linked to the physical samples. For example, unique identifiers that link ground-truth data back to the original sample help calibrate large volumes of remotely sensed data. Access to catalogues of reliably identified samples from several collections promotes collaboration across all Earth Science disciplines. It also increases the cost effectiveness of research by reducing the need to re-collect samples in the field. The assignment of web identifiers to the digital representations of these physical objects allows us to link to data, literature, investigators and institutions, thus creating an "Internet of Samples". An Australian implementation of the "Internet of Samples" is using the IGSN (International Geo Sample Number, http://igsn.github.io) to identify samples in a globally unique and persistent way. IGSN was developed in the solid earth science community and is recommended for sample identification by the Coalition for Publishing Data in the Earth and Space Sciences (COPDESS). IGSN is interoperable with other persistent identifier systems such as DataCite. Furthermore, the basic IGSN description metadata schema is compatible with existing schemas such as OGC Observations and Measurements (O&M) and the DataCite Metadata Schema, which makes crosswalks to other metadata schemas easy. IGSN metadata is disseminated through the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), allowing it to be aggregated in other applications such as portals (e.g. the Australian IGSN catalogue http://igsn2.csiro.au). 
The metadata is available in more than one format. The software for IGSN web services is based on components developed for DataCite and adapted to the specific requirements of IGSN. This cooperation in open source development ensures sustainable implementation and faster turnaround times for updates. IGSN, in particular in its Australian implementation, is characterised by a federated approach to system architecture and organisational governance, giving it the necessary flexibility to adapt to particular local practices within multiple domains whilst maintaining an overarching international standard. The three current IGSN allocation agents in Australia (Geoscience Australia, CSIRO and Curtin University) represent different sectors. Through funding from the Australian Research Data Services Program they have combined to develop a common web portal that allows discovery of physical samples and sample collections at a national level. International governance then ensures we can link to an international community while acting locally to ensure the services offered are relevant to the needs of Australian researchers. This flexibility aids the integration of new disciplines into a global community of a physical samples information network.
NASA Astrophysics Data System (ADS)
López, Oliver; Houborg, Rasmus; McCabe, Matthew Francis
2017-01-01
Advances in space-based observations have provided the capacity to develop regional- to global-scale estimates of evaporation, offering insights into this key component of the hydrological cycle. However, the evaluation of large-scale evaporation retrievals is not a straightforward task. While a number of studies have intercompared a range of these evaporation products by examining the variance amongst them, or by comparison of pixel-scale retrievals against ground-based observations, there is a need to explore more appropriate techniques to comprehensively evaluate remote-sensing-based estimates. One possible approach is to establish the level of product agreement between related hydrological components: for instance, how well do evaporation patterns and response match with precipitation or water storage changes? To assess the suitability of this consistency-based approach for evaluating evaporation products, we focused our investigation on four globally distributed basins in arid and semi-arid environments, comprising the Colorado River basin, Niger River basin, Aral Sea basin, and Lake Eyre basin. In an effort to assess retrieval quality, three satellite-based global evaporation products based on different methodologies and input data, including CSIRO-PML, the MODIS Global Evapotranspiration product (MOD16), and Global Land Evaporation: the Amsterdam Methodology (GLEAM), were evaluated against rainfall data from the Global Precipitation Climatology Project (GPCP) along with Gravity Recovery and Climate Experiment (GRACE) water storage anomalies. To ensure a fair comparison, we evaluated consistency using a degree correlation approach after transforming both evaporation and precipitation data into spherical harmonics. Overall we found no persistent hydrological consistency in these dryland environments. Indeed, the degree correlation showed oscillating values between periods of low and high water storage changes, with a phase difference of about 2-3 months. Interestingly, after imposing a simple lag in the GRACE data to account for delayed surface runoff or baseflow components, an improved match in terms of degree correlation was observed in the Niger River basin. Significant improvements to the degree correlations (from ~0 to about 0.6) were also found in the Colorado River basin for both the CSIRO-PML and GLEAM products, while MOD16 showed only half of that improvement. In other basins, the variability in the temporal pattern of degree correlations remained considerable and hindered any clear differentiation between the evaporation products. Even so, it was found that a constant lag of 2 months provided a better fit compared to other alternatives, including a zero lag. 
From a product assessment perspective, no significant or persistent advantage could be discerned across any of the three evaporation products in terms of a sustained hydrological consistency with precipitation and water storage anomaly data. As a result, our analysis has implications in terms of the confidence that can be placed in independent retrievals of the hydrological cycle, raises questions on inter-product quality, and highlights the need for additional techniques to evaluate large-scale products.
NASA Astrophysics Data System (ADS)
Feldman, D.; Collins, W. D.; Wielicki, B. A.; Shea, Y.; Mlynczak, M. G.; Kuo, C.; Nguyen, N.
2017-12-01
Shortwave feedbacks are a persistent source of uncertainty for climate models and a large contributor to the diagnosed range of equilibrium climate sensitivity (ECS) for the international multi-model ensemble. The processes that contribute to these feedbacks affect top-of-atmosphere energetics and produce spectral signatures that may be time-evolving. We explore the value of such spectral signatures for providing an observational constraint on model ECS by simulating top-of-atmosphere shortwave reflectance spectra across much of the energetically-relevant shortwave bandpass (300 to 2500 nm). We present centennial-length shortwave hyperspectral simulations from low, medium and high ECS models that reported to the CMIP5 archive as part of an Observing System Simulation Experiment (OSSE) in support of the CLimate Absolute Radiance and Refractivity Observatory (CLARREO). Our framework interfaces with CMIP5 archive results and is agnostic to the choice of model. We simulated spectra from the INM-CM4 model (ECS of 2.08 K/2xCO2), the MIROC5 model (ECS of 2.70 K/2xCO2), and the CSIRO Mk3-6-0 model (ECS of 4.08 K/2xCO2) based on those models' integrations of the RCP8.5 scenario for the 21st century. This approach allows us to explore how perfect data records can exclude models of lower or higher climate sensitivity. We find that spectral channels covering visible and near-infrared water-vapor overtone bands can potentially exclude a low or high sensitivity model with under 15 years of absolutely calibrated data. These different spectral channels are sensitive to model cloud radiative effect and cloud height changes, respectively. These unprecedented calculations lay the groundwork for spectral simulations of perturbed-physics ensembles in order to identify those shortwave observations that can help narrow the range in shortwave model feedbacks and ultimately help reduce the stubbornly-large range in model ECS.
Shafapour Tehrany, Mahyat; Solhjouy-fard, Samaneh; Kumar, Lalit
2018-01-01
Aedes albopictus, the Asian Tiger Mosquito, vector of Chikungunya, Dengue Fever and Zika viruses, has proven its hardy adaptability in expansion from its natural Asian, forest edge, tree hole habitat on the back of international trade transportation, re-establishing in temperate urban surrounds, in a range of water receptacles and semi-enclosures of organic matter. Conventional aerial spray mosquito vector controls focus on wetland and stagnant water expanses, proven to miss the protected hollows and crevices favoured by Ae. albopictus. New control or eradication strategies are thus essential, particularly in light of potential expansions in the southeastern and eastern USA. Successful regional vector control strategies require risk level analysis. Should strategies prioritize regions with non-climatic or climatic suitability parameters for Ae. albopictus? Our study used current Ae. albopictus distribution data to develop two independent models: (i) regions with suitable non-climatic factors, and (ii) regions with suitable climate for Ae. albopictus in southeastern USA. Non-climatic model processing used Evidential Belief Function (EBF), together with six geographical conditioning factors (raster data layers), to establish the probability index. Validation of the analysis results was estimated with area under the curve (AUC) using Ae. albopictus presence data. Climatic modeling was based on two General Circulation Models (GCMs), Miroc3.2 and CSIRO-MK30, running the RCP 8.5 scenario in MaxEnt software. EBF non-climatic model results achieved a 0.70 prediction rate and 0.73 success rate, confirming suitability of the study site regions for Ae. albopictus establishment. The climatic model results showed the best-fit model comprised Coldest Quarter Mean Temp, Precipitation of Wettest Quarter and Driest Quarter Precipitation factors with mean AUC value of 0.86. 
Both GCMs showed that the whole study site is highly suitable and will remain suitable climatically, according to the prediction for 2055, for Ae. albopictus expansion. PMID:29576954
Global and regional emissions estimates for N2O
NASA Astrophysics Data System (ADS)
Saikawa, E.; Prinn, R. G.; Dlugokencky, E.; Ishijima, K.; Dutton, G. S.; Hall, B. D.; Langenfelds, R.; Tohjima, Y.; Machida, T.; Manizza, M.; Rigby, M.; O'Doherty, S.; Patra, P. K.; Harth, C. M.; Weiss, R. F.; Krummel, P. B.; van der Schoot, M.; Fraser, P. J.; Steele, L. P.; Aoki, S.; Nakazawa, T.; Elkins, J. W.
2014-05-01
We present a comprehensive estimate of nitrous oxide (N2O) emissions using observations and models from 1995 to 2008. High-frequency records of tropospheric N2O are available from measurements at Cape Grim, Tasmania; Cape Matatula, American Samoa; Ragged Point, Barbados; Mace Head, Ireland; and at Trinidad Head, California using the Advanced Global Atmospheric Gases Experiment (AGAGE) instrumentation and calibrations. The Global Monitoring Division of the National Oceanic and Atmospheric Administration/Earth System Research Laboratory (NOAA/ESRL) has also collected discrete air samples in flasks and in situ measurements from remote sites across the globe and analyzed them for a suite of species including N2O. In addition to these major networks, we include in situ and aircraft measurements from the National Institute of Environmental Studies (NIES) and flask measurements from the Tohoku University and Commonwealth Scientific and Industrial Research Organization (CSIRO) networks. All measurements show increasing atmospheric mole fractions of N2O, with a varying growth rate of 0.1-0.7% per year, resulting in a 7.4% increase in the background atmospheric mole fraction between 1979 and 2011. Using existing emission inventories as well as bottom-up process modeling results, we first create globally gridded a priori N2O emissions over the 37 years since 1975. We then use the three-dimensional chemical transport model, Model for Ozone and Related Chemical Tracers version 4 (MOZART v4), and a Bayesian inverse method to estimate global as well as regional annual emissions for five source sectors from 13 regions in the world. This is the first time that all of these measurements from multiple networks have been combined to determine emissions. Our inversion indicates that global and regional N2O emissions have an increasing trend between 1995 and 2008. 
Despite large uncertainties, a significant increase is seen from the Asian agricultural sector in recent years, most likely due to an increase in the use of nitrogenous fertilizers, as has been suggested by previous studies.
Francey, R. J. [CSIRO Division of Atmospheric Research, Mordialloc, Victoria, Australia; Allison, C. E. [CSIRO Division of Atmospheric Research, Mordialloc, Victoria, Australia
1998-01-01
Since 1982, a continuous program of sampling atmospheric CO2 to determine stable isotope ratios has been maintained at the Australian Baseline Air Pollution Station, Cape Grim, Tasmania (40°40'56"S, 144°41'18"E). The process of in situ extraction of CO2 from air, the preponderance of samples collected in conditions of strong wind from the marine boundary layer of the Southern Ocean, and the determination of all isotope ratios relative to a common high-purity CO2 reference gas with isotopic δ13C close to atmospheric values are a unique combination of factors with respect to obtaining a globally representative signal from a surface site. Air samples are collected during baseline condition episodes at a frequency of around one sample per week. Baseline conditions are characterized by wind direction in the sector 190°-280°, condensation nucleus concentration below 600 cm-3, and steady CO2 concentrations (variation ±0.2 ppmv per hour). A vacuum pump draws air from either the 10 m or 70 m intake, and sampling alternates between the two intakes. The air from the intake is dried with a trap immersed in an alcohol bath at about -80°C. Mass spectrometer analyses for δ13C and δ18O are carried out by CSIRO's Division of Atmospheric Research in Aspendale, usually one to three weeks following collection. This record is possibly the most accurate representation of global atmospheric 13C behavior over the last decade and may be used to partition the uptake of fossil-fuel carbon emissions between ocean and terrestrial plant reservoirs. Using these data, Francey et al. (1995) observed a gradual decrease in δ13C from 1982 to 1993, but with a pronounced flattening from 1988 to 1990; a trend that appears to involve the terrestrial carbon cycle.
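The baseline selection criteria quoted above (wind sector 190°-280°, condensation nucleus concentration below 600 cm-3, CO2 steady to within ±0.2 ppmv per hour) amount to a simple three-condition filter. The sketch below expresses them directly; the record structure and function names are illustrative assumptions, not the station's actual screening software.

```python
def is_baseline(wind_dir_deg, cn_per_cm3, co2_drift_ppmv_per_hr):
    """True when a Cape Grim sampling episode meets the baseline criteria
    quoted in the abstract: wind from the 190-280 degree sector,
    condensation nucleus concentration below 600 cm^-3, and CO2 steady
    to within +/-0.2 ppmv per hour."""
    return (190.0 <= wind_dir_deg <= 280.0
            and cn_per_cm3 < 600.0
            and abs(co2_drift_ppmv_per_hr) <= 0.2)

def select_baseline(records):
    """Filter an iterable of (wind_dir, cn, co2_drift) tuples, keeping
    only episodes that satisfy all three baseline conditions."""
    return [r for r in records if is_baseline(*r)]
```

For example, an episode with a 230° wind, 350 cm-3 nucleus count, and 0.1 ppmv/hr drift passes, while a 150° wind fails the sector test.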
Shabani, Farzin; Shafapour Tehrany, Mahyat; Solhjouy-Fard, Samaneh; Kumar, Lalit
2018-01-01
Aedes albopictus, the Asian Tiger Mosquito, vector of Chikungunya, Dengue Fever and Zika viruses, has proven its hardy adaptability in expansion from its natural Asian, forest edge, tree hole habitat on the back of international trade transportation, re-establishing in temperate urban surrounds, in a range of water receptacles and semi-enclosures of organic matter. Conventional aerial spray mosquito vector controls focus on wetland and stagnant water expanses, proven to miss the protected hollows and crevices favoured by Ae. albopictus. New control or eradication strategies are thus essential, particularly in light of potential expansions in the southeastern and eastern USA. Successful regional vector control strategies require risk level analysis. Should strategies prioritize regions with non-climatic or climatic suitability parameters for Ae. albopictus? Our study used current Ae. albopictus distribution data to develop two independent models: (i) regions with suitable non-climatic factors, and (ii) regions with suitable climate for Ae. albopictus in southeastern USA. Non-climatic model processing used Evidential Belief Function (EBF), together with six geographical conditioning factors (raster data layers), to establish the probability index. Validation of the analysis results was estimated with area under the curve (AUC) using Ae. albopictus presence data. Climatic modeling was based on two General Circulation Models (GCMs), Miroc3.2 and CSIRO-MK30, running the RCP 8.5 scenario in MaxEnt software. EBF non-climatic model results achieved a 0.70 prediction rate and 0.73 success rate, confirming suitability of the study site regions for Ae. albopictus establishment. The climatic model results showed the best-fit model comprised Coldest Quarter Mean Temp, Precipitation of Wettest Quarter and Driest Quarter Precipitation factors with mean AUC value of 0.86. 
Both GCMs showed that the whole study site is highly suitable climatically for Ae. albopictus expansion and, according to the prediction for 2055, will remain so.
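The AUC values reported for both models can be read through the Mann-Whitney formulation: the AUC is the probability that a randomly chosen presence site receives a higher suitability score than a randomly chosen background site. A minimal sketch of that calculation (not the EBF or MaxEnt implementation itself):

```python
import numpy as np

def auc(presence_scores, background_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    presence site outscores a random background site (ties count half)."""
    p = np.asarray(presence_scores, dtype=float)
    b = np.asarray(background_scores, dtype=float)
    wins = (p[:, None] > b[None, :]).sum()   # pairwise presence > background
    ties = (p[:, None] == b[None, :]).sum()
    return (wins + 0.5 * ties) / (p.size * b.size)

# Toy scores: every presence site outscores every background site.
print(auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4]))  # 1.0
```

On this reading, the best-fit climatic model's mean AUC of 0.86 means a presence site outscores a background site about 86% of the time.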
The Pacific Islands Climate Science Center five-year science agenda, 2014-2018
Helweg, David; Nash, Sarah A.B.; Polhemus, Dan A.
2014-01-01
From the heights of Mauna Kea on Hawaiʻi Island to the depths of the Mariana Trench, from densely populated cities to sparse rural indigenous communities and uninhabited sandy atolls, the Pacific region encompasses diverse associations of peoples and places that are directly affected by changes to the atmosphere, ocean, and land. The peoples of the Pacific are among the first to observe and experience the effects of global climatic changes. Because the Pacific region is predominantly composed of vast ocean expanses punctuated only by small, isolated emergent islands and atolls, marine processes are critical factors in the region’s climate systems, and their impacts occur here to a greater degree than in continental regions. Rates of sea-level rise in the region during the modern altimetry period exceed the global rate, with the highest increases occurring in the western North Pacific (Cazenave and Llovel, 2010; Nerem and others, 2010; Timmermann and others, 2010). The ocean has also warmed during this period. Since the 1970s, sea-surface temperature has increased at a rate of 0.13 to 0.41 °F (0.07 to 0.23 °C) per decade, depending on the location (Keener and others, 2012a). Ocean chemistry has changed during this period as well, with surface pH having dropped by 0.1 pH units (Feely and others, 2009; Doney and others, 2012). Over the past century, air temperature has increased throughout the Pacific region. In Hawaiʻi, average temperatures increased by 0.08 °F per decade during the period 1919 to 2006, and in recent years, the rate of increase has been accelerating, particularly at high elevations (Giambelluca and others, 2008). In the western North Pacific, temperatures also increased over the past 60 years (Lander and Guard, 2003; Lander, 2004; Lander and Khosrowpanah, 2004; Kruk and others, 2013), with a concurrent warming trend in the central South Pacific since the 1950s (Australian Bureau of Meteorology and CSIRO, 2011).
Organic Species in Infrared Dark Clouds
NASA Astrophysics Data System (ADS)
Vasyunina, T.; Vasyunin, A. I.; Herbst, Eric; Linz, Hendrik; Voronkov, Maxim; Britton, Tui; Zinchenko, Igor; Schuller, Frederic
2014-01-01
It is currently assumed that infrared dark clouds (IRDCs) represent the earliest evolutionary stages of high-mass stars (>8 M ⊙). Submillimeter and millimeter-wave studies performed over the past 15 yr show that IRDCs possess a broad variety of properties, and hence a wide range of problems and questions that can be tackled. In this paper, we report an investigation of the molecular composition and chemical processes in two groups of IRDCs. Using the Mopra, APEX, and IRAM radio telescopes over the last four years, we have collected molecular line data for CO, H2CO, HNCO, CH3CCH, CH3OH, CH3CHO, CH3OCHO, and CH3OCH3. For all of these species we estimated molecular abundances. We then undertook chemical modeling studies, concentrating on the source IRDC028.34+0.06, and compared observed and modeled abundances. This comparison showed that to reproduce observed abundances of complex organic molecules, a zero-dimensional gas-grain model with constant physical conditions is not sufficient. We achieved greater success with the use of a warm-up model, in which warm-up from 10 K to 30 K occurs following a cold phase. Based on observations carried out with the IRAM 30 m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. The 22 m Mopra antenna is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operations as a National Facility managed by CSIRO. The University of New South Wales Digital Filter Bank used for the observations with the Mopra Telescope was provided with support from the Australian Research Council.
NASA Astrophysics Data System (ADS)
Wee, B.; Car, N.; Percivall, G.; Allen, D.; Fitch, P. G.; Baumann, P.; Waldmann, H. C.
2014-12-01
The Belmont Forum E-Infrastructure and Data Management Cooperative Research Agreement (CRA) is designed to foster a global community to collaborate on e-infrastructure challenges. One of the deliverables is an implementation plan to address global data infrastructure interoperability challenges and align existing domestic and international capabilities. Work package three (WP3) of the CRA focuses on the harmonization of global data infrastructure for sharing environmental data. One of the subtasks under WP3 is the development of user scenarios that guide the development of applicable deliverables. This paper describes the proposed protocol for user scenario development. It enables the solicitation of user scenarios from a broad constituency, and exposes the mechanisms by which those solicitations are evaluated against requirements that map to the Belmont Challenge. The underlying principle of traceability forms the basis for a structured, requirements-driven approach resulting in work products amenable to trade-off analyses and objective prioritization. The protocol adopts the ISO Reference Model for Open Distributed Processing (RM-ODP) as a top-level framework. User scenarios are developed within RM-ODP's "Enterprise Viewpoint". To harmonize with existing frameworks, the protocol utilizes the conceptual constructs of "scenarios", "use cases", "use case categories", and use case templates as adopted by recent GEOSS Architecture Implementation Project (AIP) deliverables and CSIRO's eReefs project. These constructs are encapsulated under the larger construct of "user scenarios". Once user scenarios are ranked by goodness-of-fit to the Belmont Challenge, secondary scoring metrics may be generated, such as goodness-of-fit to FutureEarth science themes. The protocol also facilitates an assessment of the ease of implementing a given user scenario using existing GEOSS AIP deliverables.
In summary, the protocol results in a traceability graph that can be extended to coordinate across research programmes. If implemented using appropriate technologies and harmonized with existing ontologies, this approach enables queries, sensitivity analyses, and visualization of complex relationships.
Miniaturised Space Payloads for Outdoor Environmental Applications
NASA Astrophysics Data System (ADS)
de Souza, P. A.
2012-12-01
The need for portable, robust and accurate sensors has increased in recent years, driven by industrial and environmental needs. This paper describes a number of applications of engineering copies of the Moessbauer spectrometers (MIMOS II) used by the Mars Exploration Rovers, and the use of portable XRF spectrometers in the analysis of heavy metals in sediments. MIMOS II has been applied in the characterisation of Fe-bearing phases in airborne particles in industrialised urban centres. The results have allowed identification of sources of air pollution in near-real-time and help to connect production parameters with pollution impact in the urban area. MIMOS II became a powerful tool because its constructive requirements for flight produced a robust, power-efficient, miniaturised and light instrument. On the limitation side, the technique takes some time to produce a good result and the instrument requires a radioactive source to operate. The MIMOS II team has reported a new generation of this instrument incorporating an XRF spectrometer that uses the radioactive source to generate fluorescence emissions from the sample. The author and their research group adapted a portable XRF spectrometer to an autonomous underwater vehicle (AUV) and conducted a heavy-metal survey of sediments across the Derwent Estuary in Tasmania, Australia. The AUV lands on suitable locations underwater, makes the chemical analysis and decides, based on the result, to move to a closer location should high concentrations of chemicals of interest be found, or to another, more distant location otherwise. Beyond environmental applications, these instruments have been applied in archaeology and in industrial process control. [Figures: Moessbauer spectra recorded on airborne particles (Total Suspended Particles) collected at Ilha do Boi, Vitoria, ES, Brazil; CSIRO's Autonomous Underwater Vehicle carrying a miniaturised XRF spectrometer for underwater chemistry.] Students involved in this project: Mr Jeremy Breen and Mr Andrew Davie. Collaborators: Dr. Greg Timms (CSIRO) and Dr. Robert Ollington (UTAS). The AUV is 1.2 m long.
NASA Astrophysics Data System (ADS)
Kiyan, Duygu; Rath, Volker; Delhaye, Robert
2017-04-01
The frequency- and time-domain airborne electromagnetic (AEM) data collected under the Tellus projects of the Geological Survey of Ireland (GSI) represent a wealth of information on the multi-dimensional electrical structure of Ireland's near-surface. Our project, funded by GSI under the framework of their Short Call Research Programme, aims to develop and implement inverse techniques based on various Bayesian methods for these densely sampled data. We have developed a highly flexible toolbox in the Python language for the one-dimensional inversion of AEM data along the flight lines. The computational core is an adapted frequency- and time-domain forward modelling core derived from the well-tested open-source code AirBeo, which was developed by CSIRO (Australia) and the AMIRA consortium. Three different inversion methods have been implemented: (i) Tikhonov-type inversion including optimal regularisation methods (Aster et al., 2012; Zhdanov, 2015), (ii) Bayesian MAP inversion in parameter and data space (e.g. Tarantola, 2005), and (iii) full Bayesian inversion with Markov Chain Monte Carlo (Sambridge and Mosegaard, 2002; Mosegaard and Sambridge, 2002), all including different forms of spatial constraints. The methods have been tested on synthetic and field data. This contribution will introduce the toolbox and present case studies on the AEM data from the Tellus projects.
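Of the three methods, the Tikhonov-type inversion is the simplest to sketch. Assuming a linearized 1D problem d = Jm with a first-difference smoothness operator L, the regularised solution minimises ||Jm - d||^2 + lam^2 ||Lm||^2. The code below is a generic illustration with synthetic matrices, not the toolbox's actual AirBeo-based implementation:

```python
import numpy as np

def tikhonov_inverse(J, d, lam):
    """Solve min ||J m - d||^2 + lam^2 ||L m||^2 for a 1D layered model,
    where L is a first-difference (roughness) operator."""
    n = J.shape[1]
    L = (np.eye(n) - np.eye(n, k=1))[:-1]    # (n-1) x n difference operator
    A = J.T @ J + lam**2 * (L.T @ L)         # regularised normal equations
    return np.linalg.solve(A, J.T @ d)

# Synthetic test: a smooth model recovered from noisy linear data.
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 10))                # stand-in sensitivity matrix
m_true = np.linspace(0.0, 1.0, 10)           # smooth "conductivity" profile
d = J @ m_true + 0.01 * rng.normal(size=20)  # data with small noise
m_est = tikhonov_inverse(J, d, lam=0.1)
```

The regularisation weight lam trades data fit against model roughness; the "optimal regularisation methods" cited (Aster et al., 2012) choose it systematically, e.g. via the discrepancy principle or the L-curve.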
Aljaryian, Rasha; Kumar, Lalit; Taylor, Subhashni
2016-10-01
The sunn pest, Eurygaster integriceps (Hemiptera: Scutelleridae), is an economically significant pest throughout Western Asia and Eastern Europe. This study was conducted to examine the possible risk posed by the influence of climate change on its spread. CLIMEX software was used to model its current global distribution. Future invasion potential was investigated using two global climate models (GCMs), CSIRO-Mk3.0 (CS) and MIROC-H (MR), under A1B and A2 emission scenarios for 2030, 2070 and 2100. Dry to temperate climatic areas favour sunn pests. The potential global range for E. integriceps is expected to extend further polewards between latitudes 60° N and 70° N. Northern Europe and Canada will be at risk of sunn pest invasion as cold stress boundaries recede under the emission scenarios of these models. However, current highly suitable areas, such as South Africa and central Australia, will contract where precipitation is projected to decrease substantially with increased heat stress. Estimating the sunn pest's potential geographic distribution and detecting its climatic limits can provide useful information for management strategies and allow biosecurity authorities to plan ahead and reduce the expected harmful economic consequences by identifying the new areas for pest invasion. © 2016 Society of Chemical Industry.
The Southern African Regional Science Initiative (SAFARI 2000). Dry-Season Campaign: An Overview
NASA Technical Reports Server (NTRS)
Swap, R. J.; Annegarn, H. J.; Suttles, J. T.; Haywood, J.; Hely, C.; Hobbs, P. V.; Holben, B. N.; Ji, J.; King, M. D.; Bhartia, P. K. (Technical Monitor)
2002-01-01
The Southern African Regional Science Initiative (SAFARI 2000) is an international science project investigating the southern African earth-atmosphere-human system. The experiment was conducted over a two-year period, March 1999 - March 2001. The dry season field campaign (August-September 2000) was the most intensive activity, involving over 200 scientists from 18 different nations. The main objectives of this campaign were to characterize and quantify the biogenic, pyrogenic and anthropogenic aerosol and trace gas emissions and their transport and transformations in the atmosphere, and to validate the NASA Earth Observing System (EOS) satellite Terra within a scientific context. Five aircraft, namely two South African Weather Service aircraft, the University of Washington CV-580, the UK Meteorological Office C-130 and the NASA ER-2, with different altitude capabilities, participated in the campaign. Additional airborne sampling of southern African air masses that had moved downwind of the subcontinent was conducted by CSIRO over Australia. Multiple observations were taken in various sectors for a variety of synoptic conditions. Flight missions were designed to maximize synchronous over-flights of the NASA Terra satellite platform above regional ground validation and science targets. Numerous smaller-scale ground validation activities took place throughout the region during the campaign period.
Long term performance of wearable transducer for motion energy harvesting
NASA Astrophysics Data System (ADS)
McGarry, Scott A.; Behrens, Sam
2010-04-01
Personal electronic devices such as cell phones, GPS and MP3 players have traditionally depended on battery energy storage technologies for operation. By harvesting energy from a person's motion, these devices may achieve greater run times without increasing the mass or volume of the electronic device. Through the use of a flexible piezoelectric transducer such as poly-vinylidene fluoride (PVDF), and integrating it into a person's clothing, it becomes a 'wearable transducer'. As the PVDF transducer is strained during the person's routine activities, it produces an electrical charge which can then be harvested to power personal electronic devices. Existing wearable transducers have shown great promise for personal motion energy harvesting applications. However, they are presently physically bulky and not ergonomic for the wearer. In addition, there is limited information on the energy harvesting performance for wearable transducers, especially under realistic conditions and for extended cyclic force operations - as would be experienced when worn. In this paper, we present experimental results for a wearable PVDF transducer using a person's measured walking force profile, which is then cycled for a prolonged period of time using an experimental apparatus. Experimental results indicate that after an initial drop in performance, the transducer energy harvesting performance does not substantially deteriorate over time, as less than 10% degradation was observed. Longevity testing is still continuing at CSIRO.
Using population genetic tools to develop a control strategy for feral cats (Felis catus) in Hawai'i
Hansen, H.; Hess, S.C.; Cole, D.; Banko, P.C.
2007-01-01
Population genetics can provide information about the demographics and dynamics of invasive species that is beneficial for developing effective control strategies. We studied the population genetics of feral cats on Hawai'i Island by microsatellite analysis to evaluate genetic diversity and population structure, assess gene flow and connectivity among three populations, identify potential source populations, characterise population dynamics, and evaluate sex-biased dispersal. High genetic diversity, low structure, and high number of migrants per generation supported high gene flow that was not limited spatially. Migration rates revealed that most migration occurred out of West Mauna Kea. Effective population size estimates indicated increasing cat populations despite control efforts. Despite high gene flow, relatedness estimates declined significantly with increased geographic distance and Bayesian assignment tests revealed the presence of three population clusters. Genetic structure and relatedness estimates indicated male-biased dispersal, primarily from Mauna Kea, suggesting that this population should be targeted for control. However, recolonisation seems likely, given the great dispersal ability that may not be inhibited by barriers such as lava flows. Genetic monitoring will be necessary to assess the effectiveness of future control efforts. Management of other invasive species may benefit by employing these population genetic tools. © CSIRO 2007.
Improvements to active material for VRLA batteries
NASA Astrophysics Data System (ADS)
Prengaman, R. David
In the past several years, there have been many developments in the materials for lead-acid batteries. Silver in grid alloys for high-temperature climates in SLI batteries has increased the silver content of the recycled lead stream. Concern about silver and other contaminants in lead for the active material for VRLA batteries led to the initiation of a study by ALABC at CSIRO. The study evaluated the effects of many different impurities on the hydrogen and oxygen evolution currents in float service for flooded and VRLA batteries at different temperatures and potentials. The study results increased the understanding of the effects of various impurities in lead for use in active material, as well as of possible performance and life improvements in VRLA batteries. Some elements thought to be detrimental have been found to be beneficial. Studies have now uncovered the effects of the beneficial elements, as well as of additives to both the positive and negative active material, in increasing battery capacity, extending life and improving recharge. Glass separator materials have also been re-examined in light of the impurities study. Old glass compositions may be revived to give improved battery performance via compositional changes to the glass chemistry. This paper reviews these new developments and outlines suggestions for improved battery performance based on unique impurities and additives.
Evaluation of East Asian climatology as simulated by seven coupled models
NASA Astrophysics Data System (ADS)
Jiang, Dabang; Wang, Huijun; Lang, Xianmei
2005-07-01
Using observation and reanalysis data throughout 1961-1990, the East Asian surface air temperature, precipitation and sea level pressure climatology as simulated by seven fully coupled atmosphere-ocean models, namely CCSR/NIES, CGCM2, CSIRO-Mk2, ECHAM4/OPYC3, GFDL-R30, HadCM3, and NCAR-PCM, is systematically evaluated in this study. It is indicated that the above models can successfully reproduce the annual and seasonal surface air temperature and precipitation climatology in East Asia, with relatively good performance for boreal autumn and annual mean. The models’ ability to simulate surface air temperature is more reliable than precipitation. In addition, the models can dependably capture the geographical distribution pattern of annual, boreal winter, spring and autumn sea level pressure in East Asia. In contrast, relatively large simulation errors are displayed when simulated boreal summer sea level pressure is compared with reanalysis data in East Asia. It is revealed that the simulation errors for surface air temperature, precipitation and sea level pressure are generally large over and around the Tibetan Plateau. No individual model is best in every aspect. As a whole, the ECHAM4/OPYC3 and HadCM3 performances are much better, whereas the CGCM2 is relatively poorer in East Asia. Additionally, the seven-model ensemble mean usually shows a relatively high reliability.
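The evaluation amounts to comparing each simulated climatological field with its observed or reanalysis counterpart on a common grid. A minimal sketch of two common skill measures (the paper's exact metrics are not specified here):

```python
import numpy as np

def evaluate(sim, ref):
    """Area-mean bias and RMSE of a simulated field against a reference
    (observation/reanalysis) field on the same grid."""
    err = np.asarray(sim, float) - np.asarray(ref, float)
    return err.mean(), np.sqrt((err ** 2).mean())

# Toy 2 x 2 grids of a seasonal-mean temperature field (degrees C).
bias, rmse = evaluate([[15.0, 15.0], [17.0, 15.0]],
                      [[14.0, 16.0], [16.0, 16.0]])
print(bias, rmse)  # 0.0 1.0
```

As the toy case shows, a near-zero bias can coexist with a large RMSE when local errors compensate, which is why spatial error patterns (e.g. over and around the Tibetan Plateau) matter as well as area means.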
Mitchell, Patrick J; O'Grady, Anthony P; Tissue, David T; White, Donald A; Ottenschlaeger, Maria L; Pinkard, Elizabeth A
2013-02-01
Plant survival during drought requires adequate hydration in living tissues and carbohydrate reserves for maintenance and recovery. We hypothesized that tree growth and hydraulic strategy determine the intensity and duration of the 'physiological drought', thereby affecting the relative contributions of loss of hydraulic function and carbohydrate depletion during mortality. We compared patterns in growth rate, water relations, gas exchange and carbohydrate dynamics in three tree species subjected to prolonged drought. Two Eucalyptus species (E. globulus, E. smithii) exhibited high growth rates and water use, resulting in rapid declines in water status and hydraulic conductance. In contrast, conservative growth and water relations in Pinus radiata resulted in longer periods of negative carbon balance and significant depletion of stored carbohydrates in all organs. The ongoing demand for carbohydrates from sustained respiration highlighted the role that the duration of drought plays in facilitating carbohydrate consumption. Two drought strategies were revealed, differentiated by plant regulation of water status: plants either maximized gas exchange but were exposed to low water potentials and rapid hydraulic dysfunction, or tightly regulated gas exchange at the cost of carbohydrate depletion. These findings provide evidence for a relationship between hydraulic regulation of water status and carbohydrate depletion during terminal drought. © 2012 CSIRO. New Phytologist © 2012 New Phytologist Trust.
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
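For context, the classic two-class change-in-ratio estimator (Paulik-Robson form) infers abundance from how a known removal shifts the ratio of two classes; the authors' variant uses survival ratios from an open CMR model instead, but the underlying change-in-ratio logic is similar. The sketch below shows the textbook form only, with invented numbers:

```python
def cir_abundance(p1, p2, removals_x, removals_total):
    """Classic two-class change-in-ratio estimator:
    N1 = (Rx - R * p2) / (p1 - p2), where p1 and p2 are the proportions of
    class x before and after a known removal of R animals (Rx of class x)."""
    if p1 == p2:
        raise ValueError("class ratio must change for the estimator to work")
    return (removals_x - removals_total * p2) / (p1 - p2)

# Toy example: class-x proportion falls from 0.50 to 0.40 after removing
# 80 animals, 60 of them class x -> initial abundance of class x.
print(round(cir_abundance(0.50, 0.40, 60, 80)))  # 280
```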
Carere, Jason; Colgrave, Michelle L; Stiller, Jiri; Liu, Chunji; Manners, John M; Kazan, Kemal; Gardiner, Donald M
2016-11-01
Plants produce a variety of secondary metabolites to defend themselves from pathogen attack, while pathogens have evolved to overcome plant defences by producing enzymes that degrade or modify these defence compounds. However, many compounds targeted by pathogen enzymes currently remain enigmatic. Identifying host compounds targeted by pathogen enzymes would enable us to understand the potential importance of such compounds in plant defence and to modify them to make them insensitive to pathogen enzymes. Here, a proof-of-concept metabolomics-based method was developed to discover plant defence compounds modified by pathogens, using two pathogen enzymes with known targets in wheat and tomato. Plant extracts treated with purified pathogen enzymes were subjected to LC-MS, and the relative abundances of metabolites before and after treatment were comparatively analysed. Using two enzymes from different pathogens, the in planta targets could be found by combining relatively simple enzymology with the power of untargeted metabolomics. Key to the method is dataset simplification based on natural isotope occurrence and statistical filtering, which can be scripted. The method presented here will aid in our understanding of plant-pathogen interactions and may lead to the development of new plant protection strategies. © 2016 CSIRO. New Phytologist © 2016 New Phytologist Trust.
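The "dataset simplification based on natural isotope occurrence" step can be sketched as follows: a genuine singly charged metabolite feature should be accompanied by a 13C isotopologue about 1.00336 Da heavier at a similar retention time, so features without such a partner can be discarded before statistical filtering. The feature values and tolerances below are illustrative, not taken from the paper:

```python
# Filter an untargeted LC-MS feature list to features that have a
# 13C isotope partner (~1.00336 Da heavier, similar retention time).
C13_DELTA = 1.00336  # 13C - 12C mass difference, Da

def has_isotope_partner(feature, features, mz_tol=0.005, rt_tol=0.1):
    """True if some other feature sits one 13C mass unit above `feature`
    at (nearly) the same retention time."""
    mz, rt = feature
    return any(abs(m - mz - C13_DELTA) <= mz_tol and abs(t - rt) <= rt_tol
               for m, t in features)

features = [(180.063, 5.2), (181.067, 5.2), (250.100, 7.9)]  # (m/z, RT min)
kept = [f for f in features if has_isotope_partner(f, features)]
print(kept)  # [(180.063, 5.2)]
```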
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching.
In MARVL we are developing a web-based open source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets with which to force a model and into which to nest it, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
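The setup steps the text lists can be pictured as building one configuration object per experiment, which a virtual-laboratory service then executes. The sketch below is purely illustrative: the field names, the "shoc" model identifier and the data-source labels are assumptions, not MARVL's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRunConfig:
    """One virtual-laboratory experiment: model, domain, period, data."""
    model: str                       # e.g. a community ocean or wave model
    region: tuple                    # (lon_min, lon_max, lat_min, lat_max)
    period: tuple                    # (start, end) ISO date strings
    forcing_sources: list = field(default_factory=list)
    observation_sources: list = field(default_factory=list)

    def steps(self):
        """Generic preparation steps, in the order the text lists them."""
        return ["define scales", "fetch forcing", "fetch observations",
                "generate run scripts", "run", "extract results"]

cfg = ModelRunConfig("shoc", (147.0, 149.0, -44.0, -42.0),
                     ("2010-01-01", "2010-12-31"),
                     forcing_sources=["ocean reanalysis"],
                     observation_sources=["AODN portal"])
print(cfg.steps()[0])  # define scales
```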
Discerning the biochemical stability of pyrogenic C in soils
NASA Astrophysics Data System (ADS)
De la Rosa, José M.; Paneque, Marina; Contreras-Bernal, Lidia; Miller, Ana Z.; Knicker, Heike
2016-04-01
The soil organic matter (SOM) constitutes approximately 2/3 of the global terrestrial C pool, corresponding to an estimated 4000 Pg to a depth of 3 m [1]; therefore, the dynamics of organic carbon (OC) in soils control a large part of the terrestrial C cycle. The term Pyrogenic Carbon (PyC) comprises the whole range of pyrogenic organic materials, from partly charred material through charcoal to soot produced during fire, as well as technical chars (biochars) produced by pyrolysis of biomass. The previously common assumption of PyC being inert has long been proven wrong [2]. In theory, the pyrogenic process confers on these materials a longer mean residence time in soils than their precursors, so the application of PyC in general, and particularly biochar, to soil is proposed as a valid approach to establish a significant, long-term sink for atmospheric carbon dioxide in terrestrial ecosystems [3]. Nevertheless, knowledge concerning the biochemical recalcitrance of PyC in soils is still limited. This study combines analysis by 13C solid-state Nuclear Magnetic Resonance spectroscopy (13C NMR), Field Emission Scanning Electron Microscopy (FESEM), analytical pyrolysis (Py-GC/MS) and CO2 emissions in incubated pots of burned and unburned soils as well as in biochar-amended and un-amended soils. Using this integrated approach we achieved a more complete understanding of the stability of different forms of PyC in the soil and the chemical changes occurring during aging. Significant differences were found in the stability of PyC, depending on the nature of the source material, the surficial properties of the PyC, the pyrolysis process, and the soil conditions during aging. Acknowledgements: The Marie Skłodowska-Curie actions (PCIG12-GA-2012-333784-Biocharisma project and PIEF-GA-2012-328689-DECAVE project), and the Spanish Ministry of Economy and Competitiveness (MINECO) (project PCGL2012-37041) are thanked for the financial support of the present study. 
References: [1] Hernández-Soriano M.C., Peña A., Mingorance MD., 2013. Env. Toxicol. Chem. 32(5), 1027-1032. [2] De la Rosa J.M., Knicker H., 2011. Soil Biol. Biochem. 43, 2368-2373. [3] Sohi S., Lopez-Capel E., Krull E., Bol R., 2009. Rep. No. 05/09. CSIRO. 1834-6618.
Upper-Ocean Heat Balance Processes and the Walker Circulation in CMIP5 Model Projections
NASA Technical Reports Server (NTRS)
Robertson, F. R.; Roberts, J. B.; Funk, C.; Lyon, B.; Ricciardulli, L.
2012-01-01
Considerable uncertainty remains as to the importance of mechanisms governing decadal and longer variability of the Walker Circulation, its connection to the tropical climate system, and prospects for tropical climate change in the face of anthropogenic forcing. Most contemporary climate models suggest that in response to elevated CO2 and a warmer but more stratified atmosphere, the required upward mass flux in tropical convection will diminish, along with the Walker component of the tropical mean circulation. Alternatively, there is also evidence to suggest that the shoaling and increased vertical stratification of the thermocline in the eastern Pacific will enable a muted SST increase there, preserving or even enhancing some of the dynamical forcing for the Walker cell flow. Over the past decade there have been observational indications of an acceleration in near-surface easterlies, a strengthened Pacific zonal SST gradient, and globally-teleconnected dislocations in precipitation. But is this evidence in support of an ocean dynamical thermostat process posited to accompany anthropogenic forcing, or just residual decadal fluctuations associated with variations in warm and cold ENSO events and other stochastic forcing? From a modeling perspective we try to make headway on this question by examining zonal variations in surface energy fluxes and dynamics governing tropical upper-ocean heat content evolution in the WCRP CMIP5 model projections. There is some diversity among model simulations; for example, the CCSM4 indicates net ocean warming over the Indo-Pacific region while the CSIRO model concentrates separate warming responses over the central Pacific and Indian Ocean regions. 
The models, as with observations, demonstrate strong local coupling between variations in column water vapor, downward surface longwave radiation and SST; but the spatial patterns of changes in the sign of this relationship differ among models and, for models as a whole, with observations. Our analysis focuses initially on probing the inter-model differences in energy fluxes / transports and Walker Circulation response to forcing. We then attempt to identify statistically the El Niño- / La Niña-related ocean heat content variability unique to each model and regress out the associated energy flux, ocean heat transport and Walker response on these shorter time scales for comparison to that of the anthropogenic signals.
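"Regressing out" the ENSO-related variability is, at its simplest, a linear operation: project each model's heat-content series onto an ENSO index and keep the residual. A minimal sketch with an idealised index (not the paper's actual statistical identification):

```python
import numpy as np

def regress_out(series, index):
    """Return the residual of `series` after removing the part linearly
    explained by `index` (plus a constant offset)."""
    series = np.asarray(series, float)
    X = np.column_stack([np.ones(len(index)), np.asarray(index, float)])
    beta, *_ = np.linalg.lstsq(X, series, rcond=None)  # least-squares fit
    return series - X @ beta                           # orthogonal residual

t = np.arange(120)                    # months
enso = np.sin(2 * np.pi * t / 48)     # idealised ENSO-like index
ohc = 0.01 * t + 0.5 * enso           # trend plus ENSO-congruent signal
resid = regress_out(ohc, enso)        # trend survives; ENSO part is removed
```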
Surveillance of Space in Australia
NASA Astrophysics Data System (ADS)
Newsam, G.
Australia's geography and technology base got it off to a flying start in the early days of surveillance of space, starting with CSIRO's first radio telescope in the 1940s and climaxing in NASA's establishment of station 43 in the Deep Space Network at Tidbinbilla in 1965. But Britain's exit from space and the subsequent closure of the Woomera launch range and associated space tracking facilities in the early 1970s saw the start of a long draw-down of capability. Programs such as CSIRO's radio astronomy telescopes, Electro-Optic Systems' adoption of laser technology for satellite laser ranging and tracking systems, and the exploration of the use of technology developed in Australia's over-the-horizon radar program for surveillance of space kept some interest in the problem alive, but there has been no serious national investment in the area for the last thirty years. Recently, however, increased awareness of the vulnerability of space systems and the need to include potential opponents' space capabilities in operations planning has led to a revival of interest in space situational awareness within the Australian Defence Organisation. While firm commitments to new systems must wait on the next Defence White Paper, due out at the end of 2007, and the policy directions it formally endorses, discussions have already started with the US on participating in the Space Surveillance Network (SSN) and developing a comprehensive space situational awareness capability. In support of these initiatives the Defence Science and Technology Organisation (DSTO) is drawing up an inventory of relevant Australian capabilities, technologies and activities: the paper will describe the findings of this inventory, and in particular local technologies and systems that might be deployed in Australia to contribute to the SSN. 
In the optical regime the available options are rather limited; they centre primarily on the satellite laser ranging technology developed by Electro-Optic Systems and operating in stations at Yarragadee, Western Australia and Mt Stromlo, Australian Capital Territory. Recently, however, Australia has also agreed to host a node of AFRL's Extended HANDS telescope network in Learmonth, Western Australia, and discussions are underway with researchers in Australian academia about also participating in this research program. In the RF regime, however, DSTO has substantial HF and microwave radar programs, elements of which could be readily adapted to surveillance of space. Proposals have already been developed internally within both programs for various forms of space surveillance systems, including both broad-area surveillance and imaging, along with some initial technology concept demonstrators. Recently, proposals have also been floated to substantially increase Australia's civilian space surveillance programs, including the Ionospheric Prediction Service's longstanding program to monitor the ionosphere and space weather, meteor radars and other systems. Finally, Australia's bid to host the international Square Kilometre Array radio telescope has already generated concrete commitments to establish several very substantial RF arrays in Western Australia that may also provide instruments of unprecedented sensitivity and resolution for surveillance of space. The paper will survey these technology development programs and associated progress on integrating them into a national program for space situational awareness.
NASA Astrophysics Data System (ADS)
Post, David
2010-05-01
In a water-scarce country such as Australia, detailed, accurate and reliable assessments of current and future water availability are essential in order to adequately manage the limited water resource. This presentation describes a recently completed study which provided an assessment of current water availability in Tasmania, Australia, and also determined how this water availability would be impacted by climate change and proposed catchment development by the year 2030. The Tasmania Sustainable Yields Project (http://www.csiro.au/partnerships/TasSY.html) assessed current water availability through the application of rainfall-runoff models, river models, and recharge and groundwater models. These were calibrated to streamflow records and parameterised using estimates of current groundwater and surface water extractions and use. Having derived a credible estimate of current water availability, the impacts of future climate change on water availability were determined through deriving changes in rainfall and potential evapotranspiration from 15 IPCC AR4 global climate models. These changes in rainfall were then dynamically downscaled using the CSIRO-CCAM model over the relatively small study area (50,000 square km). A future climate sequence was derived by modifying the historical 84-year climate sequence based on these changes in rainfall and potential evapotranspiration. This future climate sequence was then run through the rainfall-runoff, river, recharge and groundwater models to give an estimate of water availability under future climate. To estimate the impacts of future catchment development on water availability, the models were modified and re-run to reflect projected increases in development. Specifically, outputs from the rainfall-runoff and recharge models were reduced over areas of projected future plantation forestry. 
Conversely, groundwater recharge was increased over areas of new irrigated agriculture, and new extractions of water for irrigation were implemented in the groundwater and river models. Results indicate that historical average water availability across the project area was 21,815 GL/year. Of this, 636 GL/year of surface water and 38 GL/year of groundwater are currently extracted for use. By 2030, rainfall is projected to decrease by an average of 3% over the project area. This decrease in rainfall and the concurrent increase in potential evapotranspiration lead to a decrease in water availability of 5% by 2030. As a result of lower streamflows, under current cease-to-take rules, currently licensed extractions are projected to decrease by 3% (19 GL/year). This, however, is offset by an additional 120 GL/year of extractions for proposed new irrigated agriculture. These new extractions, along with the increase in commercial forest plantations, lead to a reduction in total surface water availability of 1% in addition to the 5% reduction due to climate change. Results from this study are being used by the Tasmanian and Australian governments to guide the development of a sustainable irrigated agriculture industry in Tasmania. In part, this is necessary to offset the loss of irrigated agriculture from the southern Murray-Darling Basin, where climate-change-induced reductions in rainfall are projected to be far worse.
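The headline figures in the abstract above are internally consistent, and the arithmetic can be checked directly (all values taken from the abstract; variable names are ours):

```python
# Cross-check the reported Tasmanian water-balance figures.
total_availability = 21815   # GL/year, historical average
surface_extraction = 636     # GL/year currently extracted
groundwater_extraction = 38  # GL/year currently extracted

# Projected 3% cut to licensed surface-water extractions under 2030 climate
cut = round(surface_extraction * 0.03)   # 3% of 636 GL/year

# Proposed new irrigation extractions more than offset that cut
new_irrigation = 120                     # GL/year
net_change = new_irrigation - cut        # net increase in extractions
```

The 3% reduction of the 636 GL/year licensed volume indeed corresponds to the reported 19 GL/year, so total extractions still rise by about 101 GL/year once the proposed irrigation developments are included.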
The Soil Spectroscopy Group and the development of a global soil spectral library
NASA Astrophysics Data System (ADS)
Rossel, R. Viscarra Rossel; Soil Spectroscopy Group
2009-04-01
This collaboration aims to develop a global soil spectral library and to establish a community of practice for soil spectroscopy. This will help progress soil spectroscopy from an almost purely research tool to a more widely adopted and useful technique for soil analysis, proximal soil sensing, soil monitoring and digital soil mapping. The initiative started in April 2008 with a proposal for the project to be conducted in a number of stages to investigate the following topics: global soil diversity and variation can be characterised using diffuse reflectance spectra; soil spectral calibrations can be used to predict soil properties globally; and soil spectroscopy can be a useful tool for digital soil mapping. Currently, the soil spectral library is being developed using legacy soil organic carbon (OC) and clay content data and vis-NIR (350-2500 nm) spectra, but in future we aim to include other soil properties and mid-IR (2500-25000 nm) spectra. The group already has more than 40 collaborators from six continents and 20 countries, and the library consists of 5223 spectra from 43 countries. The library accounts for spectra from only approximately 22% of the world's countries, some of which are poorly represented with only very few spectra. We would like to encourage participation from as many countries as possible; in particular, we would like contributions from countries in Central and South America, Mexico, Canada, Russia and countries in Eastern Europe, Africa and Asia. We are missing a lot of countries, and for some, e.g. China, we have only very few data! Do you want to join the group and contribute spectra to the global library? The requirements for contributing spectra to the global library are as follows: spectra collected in the 350-2500 nm range every 1 nm; at least soil OC and clay content data, but also any other soil chemical, physical, biological and mineralogical data, noting which analytical techniques were used; and coordinates (in WGS84 format) for each sample. 
Soil classification for each sample, preferably using FAO-WRB (FAO, 1998). Future access to soil samples for mid-IR scanning. If you do not have all of the requested metadata for every sample, but would like to contribute to the library, please let us know. Also, if you do not have access to a spectrometer but have a good set of soils that you would like to contribute to the library, we can arrange to have the soils scanned at CSIRO in Australia or in a collaborating institution nearer to you. We have done this with a number of countries already. There are leading collaborators in each continent: Bo Stenberg in Europe, David Brown in USA, Alexandre Dematte in South America, Keith Shepherd in Africa, Eyal Ben-Dor in the Middle East and Asia and Raphael Viscarra Rossel in Oceania and Asia. To make this work we need participation from as many people around the world as possible. If you are interested in contributing spectra to the global library please send me an email (raphael.viscarra-rossel@csiro.au) and join the group!
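The submission requirements listed above can be expressed as a simple validation check on a contribution record. This is only a sketch: the field names (`wavelengths_nm`, `organic_carbon`, etc.) are hypothetical, not part of any actual Soil Spectroscopy Group submission format.

```python
def valid_contribution(rec):
    """Check a spectral-library record against the stated requirements:
    350-2500 nm spectra at 1 nm steps, OC and clay data, WGS84 coords."""
    wl = rec.get("wavelengths_nm", [])
    has_range = bool(wl) and wl[0] == 350 and wl[-1] == 2500
    has_step = all(b - a == 1 for a, b in zip(wl, wl[1:]))
    has_props = "organic_carbon" in rec and "clay_content" in rec
    lat, lon = rec.get("lat"), rec.get("lon")
    has_coords = (lat is not None and lon is not None
                  and -90 <= lat <= 90 and -180 <= lon <= 180)
    return has_range and has_step and has_props and has_coords

# A hypothetical complete record
record = {
    "wavelengths_nm": list(range(350, 2501)),  # 350-2500 nm, 1 nm steps
    "organic_carbon": 1.2,   # %
    "clay_content": 23.0,    # %
    "lat": -35.3, "lon": 149.1,
}
```

Records missing the required spectral range or the OC/clay data would fail the check, which mirrors the group's note that incomplete metadata should be discussed with the coordinators rather than silently submitted.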
COSPAR Round Table on `How can GEO and COSPAR scientific community work together?'
NASA Astrophysics Data System (ADS)
Gobron, Nadine; Ollier, Gilles
The Group on Earth Observations (GEO) is coordinating efforts to build a Global Earth Observation System of Systems, or GEOSS. GEO is a voluntary partnership of governments and international organizations. It provides a framework within which these partners can develop new projects and coordinate their strategies and investments. The Science and Technology Committee of GEO is working to strengthen this role by encouraging the wider scientific and technology community to participate as contributors to, and beneficiaries of, a sustained GEOSS. The proposed round table aims to discuss, using specific examples, how scientists and GEO are currently working together, and how space agencies and data providers contribute to GEO. Round table participants are welcome to present their own vision of future actions to improve, if necessary, the relations between contributors and GEO. Meeting participants will also be offered the opportunity to intervene and ask questions of the panel. Moderators: • Dr. Nadine Gobron - EC-JRC - Chair of Commission A • Gilles Ollier - DG Research. Participants: • Prof. José Achache - Director of GEO • Prof. Maurice Bonnet - President of COSPAR • Dr. Tamatsu Igarashi - JAXA EORC Manager • Dr. Stuart Minchin (CSIRO and member of GEO STC) (TBC) • Prof. Berrien Moore - Executive Director of Climate Central • Dr. Diane E. Wickland - NASA Headquarters • Dr. Stephen Briggs - ESA (TBC)
Estimation of the climate change impact on a catchment water balance using an ensemble of GCMs
NASA Astrophysics Data System (ADS)
Reshmidevi, T. V.; Nagesh Kumar, D.; Mehrotra, R.; Sharma, A.
2018-01-01
This work evaluates the impact of climate change on the water balance of a catchment in India. Rainfall and hydro-meteorological variables for the current period (20C3M scenario, 1981-2000) and two future periods, the middle of the 21st century (2046-2065) and the end of the century (2081-2100), are simulated using Modified Markov Model-Kernel Density Estimation (MMM-KDE) and k-nearest neighbor downscaling models. Climate projections from an ensemble of 5 GCMs (MPI-ECHAM5, BCCR-BCM2.0, CSIRO-Mk3.5, IPSL-CM4, and MRI-CGCM2) are used in this study. Hydrologic simulations for the current as well as future climate scenarios are carried out using the Soil and Water Assessment Tool (SWAT) integrated with ArcGIS (ArcSWAT v.2009). The results show a marginal reduction in runoff ratio, annual streamflow and groundwater recharge towards the end of the century. Increased temperature and evapotranspiration project an increase in irrigation demand towards the end of the century. Rainfall projections for the future show a marginal increase in annual average rainfall. Short and moderate wet spells are projected to decrease, whereas short and moderate dry spells are projected to increase in the future. The projected reduction in streamflow and groundwater recharge, along with the increase in irrigation demand, is likely to aggravate water stress in the region under the future scenario.
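The k-nearest neighbor downscaling mentioned above can be sketched in miniature: for a GCM-scale predictor value on a given day, pick an analogue day from the historical record among the k closest predictor values and adopt its observed fine-scale rainfall. The data and field names here are hypothetical; the real MMM-KDE/k-NN scheme uses multivariate predictors and conditional resampling.

```python
import random

def knn_analogue(target, history, k=3, seed=0):
    """Pick one of the k historical days whose predictor value is
    closest to the target; return its observed (fine-scale) rainfall."""
    random.seed(seed)
    ranked = sorted(history, key=lambda d: abs(d["predictor"] - target))
    return random.choice(ranked[:k])["rainfall"]

# Hypothetical history: coarse predictor vs. observed local rainfall
history = [{"predictor": p, "rainfall": r}
           for p, r in [(10.0, 0.0), (12.5, 3.2), (13.0, 5.1),
                        (15.0, 12.4), (18.0, 20.0)]]
downscaled = knn_analogue(12.8, history, k=3)
```

The random choice among the k analogues is what lets the method reproduce day-to-day variability rather than always selecting the single nearest neighbour.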
Shabani, Farzin; Kumar, Lalit
2013-01-01
Global climate model outputs involve uncertainties in prediction, which can be reduced by identifying agreement between the outputs of different models, covering the assumptions included in each. Fusarium oxysporum f.sp. is an invasive pathogen that poses a risk to date palm cultivation, among other crops. Therefore, in this study, the future distribution of invasive Fusarium oxysporum f.sp., confirmed by the CSIRO-Mk3.0 (CS) and MIROC-H (MR) GCMs, was modeled and combined with the future distribution of date palm predicted by the same GCMs, to identify areas suitable for date palm cultivation with different risk levels of invasive Fusarium oxysporum f.sp. for 2030, 2050, 2070 and 2100. Results showed that 40%, 37%, 33% and 28% of areas projected to become highly conducive to date palm are under high risk from its lethal fungus, compared with 37%, 39%, 43% and 42% under low risk, for the chosen years respectively. Our study also indicates that areas with marginal risk will be limited to 231, 212, 186 and 172 million hectares by 2030, 2050, 2070 and 2100. The study further demonstrates that refining CLIMEX outputs with the results of different GCMs, for species in symbiotic or parasitic relationships, makes the predictions more robust, rather than producing hypothetical findings limited purely to publication.
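The model-agreement idea described above, combining the host's projected suitability with the pathogen's projected risk, can be sketched as set operations on grid cells. The grids below are hypothetical toy data, not CLIMEX output.

```python
# Grid cells projected suitable for date palm under each GCM
# (CS = CSIRO-Mk3.0, MR = MIROC-H), plus cells where the pathogen
# is projected as high risk. Coordinates are toy (row, col) indices.
cs_suitable = {(0, 0), (0, 1), (1, 1), (2, 2)}
mr_suitable = {(0, 1), (1, 1), (1, 2), (2, 2)}
fungus_high_risk = {(1, 1), (2, 2)}

# Agreement between the two GCMs reduces single-model uncertainty
agreed = cs_suitable & mr_suitable

# Split the agreed cultivation area by pathogen risk level
high_risk = agreed & fungus_high_risk
low_risk = agreed - fungus_high_risk
```

Only cells both models agree on are retained, and those are then partitioned into high- and low-risk cultivation zones, mirroring the percentage breakdowns reported in the abstract.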
High resolution radio imaging study of the Pulsar Wind Nebula MSH 15-52
NASA Astrophysics Data System (ADS)
Leung, W.-Y.; Ng, C.-Y.
2016-06-01
We present a new high-resolution radio imaging study of the pulsar wind nebula (PWN) MSH 15-52, also dubbed "the hand of God", with Australia Telescope Compact Array observations. The system is powered by the young and energetic radio pulsar B1509-58, which has a high spin-down luminosity of Ė = 2 × 10^37 erg/s. Previous X-ray images have shown that the PWN has a complex hand-shaped morphology extending over 10 pc, with features including jets, an arc, filaments and enhanced emission knots in the HII region RCW 89. The new 6 cm and 3 cm radio images show a different morphology from the X-ray counterpart. No radio counterpart of the X-ray jet is detected; instead we found enhanced emission in a sheath surrounding the jet. Additional small-scale features, including a polarized linear filament next to the pulsar, have also been discovered. Our polarisation measurements show that the intrinsic orientation of the magnetic field aligns with the sheath. Finally, spectral analysis results indicate a steep spectrum for the system, which is rather unusual among PWNe. Implications of these findings will be discussed. The Australia Telescope Compact Array is part of the Australia Telescope National Facility, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. This work is supported by an ECS grant under HKU 709713P.
Hook, S E
2010-12-01
The advent of any new technology is typically met with great excitement. So it was a few years ago, when the combination of advances in sequencing technology and the development of microarray technology made measurements of global gene expression in ecologically relevant species possible. Many of the review papers published around that time promised that these new technologies would revolutionize environmental biology as they had revolutionized medicine and related fields. A few years have passed since these technological advancements were made, and microarray studies in non-model fish species have been adopted in many laboratories internationally. Has the relatively widespread adoption of this technology really revolutionized the fields of environmental biology, including ecotoxicology, aquaculture and ecology, as promised? Or have these studies merely become a novelty and a potential distraction for scientists addressing environmentally relevant questions? In this review, the promises made in early review papers, in particular about the advances that the use of microarrays would enable, are summarized; these claims are compared with the results of recent studies to determine whether the forecasted changes have materialized. Some applications, as discussed in the paper, have been realized and have led to advances in their field; others are still under development. © 2010 CSIRO. Journal of Fish Biology © 2010 The Fisheries Society of the British Isles.
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2017-03-01
Neoleucinodes elegantalis is one of the major insect pests of Solanum lycopersicum. Currently, N. elegantalis is present only in the Americas and the Caribbean, and is a threat in the world's largest S. lycopersicum-producing countries. Given its potential impact on agriculture, the effect of climate change on insect invasions must be a concern. At present, no research exists regarding the effects of climatic change on the risk level of N. elegantalis. The purpose of this study was to develop a model for S. lycopersicum and N. elegantalis, utilizing CLIMEX to determine risk levels of N. elegantalis in open-field S. lycopersicum cultivation at present and under projected climate change, using the global climate model CSIRO-Mk3.0. Large areas are projected to be suitable for N. elegantalis and optimal for open-field S. lycopersicum cultivation at the present time. However, in the future these areas will become unsuitable for both species. Conversely, other regions may in the future become optimal for open-field S. lycopersicum cultivation, with varying risk levels for N. elegantalis. The risk level results presented here provide a useful tool for designing strategies to prevent the introduction and establishment of N. elegantalis in open-field S. lycopersicum cultivation. © 2016 Society of Chemical Industry.
Effects of climate change on hydrology and hydraulics of Qu River Basin, East China.
NASA Astrophysics Data System (ADS)
Gao, C.; Zhu, Q.; Zhao, Z.; Pan, S.; Xu, Y. P.
2015-12-01
The impacts of climate change on regional hydrological extreme events have attracted much attention in recent years. This paper aims to provide a general overview of changes in future runoff and water levels in the Qu River Basin, upper reaches of the Qiantang River, East China, by combining future climate scenarios, a hydrological model and a 1D hydraulic model. The outputs of four GCMs (BCC, BNU, CanESM and CSIRO) under two scenarios, RCP4.5 and RCP8.5, for 2021-2050 are chosen to represent future climate change projections. The LARS-WG statistical downscaling method is used to downscale the coarse GCM outputs and generate 50 years of synthetic precipitation and maximum and minimum temperatures to drive the GR4J hydrological model and the 1D hydraulic model for the baseline period 1971-2000 and the future period 2021-2050. Finally, the POT (Peaks Over Threshold) method is applied to analyze the change in extreme events in the study area. The results show that design runoffs and water levels all indicate an increasing trend in the future period for the Changshangang River, Jiangshangang River and Qu River in most cases, especially for small return periods (≤ 20 years). For the Qu River the increase is larger, which suggests that the risk of flooding will probably become greater and appropriate adaptation measures need to be taken.
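The POT extraction step named above can be sketched as: keep exceedances above a chosen threshold, then decluster so that nearby exceedances count as one independent event. The declustering rule and data below are illustrative assumptions, not the study's actual configuration.

```python
def peaks_over_threshold(series, threshold, min_gap=3):
    """Return (index, value) pairs of peaks exceeding `threshold`,
    keeping only the largest peak within any `min_gap` window
    (a simple declustering rule to ensure event independence)."""
    exceed = [(i, v) for i, v in enumerate(series) if v > threshold]
    peaks = []
    for i, v in exceed:
        if peaks and i - peaks[-1][0] < min_gap:
            if v > peaks[-1][1]:       # same cluster: keep the larger peak
                peaks[-1] = (i, v)
        else:
            peaks.append((i, v))       # new independent event
    return peaks

# Hypothetical daily flow series (m^3/s)
flows = [120, 310, 290, 150, 90, 400, 380, 410, 100, 330]
events = peaks_over_threshold(flows, threshold=300)
```

The retained exceedances would then typically be fitted with an extreme-value distribution to estimate design flows for the return periods discussed above.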
Observations of Sea Surface Mean Square Slope During the Southern Ocean Waves Experiment
NASA Technical Reports Server (NTRS)
Walsh, E. J.; Vandemark, D. C.; Hines, D. E.; Banner, M. L.; Chen, W.; Swift, R. N.; Scott, J. F.; Jensen, J.; Lee, S.; Fandry, C.
1999-01-01
For the Southern Ocean Waves Experiment (SOWEX), conducted in June 1992 out of Hobart, Tasmania, the 36 GHz (8.3 mm) NASA Scanning Radar Altimeter (SRA) was shipped to Australia and installed on a CSIRO Fokker F-27 research aircraft instrumented to make comprehensive surface layer measurements of air-sea interaction fluxes. The sea surface mean square slope (mss), which is predominantly caused by the short waves, was determined from the backscattered power falloff with incidence angle measured by the SRA in the plane normal to the aircraft heading. On each flight, data were acquired at 240 m altitude while the aircraft was in a 7 deg roll attitude, interrogating off-nadir incidence angles from -15 deg through nadir to +29 deg. The aircraft turned azimuthally through 810 deg in this attitude, mapping the azimuthal dependence of the backscattered power falloff with incidence angle. Two sets of turning data were acquired on each day, before and after the aircraft measured wind stress at low altitude (12 m to 65 m). Wave topography and backscattered power for mss were also acquired during those level flight segments whenever the aircraft altitude was above the SRA minimum range of 35 m. A unique feature of this experiment was the use of a nadir-directed low-gain horn antenna (35 deg beamwidth) to acquire azimuthally integrated backscattered power data versus incidence angle before and after the turn data.
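The retrieval of mss from the backscattered power falloff with incidence angle can be illustrated under the standard geometric-optics assumption of a Gaussian slope distribution, where ln(σ0 cos⁴θ) falls linearly with tan²θ and the mss is the negative inverse of the slope. This is a generic textbook relation, not necessarily the exact SRA processing; the data below are synthetic.

```python
import math

def fit_mss(theta_deg, sigma0):
    """Estimate mean square slope from backscatter falloff with
    incidence angle, assuming geometric optics with Gaussian slopes:
    ln(sigma0 * cos^4(theta)) = const - tan^2(theta) / mss."""
    x = [math.tan(math.radians(t)) ** 2 for t in theta_deg]
    y = [math.log(s * math.cos(math.radians(t)) ** 4)
         for t, s in zip(theta_deg, sigma0)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return -1.0 / slope

# Synthetic falloff generated with a known mss of 0.04
true_mss = 0.04
angles = [0, 5, 10, 15, 20, 25]   # incidence angles, degrees
sig = [math.exp(-math.tan(math.radians(t)) ** 2 / true_mss)
       / math.cos(math.radians(t)) ** 4 for t in angles]
```

Scanning the fit over the azimuthal turns, as in the experiment, would then map the directional dependence of the short-wave slope variance.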
NASA Astrophysics Data System (ADS)
Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.
2016-04-01
Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy in improving streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. Synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
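An objective function built on the RMSD between simulated and remotely sensed ET, of the kind assessed above, can be sketched as follows. The `model` here is a one-parameter stand-in, not AWRA-L, and all names are hypothetical; in the study this score would be minimised by SCE-UA.

```python
import math

def rmsd(sim, obs):
    """Root mean square deviation between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim))

def objective(params, forcing, observed_et, model):
    """Calibration objective: run the model with `params` and score the
    simulated ET against the remotely sensed ET series."""
    simulated_et = model(params, forcing)
    return rmsd(simulated_et, observed_et)

# Toy stand-in model: ET scales linearly with forcing via one parameter
def toy_model(p, f):
    return [p[0] * x for x in f]

forcing = [1.0, 2.0, 3.0]
obs_et = [0.5, 1.0, 1.5]
score = objective([0.5], forcing, obs_et, toy_model)
```

A calibration algorithm would search parameter space for the value minimising `score`; here the true parameter 0.5 yields a perfect fit.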
Aquifer characterisation in East Timor, with ground TEM
NASA Astrophysics Data System (ADS)
Ley-Cooper, A.
2011-12-01
An assessment of Climate Change Impacts on Groundwater Resources in East Timor, led by Geoscience Australia, is aimed at assisting East Timor's government to better understand and manage their groundwater resources. From the currently known information, most aquifers in Timor-Leste are recharged by rainfall during the wet season. There is a concern that without regular recharge, the stored groundwater capacity will decrease. Timor's population increase has caused a higher demand for groundwater, which is currently being met by regulated pumping bores tapped into deep aquifers, plus the sprouting of unregulated spear-point bores in the shallow aquifers. Both groundwater recharge and aquifer morphology need to be better understood in order to ensure supply and so that groundwater can be managed for the future. Current weather patterns are expected to change, and this could cause longer periods of drought or more intense rainfall, which in turn would affect the availability and quality of groundwater. Saltwater intrusions pose a threat to the low-lying aquifers as sea level rises. Australia's CSIRO has undertaken a series of hydrogeophysical investigations employing ground TEM to assist in the characterisation of three aquifers near Dili, Timor-Leste's capital. Interpreting groundwater chemistry and dating jointly with EM data has enhanced the understanding of the aquifers' architecture and groundwater quality, and helped identify potential risks of seawater intrusion.
Global Potential for Hydro-generated Electricity and Climate Change Impact
NASA Astrophysics Data System (ADS)
Zhou, Y.; Hejazi, M. I.; Leon, C.; Calvin, K. V.; Thomson, A. M.; Li, H. Y.
2014-12-01
Hydropower is a dominant renewable energy source at the global level, accounting for more than 15% of the world's total power supply. It is also very vulnerable to climate change. Improved understanding of climate change impacts on hydropower can help develop adaptation measures to increase the resilience of the energy system. In this study, we developed a comprehensive estimate of global hydropower potential using runoff and streamflow data derived from a global hydrologic model with a river routing sub-model, along with turbine technology performance, cost assumptions, and environmental considerations (Figure 1). We find that hydropower has the potential to supply a significant portion of the world's energy needs, although this potential varies substantially by region. Resources in a number of countries, e.g. Russia and Indonesia, exceed the total current demand for electricity several times over. A sensitivity analysis indicates that hydropower potential can be highly sensitive to a number of parameters, including designed flow for capacity, cost and financing, turbine efficiency, and streamflow. The climate change impact on hydropower potential was evaluated using runoff outputs from 4 climate models (HadCM3, PCM, CGCM2, and CSIRO2). The impact shows large variation not only by region but also by climate model, which demonstrates the importance of incorporating climate change into infrastructure planning at the regional level despite the existing uncertainties.
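The core quantity behind such potential estimates is the theoretical power available from a given flow and head. The standard formula P = ρgQHη is sketched below; it is the textbook relation, not necessarily the exact parameterisation used in the study, and the example numbers are hypothetical.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3s, head_m, efficiency=0.9):
    """Hydropower P = rho * g * Q * H * eta, returned in MW.

    flow_m3s   -- design discharge through the turbine, m^3/s
    head_m     -- hydraulic head, m
    efficiency -- overall turbine/generator efficiency (dimensionless)
    """
    return RHO * G * flow_m3s * head_m * efficiency / 1e6

# Example: 100 m^3/s through a 50 m head at 90% efficiency
p = hydro_power_mw(100.0, 50.0)
```

The sensitivity noted in the abstract follows directly from this form: potential scales linearly with design flow, head and efficiency, so uncertainty in any of them propagates proportionally into the estimate.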
Components for Maintaining and Publishing Earth Science Vocabularies
NASA Astrophysics Data System (ADS)
Cox, S. J. D.; Yu, J.
2014-12-01
Shared vocabularies are an important aid to geoscience data interoperability. Many organizations maintain useful vocabularies, with Geological Surveys having a particularly long history of vocabulary and lexicon development. However, the mode of publication is heterogeneous, ranging from PDFs and HTML web pages, spreadsheets and CSV, through various user interfaces and APIs. Update and maintenance ranges from tightly governed and externally opaque, through various community processes, all the way to crowd-sourcing ('folksonomies'). A general expectation, however, is for greater harmonization and vocabulary re-use. In order to be successful this requires (a) standardized content formalization and APIs and (b) transparent content maintenance and versioning. We have been trialling a combination of software dealing with registration, search and linking. SKOS is designed for formalizing multi-lingual, hierarchical vocabularies, and has been widely adopted in the earth and environmental sciences. SKOS is an RDF vocabulary, for which SPARQL is the standard low-level API. However, for interoperability between SKOS vocabulary sources, a SKOS-based API (i.e. one based on the SKOS predicates prefLabel, broader, narrower, etc.) is required. We have developed SISSvoc for this purpose, and used it to deploy a number of vocabularies on behalf of the IUGS, ICS, NERC, OGC, the Australian Government, and CSIRO projects. SISSvoc Search provides a simple search UI on top of one or more SISSvoc sources. Content maintenance is composed of many elements, including content formalization, definition update, and mappings to related vocabularies. Typically there is a degree of expert judgement required. In order to give users confidence, two requirements are paramount: (i) once published, a URI that denotes a vocabulary item must remain dereferenceable; (ii) the history and status of the content denoted by a URI must be available. 
These requirements match the standard 'registration' paradigm which is implemented in the Linked Data Registry, which is currently used by WMO and the UK Environment Agency for publication of vocabularies. Together, these components provide a powerful and flexible system for providing earth science vocabularies for the community, consistent with semantic web and linked-data principles.
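The SKOS predicates named above (prefLabel, broader, narrower) define a simple concept hierarchy that an API like SISSvoc traverses. A minimal in-memory sketch follows; the concept scheme is invented for illustration and is not drawn from any deployed vocabulary, and a real service would issue SPARQL against an RDF store rather than walk Python dicts.

```python
# Minimal in-memory model of a SKOS concept scheme, illustrating the
# prefLabel/broader/narrower relations a SKOS-based API exposes.
concepts = {
    "rock":    {"prefLabel": "rock", "broader": None},
    "igneous": {"prefLabel": "igneous rock", "broader": "rock"},
    "basalt":  {"prefLabel": "basalt", "broader": "igneous"},
}

def narrower(uri):
    """Concepts whose skos:broader link points at `uri`."""
    return [c for c, d in concepts.items() if d["broader"] == uri]

def broader_path(uri):
    """Walk skos:broader links from `uri` up to the top concept."""
    path = []
    while uri is not None:
        path.append(uri)
        uri = concepts[uri]["broader"]
    return path
```

Exposing exactly these traversals behind stable, dereferenceable URIs is what makes independently maintained vocabularies interoperable.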
Towards seamless workflows in agile data science
NASA Astrophysics Data System (ADS)
Klump, J. F.; Robertson, J.
2017-12-01
Agile workflows are a response to projects with requirements that may change over time. They prioritise rapid and flexible responses to change, preferring to adapt to changes in requirements rather than predict them before a project starts. This suits the needs of research very well, because research is inherently agile in its methodology. The adoption of agile methods has made collaborative data analysis much easier in a research environment fragmented across institutional data stores, HPC, personal and lab computers and, more recently, cloud environments. Agile workflows use tools that share a common worldview: in an agile environment, there may be more than one valid version of data, code or environment in play at any given time. All of these versions need references and identifiers. For example, a team of developers following the git-flow conventions (github.com/nvie/gitflow) may have several active branches, one for each strand of development. These workflows allow rapid and parallel iteration while maintaining identifiers pointing to individual snapshots of data and code, and allowing rapid switching between strands. In contrast, the current focus of versioning in research data management is geared towards managing data for reproducibility and long-term preservation of the record of science. While both are important goals in the persistent curation domain of the institutional research data infrastructure, current tools emphasise planning over adaptation and can introduce unwanted rigidity by insisting on a single valid version or point of truth. In the collaborative curation domain of a research project, things are more fluid. However, there is no equivalent to the "versioning iso-surface" of the git protocol for the management and versioning of research data. 
At CSIRO we are developing concepts and tools for the agile management of software code and research data for virtual research environments, based on our experiences of actual data analytics projects in the geosciences. We use code management that allows researchers to interact with the code through tools like Jupyter Notebooks while data are held in an object store. Our aim is an architecture allowing seamless integration of code development, data management, and data processing in virtual research environments.
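The versioning idea above can be sketched with a content-addressed object store: every snapshot of a dataset gets a stable identifier derived from its bytes, so several valid versions can coexist in parallel, much like commits on parallel git branches. This is a minimal illustrative sketch under assumed interfaces, not CSIRO's actual implementation; the `ObjectStore` class and the toy payloads are invented for the example.

```python
import hashlib
import json

class ObjectStore:
    """Toy in-memory object store keyed by content hash (SHA-256)."""

    def __init__(self):
        self._objects = {}

    def put(self, payload: bytes) -> str:
        # The identifier is derived from the content itself, so storing the
        # same bytes twice yields the same key (idempotent snapshots).
        key = hashlib.sha256(payload).hexdigest()
        self._objects[key] = payload
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = ObjectStore()
v1 = store.put(json.dumps({"temp": [21.0, 21.4]}).encode())
v2 = store.put(json.dumps({"temp": [21.0, 21.4, 22.1]}).encode())

# Both versions remain addressable in parallel; nothing forces a single
# "point of truth" while analysis strands are still active.
assert v1 != v2
```

Because identifiers are functions of content, independently working collaborators who produce the same data arrive at the same identifier, which is one way to reconcile parallel strands without a central version counter.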
NASA Astrophysics Data System (ADS)
Rotstayn, L. D.; Jeffrey, S. J.; Collier, M. A.; Dravitzki, S. M.; Hirst, A. C.; Syktus, J. I.; Wong, K. K.
2012-07-01
We use a coupled atmosphere-ocean global climate model (CSIRO-Mk3.6) to investigate the drivers of trends in summer rainfall and circulation in the vicinity of northern Australia. As part of the Coupled Model Intercomparison Project Phase 5 (CMIP5), we perform a 10-member 21st century ensemble driven by Representative Concentration Pathway 4.5 (RCP4.5). To investigate the roles of different forcing agents, we also perform multiple 10-member ensembles of historical climate change, which are analysed for the period 1951-2010. The historical runs include ensembles driven by "all forcings" (HIST), all forcings except anthropogenic aerosols (NO_AA) and forcing only from long-lived greenhouse gases (GHGAS). Anthropogenic aerosol-induced effects in a warming climate are calculated from the difference of HIST minus NO_AA. CSIRO-Mk3.6 simulates a strong summer rainfall decrease over north-western Australia (NWA) in RCP4.5, whereas simulated trends in HIST are weakly positive (but insignificant) during 1951-2010. The weak rainfall trends in HIST are due to compensating effects of different forcing agents: there is a significant decrease in GHGAS, offset by an aerosol-induced increase. Observations show a significant increase of summer rainfall over NWA during the last few decades. The large magnitude of the observed NWA rainfall trend is not captured by 440 unforced 60-yr trends calculated from a 500-yr pre-industrial control run, even though the model's decadal variability appears to be realistic. This suggests that the observed trend includes a forced component, despite the fact that the model does not simulate the magnitude of the observed rainfall increase in response to "all forcings" (HIST). We investigate the mechanism of simulated and observed NWA rainfall changes by exploring changes in circulation over the Indo-Pacific region. 
The key circulation feature associated with the rainfall increase in reanalyses is a lower-tropospheric cyclonic circulation trend off the coast of NWA, which enhances the monsoonal flow. The model shows an aerosol-induced cyclonic circulation trend off the coast of NWA in HIST minus NO_AA, whereas GHGAS shows an anticyclonic circulation trend. This explains why the aerosol-induced effect is an increase of rainfall over NWA, and the greenhouse gas-induced effect is of opposite sign. Possible explanations for the cyclonic (anticyclonic) circulation trend in HIST minus NO_AA (GHGAS) involve changes in the Walker circulation or the local Hadley circulation. In either case, a plausible atmospheric mechanism is that the circulation anomaly is a Rossby wave response to convective heating anomalies south of the Equator. We also discuss the possible role of air-sea interactions, e.g. an increase (decrease) of sea-surface temperatures off the coast of NWA in HIST minus NO_AA (GHGAS). Further research is needed to better understand the mechanisms and the extent to which these are model-dependent. In summary, our results suggest that anthropogenic aerosols may have "masked" greenhouse gas-induced changes in rainfall over NWA and in circulation over the wider Indo-Pacific region. Due to the opposing effects of greenhouse gases and anthropogenic aerosols, future trends may be very different from trends observed over the last few decades.
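The forcing-attribution arithmetic used above (aerosol-induced effects estimated as HIST minus NO_AA) can be illustrated with a toy ensemble calculation. The array shapes and synthetic data below are assumptions for illustration only; the study itself analyses CSIRO-Mk3.6 CMIP5 output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years = 10, 60            # 10-member ensembles, 1951-2010

# Synthetic stand-ins for ensemble time series (e.g. NWA summer rainfall)
hist = rng.normal(0.0, 1.0, (n_members, n_years))    # "all forcings"
no_aa = rng.normal(0.5, 1.0, (n_members, n_years))   # no anthropogenic aerosol

# Ensemble-mean difference: averaging over members suppresses internal
# variability, leaving the aerosol-induced component HIST minus NO_AA.
aerosol_effect = hist.mean(axis=0) - no_aa.mean(axis=0)
print(aerosol_effect.shape)            # (60,)
```

The point of the 10-member ensembles is exactly this averaging step: a single realisation would confound internal variability with the forced signal.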
ASKAP Joins the Hunt for Mysterious Bursts
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-05-01
A new telescope, the Australian Square Kilometre Array Pathfinder (ASKAP), has joined the search for energetic and elusive fast radio bursts. And in just a few days of looking, it's already had success!

Elusive Transients

The Parkes radio telescope, which has detected all but five of the fast radio bursts published to date, has a very narrow field of view. [CSIRO]

Fast radio bursts are mysterious millisecond-duration radio pulses that were first discovered around a decade ago. Since that time, particularly in recent years, we've made some progress toward the goal of localizing them. We're now fairly convinced that fast radio bursts come from outside of the galaxy, and yet they're enormously bright: orders of magnitude more luminous than any pulse seen from the Milky Way. Better identification of where these mysterious bursts come from would help us to determine what they are. But so far, we've discovered only around 30 such bursts, despite the fact that they're estimated to occur at a rate of 3,000 events per day across the whole sky. Why are they so hard to find? Due to their short duration, effective detection would require instantaneous coverage of a very large fraction of the sky. The Parkes radio telescope, which has detected all but five of the fast radio bursts published to date, has a field of view spanning less than a square degree, significantly limiting our ability to rapidly survey for these transients.

FRB 170107's band-averaged pulse (top) and dynamic spectrum (bottom). [Bannister et al. 2017]

A New Array in Town

A new player is now on the scene, however, and it's already had huge success. ASKAP is a wide-field radio telescope made up of an array of 12-meter antennas. Using phased-array-feed technology, ASKAP is able to instantaneously observe an effective area of 160 square degrees, an enormous field compared to Parkes' 0.6 square degrees!
This capability significantly increases our chances of being able to detect fast radio bursts. In a new study led by Keith Bannister (Australia Telescope National Facility, CSIRO Astronomy and Space Science), a team of scientists presents results from ASKAP's first 3.4-day pilot survey. Bannister and collaborators announce that in this brief time, ASKAP has already detected a fast radio burst: FRB 170107, an especially luminous, 2-millisecond burst that confirms the presence of an ultra-bright population of fast radio bursts.

Looking to the Future

Localization of FRB 170107. [Adapted from Bannister et al. 2017]

Using the multiple bands of ASKAP, the authors were able to constrain the position of FRB 170107 to a region just 8 x 8 in size. No known field galaxies exist in that region, so we're still not sure exactly where it came from, but this localization is already a significant achievement. The discovery and characterization of a burst already after such a short initial campaign suggests that ASKAP will become a very powerful tool for detecting fast radio bursts, including some of the rarest bursts: ultra-bright ones like FRB 170107. We finally appear to be poised to resolve some of the mysteries of this population of transients.

Citation: K. W. Bannister et al 2017 ApJL 841 L12. doi:10.3847/2041-8213/aa71ff
Developing data aggregation applications from a community standard semantic resource (Invited)
NASA Astrophysics Data System (ADS)
Leadbetter, A.; Lowry, R. K.
2013-12-01
The semantic content of the NERC Vocabulary Server (NVS) has been developed over thirty years. It has been used to mark up metadata and data in a wide range of international projects, including the European Commission (EC) Framework Programme 7 projects SeaDataNet and The Open Service Network for Marine Environmental Data (NETMAR). Within the United States, the National Science Foundation projects Rolling Deck to Repository and Biological & Chemical Data Management Office (BCO-DMO) use concepts from NVS for markup. Further, typed relationships link NVS concepts to terms served by the Marine Metadata Interoperability Ontology Registry and Repository. The largest single share of the concepts publicly served from NVS (35% of ~82,000) forms the British Oceanographic Data Centre (BODC) Parameter Usage Vocabulary (PUV). The PUV is instantiated on the NVS as a SKOS concept collection. These terms are used to describe the individual channels in data and metadata served by, for example, BODC, SeaDataNet and BCO-DMO. The PUV terms are designed to be very precise and may contain a high level of detail. Some users have reported that the PUV is difficult to navigate due to its size and complexity (a problem CSIRO have begun to address by deploying a SISSVoc interface to the NVS), and it has been difficult to aggregate data because multiple PUV terms can, with full validity, be used to describe the same data channels. Better approaches to data aggregation are required as a use case for the PUV from the EC European Marine Observation and Data Network (EMODnet) Chemistry project. One solution, proposed and demonstrated during the course of the NETMAR project, is to build new SKOS concept collections which formalise the desired aggregations for given applications and use typed relationships to state which PUV concepts contribute to a specific aggregation.
Development of these new collections requires input from a group of experts in the application domain who can decide which PUV concepts it is acceptable to aggregate for a given application. Another approach, developed as a use case for concept and data discovery and to be implemented as part of the EC/United States/Australian collaboration, the Ocean Data Interoperability Platform, is to expose the well-defined, but little-publicised, semantic model which underpins every concept within the PUV. This will be done in a machine-readable form, so that tools can be built to aggregate data and concepts by, for example, the measured parameter; the environmental sphere or compartment of the sampling; and the methodology of the analysis of the parameter. Related work under way at CSIRO may be applicable to this approach. The importance of these data aggregations is growing as more data providers use terms from semantic resources to describe their data, allowing data from numerous sources to be aggregated. This importance will grow as data become 'born semantic', i.e. when semantics are embedded with data from the point of collection. In this presentation we introduce a brief history of the development of the PUV; the use cases for data aggregation and discovery outlined above; the semantic model from which the PUV is built; and ideas for embedding semantics in data from the point of collection.
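The aggregation approach described above, a SKOS collection whose skos:member links point at the precise PUV concepts it subsumes, can be sketched as plain RDF triples. All URIs and term names below are invented for illustration; they are not real NVS or PUV identifiers.

```python
# Namespace prefixes written out as full URIs for clarity
SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
AGG = "http://example.org/aggregation/chlorophyll"   # invented aggregation URI
PUV = "http://example.org/puv/"                      # invented, not the real PUV

triples = [
    (AGG, RDF + "type", SKOS + "Collection"),
    (AGG, SKOS + "prefLabel", '"Chlorophyll-a aggregation"@en'),
]

# Several precise PUV terms may all validly describe the same data channel;
# skos:member records that each contributes to this one aggregation.
for concept in ("TERM_A", "TERM_B"):
    triples.append((AGG, SKOS + "member", PUV + concept))

# Emit the triples in an N-Triples-like form
for s, p, o in triples:
    obj = o if o.startswith('"') else "<" + o + ">"
    print("<%s> <%s> %s ." % (s, p, obj))
```

A data-aggregation tool can then treat any channel described by TERM_A or TERM_B as belonging to the chlorophyll aggregation, without the original PUV concepts losing their precision.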
NASA Astrophysics Data System (ADS)
Vreugdenhil, Mariette; de Jeu, Richard; Wagner, Wolfgang; Dorigo, Wouter; Hahn, Sebastian; Bloeschl, Guenter
2013-04-01
Vegetation and its water content affect active and passive microwave soil moisture retrievals and need to be taken into account in such retrieval methodologies. This study compares the vegetation parameterisation used in the TU-Wien soil moisture retrieval algorithm to other vegetation products, such as the Vegetation Optical Depth (VOD), Net Primary Production (NPP) and Leaf Area Index (LAI). In the retrieval algorithm for active microwaves developed at TU Wien, the effect of vegetation on the backscattering coefficient is described by the so-called slope [1]. The slope is the first derivative of the backscattering coefficient with respect to the incidence angle. Soil surface backscatter normally decreases quite rapidly with the incidence angle over bare or sparsely vegetated soils, whereas the contribution of dense vegetation is fairly uniform over a large range of incidence angles. Consequently, the slope becomes less steep with increasing vegetation. Because the slope is a derivative of noisy backscatter measurements, it is characterised by an even higher level of noise. Therefore, it is averaged over several years, assuming that the state of the vegetation does not change inter-annually. The slope is compared to three dynamic vegetation products over Australia: the VOD, NPP and LAI. The VOD was retrieved from AMSR-E passive microwave data using the VUA-NASA retrieval algorithm and provides information on vegetation with global coverage approximately every two days [2]. LAI is defined as half the developed area of photosynthetically active elements of the vegetation per unit horizontal ground area. In this study, LAI is taken from the Geoland2 products derived from SPOT Vegetation*. The NPP is the net rate at which plants build up carbon through photosynthesis and is a model-based estimate from the BiosEquil model [3, 4].
Results show that VOD and slope correspond reasonably well over vegetated areas, whereas in arid areas, where the microwave signals mostly stem from the soil surface and deeper soil layers, they are negatively correlated. A second comparison of monthly values of both vegetation parameters to modelled NPP data shows that particularly over dry areas the VOD corresponds better to the NPP, with r=0.79 for VOD-NPP and r=-0.09 for slope-NPP.
1. Wagner, W., et al., A Study of Vegetation Cover Effects on ERS Scatterometer Data. IEEE Transactions on Geoscience and Remote Sensing, 1999. 37(2): p. 938-948.
2. Owe, M., R. de Jeu, and J. Walker, A methodology for surface soil moisture and vegetation optical depth retrieval using the microwave polarization difference index. IEEE Transactions on Geoscience and Remote Sensing, 2001. 39(8): p. 1643-1654.
3. Raupach, M.R., et al., Balances of Water, Carbon, Nitrogen and Phosphorus in Australian Landscapes: (1) Project Description and Results, 2001, Sustainable Minerals Institute, CSIRO Land and Water.
4. Raupach, M.R., et al., Balances of Water, Carbon, Nitrogen and Phosphorus in Australian Landscapes: (2) Model Formulation and Testing, 2001, Sustainable Minerals Institute, CSIRO Land and Water.
* These products are the joint property of INRA, CNES and VITO under copyright of Geoland2. They are generated from the SPOT VEGETATION data under copyright CNES and distribution by VITO.
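The slope concept described above can be illustrated with a toy calculation: fit backscatter (in dB) against incidence angle and compare the fitted slopes for bare soil and dense vegetation. The numbers are synthetic, chosen only to show the contrast; this is not the TU-Wien implementation.

```python
import numpy as np

angles = np.array([25.0, 30.0, 35.0, 40.0, 45.0, 50.0])   # incidence angle, deg

# Synthetic backscatter (dB): soil backscatter falls quickly with angle,
# dense vegetation is nearly uniform across angles.
bare_soil = -6.0 - 0.25 * (angles - 25.0)
dense_veg = -10.0 - 0.05 * (angles - 25.0)

# The "slope" is the first derivative of backscatter with respect to
# incidence angle; a straight-line fit recovers it for this linear toy data.
slope_soil = np.polyfit(angles, bare_soil, 1)[0]
slope_veg = np.polyfit(angles, dense_veg, 1)[0]

# Vegetation flattens the slope: less steep (closer to zero) than bare soil.
assert slope_veg > slope_soil
```

In practice the fit is made to noisy multi-angle scatterometer observations, which is why the study averages the slope over several years.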
Status report of the end-to-end ASKAP software system: towards early science operations
NASA Astrophysics Data System (ADS)
Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew
2016-08-01
The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope, currently in the commissioning phase, located in the midwest region of Western Australia. It comprises 36 reflector antennas of 12 m diameter, each equipped with state-of-the-art and award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field of view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas, equipped with first-generation PAF technology (Mark I) and named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, producing some great science results along the way. Commissioning of the ASKAP Array Release 1, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system) and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional", user-interactive mode.
But this is about to change: integration and verification of the online ingest pipeline, required to support the full 300 MHz bandwidth for Array Release 1, starts in early 2016, followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the end-to-end data flow software system, from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities, and the challenges ahead in preparation for the start of the Early Science program.
Comparative A/B testing a mobile data acquisition app for hydrogeochemistry
NASA Astrophysics Data System (ADS)
Klump, Jens; Golodoniuc, Pavel; Reid, Nathan; Gray, David; Ross, Shawn
2015-04-01
In the context of a larger study of the Capricorn Orogen of Western Australia, the CSIRO Mineral Discovery Program is conducting a regional study of the hydrogeochemistry of water from agricultural and other bores. Over time, the sampling process was standardised and a form for capturing metadata and data from initial measurements was developed. In 2014 an extensive technology review was conducted with the aim of automating the field data acquisition process. A prototype hydrogeochemistry data capture form was implemented as a mobile application for Windows Mobile devices. This version of the software was a standalone application with an interface to export data as CSV files. A second candidate version of the hydrogeochemistry data capture form was implemented as an Android mobile application in the FAIMS framework. FAIMS is a framework for mobile field data capture, originally developed at the University of New South Wales for archaeological field data collection. A benefit of the FAIMS application was the ability to associate photographs taken with the device's embedded camera with the captured data. FAIMS also allows networked collaboration within a field team, using the mobile applications as asynchronous rich clients. The network infrastructure can be installed in the field ("FAIMS in a Box") to supply data synchronisation, backup and transfer. This aspect will be tested in the next field season. Having two data capture applications available allowed us to conduct an A/B test, comparing two different implementations of the same task. Both applications were trialled in the field by different field crews, and user feedback will be used to improve the usability of the app for the next field season. A key lesson learned was that the ergonomics of the app is of paramount importance in gaining user acceptance.
This extends from general fit with the standard procedures used in the field during data acquisition, to self-descriptive and intuitive user interface features well aligned with the workflows and sequences of actions performed by a user, ultimately supporting the implementation of a Collect-As-You-Go approach. In the Australian outback, issues such as the absence of network connectivity, heat and sun glare may challenge the utility of tablet-based applications in the field. Due to these limitations of tablet use in the field, we are also considering the use of smart pens for data capture. A smart pen application based on Anoto forms and software by Formidable will be tested in the next field season.
A Highly Ordered Magnetic Field in a Crushed Pulsar Wind Nebula in G327.1-1.1
NASA Astrophysics Data System (ADS)
Ma, Yik Ki; Ng, Chi-Yung; Bucciantini, Niccolò; Gaensler, Bryan M.; Slane, Patrick O.; Temim, Tea
2015-01-01
A significant fraction of a pulsar's spin-down luminosity is in the form of a relativistic magnetized particle outflow known as a pulsar wind. Confinement of the wind by the ambient medium creates a synchrotron-emitting bubble called a pulsar wind nebula (PWN). Studies of PWNe are important for understanding the physics of relativistic shocks and particle acceleration. Simulations suggest that a PWN will be crushed by the reverse shock of its surrounding supernova remnant at an age of ~10^4 yr, resulting in a turbulent environment. However, given the short timescale of the interaction stage, only a few such systems are observed. We present radio polarization observations of the PWN in supernova remnant G327.1-1.1, taken with the Australia Telescope Compact Array. Previous works suggest that this system has recently interacted with the supernova reverse shock, providing a rare example for the study of the magnetic field in a crushed PWN. We found a highly ordered magnetic field in the PWN, which is unexpected given the presumed turbulent interior of the nebula. This suggests that the magnetic pressure in the PWN could play an important role in the interaction with the supernova reverse shock. The Australia Telescope Compact Array is part of the Australia Telescope National Facility, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. YKM and CYN are supported by an ECS grant of the Hong Kong Government under HKU 709713P.
NASA Astrophysics Data System (ADS)
Mabry, Jennifer C.; Lan, Tefang; Boucher, Christine; Burnard, Peter G.; Brennwald, Matthias S.; Langenfelds, Ray; Marty, Bernard
2015-10-01
The helium isotope composition of air might have changed since the industrial revolution due to the release of 4He-rich crustal helium during exploitation of fossil fuels. Thereby, variation of the atmospheric helium isotope ratio (3He/4He) has been proposed as a possible new atmospheric tracer of industrial activity. However, the magnitude of such change is debated, with possible values ranging from 0 to about 2 ‰ /yr (Sano et al., 1989; Hoffman and Nier, 1993; Pierson-Wickmann et al., 2001; Brennwald et al., 2013; Lupton and Evans, 2013). A new analytical facility for high precision (2‰, 2σ) analysis of the 3He/4He ratio of air has been developed at CRPG Nancy (France) capable of investigating permil level variations. Previously, Brennwald et al. (2013) analyzed a selection of air samples archived since 1978 at Cape Grim, Tasmania, by the Commonwealth Scientific and Industrial Research Organisation (CSIRO). They reported a mean temporal decrease of the 3He/4He ratio of 0.23-0.30‰/yr. Re-analysis of aliquots of the same samples using the new high-precision instrument showed no significant temporal decrease of the 3He/4He ratio (0.0095 ± 0.033‰ /yr, 2σ) in the time interval 1978-2011. These new data constrain the mean He content of globally produced natural gas to about 0.034% or less, which is about 3× lower than commonly quoted.
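The trend analysis sketched above, a permil-per-year slope with a 2σ uncertainty, can be illustrated with a simple least-squares fit. The time series below is synthetic (no underlying trend), standing in for the Cape Grim archive measurements; the fitting recipe is a generic one, not necessarily the authors' exact statistical treatment.

```python
import numpy as np

# Synthetic time series with no underlying trend, standing in for the
# 1978-2011 Cape Grim archive samples (ratio anomaly in permil).
years = np.arange(1978.0, 2012.0, 3.0)
rng = np.random.default_rng(1)
ratio_permil = rng.normal(0.0, 1.0, years.size)

# Ordinary least squares: slope in permil per year
x = years - years.mean()
A = np.vstack([x, np.ones_like(x)]).T
(slope, intercept), residuals, *_ = np.linalg.lstsq(A, ratio_permil, rcond=None)

# Standard error of the slope, reported as 2-sigma
dof = years.size - 2
se_slope = np.sqrt((residuals[0] / dof) / np.sum(x ** 2))
print("trend = %.3f +/- %.3f permil/yr (2 sigma)" % (slope, 2 * se_slope))
```

A result like the paper's 0.0095 ± 0.033 ‰/yr (2σ) is "no significant trend" precisely because the slope's magnitude is well inside its 2σ interval.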
Study of aerosol effect on accelerated snow melting over the Tibetan Plateau during boreal spring
NASA Astrophysics Data System (ADS)
Lee, Woo-Seop; Bhawar, Rohini L.; Kim, Maeng-Ki; Sang, Jeong
2013-08-01
In the present study, a coupled atmosphere-ocean global climate model (CSIRO-Mk3.6) is used to investigate the role of aerosol forcing agents as drivers of snow melting trends in the Tibetan Plateau (TP) region. Anthropogenic aerosol-induced snow cover changes in a warming climate are calculated from the difference between the historical run (HIST) and the run with all forcings except anthropogenic aerosols (NoAA). Absorbing aerosols can influence snow cover by warming the atmosphere and by reducing snow reflectance after deposition. The warming accelerates the rate of snow melt, exposing darker surfaces below to short-wave radiation sooner and allowing them to heat up even faster in the Himalayas and TP. The results show a strong spring snow cover decrease over the TP when absorbing anthropogenic aerosol forcing is considered, whereas snow cover fraction (SCF) trends in NoAA are weakly negative (but insignificant) during 1951-2005. The enhanced spring snow cover trends in HIST are due to the combined effects of different forcing agents: when aerosol forcing (AERO) is considered, a significant reduction of SCF relative to the average is found over the western TP and Himalayas. Large decreasing trends in SCF occur over the TP, with a maximum reduction of around 12-15% over the western TP and the Himalayan slopes. The accelerated snow melting during spring is also due to the effect of aerosols on snow albedo, as aerosol deposition decreases snow albedo. By contrast, the SCF change in the NoAA simulations was observed to be smaller.
Hendrie, Gilly A.; Golley, Rebecca K.; Noakes, Manny
2018-01-01
Population surveys have rarely identified dietary patterns associated with excess energy intake in relation to risk of obesity. This study uses self-reported food intake data from the validated Commonwealth Scientific and Industrial Research Organisation (CSIRO) Healthy Diet Score survey to examine whether apparent compliance with dietary guidelines varies by weight status. The sample of 185,951 Australian adults was majority female (71.8%), with 30.2%, 35.3% and 31.0% aged between 18–30, 31–50 and 51–70 years respectively. Using multinomial regression, in the adjusted model controlling for gender and age, individuals in the lowest quintile of diet quality were almost three times more likely to be obese than those in the highest quintile (OR 2.99, CI: 2.88:3.11; p < 0.001). The differential components of diet quality between normal-weight and obese adults were fruit (difference in compliance score 12.9 points out of a possible 100, CI: 12.3:13.5; p < 0.001), discretionary foods (8.7 points, CI: 8.1:9.2; p < 0.001), and healthy fats (7.7 points, CI: 7.2:8.1; p < 0.001). Discretionary foods were the lowest scoring component across all gender and weight status groups, and are an important intervention target for improving diet quality. This study contributes to the evidence that diet quality is associated with health outcomes, including weight status, and will be useful in framing recommendations for obesity prevention and management. PMID:29439463
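The odds ratio reported above can be unpacked with a small worked example: an OR near 3 means the odds of obesity in the lowest diet-quality quintile are about three times those in the highest. The counts below are hypothetical, invented only to reproduce an OR of that size; they are not the study's data.

```python
# Hypothetical quintile counts chosen to give an odds ratio near the
# reported 2.99; these are NOT the study's data.
low_q_obese, low_q_not = 300, 700      # lowest diet-quality quintile
high_q_obese, high_q_not = 125, 875    # highest diet-quality quintile

odds_low = low_q_obese / low_q_not     # odds of obesity, lowest quintile
odds_high = high_q_obese / high_q_not  # odds of obesity, highest quintile
odds_ratio = odds_low / odds_high
print(round(odds_ratio, 2))            # 3.0
```

Note that an odds ratio of 3 is not the same as a threefold risk: odds and probabilities diverge as outcomes become common, which is why the adjusted regression model matters.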
Downscaling an Eddy-Resolving Global Model for the Continental Shelf off South Eastern Australia
NASA Astrophysics Data System (ADS)
Roughan, M.; Baird, M.; MacDonald, H.; Oke, P.
2008-12-01
The Australian Bluelink collaboration between CSIRO, the Bureau of Meteorology and the Royal Australian Navy has made available to the research community the output of BODAS (Bluelink ocean data assimilation system), an ensemble optimal interpolation reanalysis system with ~10 km resolution around Australia. Within the Bluelink project, BODAS fields are assimilated into a dynamic ocean model of the same resolution to produce BRAN (BlueLink ReANalysis, a hindcast of water properties around Australia from 1992 to 2004). In this study, BODAS hydrographic fields are assimilated into a ~ 3 km resolution Princeton Ocean Model (POM) configuration of the coastal ocean off SE Australia. Experiments were undertaken to establish the optimal strength and duration of the assimilation of BODAS fields into the 3 km resolution POM configuration for the purpose of producing hindcasts of ocean state. It is shown that the resultant downscaling of Bluelink products is better able to reproduce coastal features, particularly velocities and hydrography over the continental shelf off south eastern Australia. The BODAS-POM modelling system is used to provide a high-resolution simulation of the East Australian Current over the period 1992 to 2004. One of the applications that we will present is an investigation of the seasonal and inter-annual variability in the dispersion of passive particles in the East Australian Current. The practical outcome is an estimate of the connectivity of estuaries along the coast of southeast Australia, which is relevant for the dispersion of marine pests.
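The assimilation-strength experiments mentioned above can be illustrated with a generic Newtonian relaxation (nudging) step, in which the high-resolution model state is pulled toward the coarser analysis field. This is an illustrative sketch of the general technique only, not the actual BODAS/POM scheme; the variable names, time step and relaxation timescale are all assumptions.

```python
import numpy as np

def nudge(state, analysis, dt, tau):
    """One step of Newtonian relaxation toward an analysis field.

    tau sets the assimilation strength: a small tau pulls the model state
    strongly toward the analysis; a large tau lets model dynamics dominate.
    """
    return state + (analysis - state) * (dt / tau)

model = np.array([14.0, 15.5, 17.0])   # toy model temperatures
bodas = np.array([14.5, 15.0, 16.0])   # toy analysis values on the same grid
initial = model.copy()

for _ in range(100):                   # 100 steps of dt=600 s, tau=1 day
    model = nudge(model, bodas, dt=600.0, tau=86400.0)

# The model state has moved part-way toward the analysis at every grid point.
assert np.all(np.abs(model - bodas) < np.abs(initial - bodas))
```

Tuning tau (and how long the relaxation is applied) is exactly the kind of "optimal strength and duration" question the experiments in the abstract address.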
Lei, Juncheng; Chen, Lian; Li, Hong
2017-08-01
The golden apple snail, Pomacea canaliculata, is one of the world's 100 most notorious invasive alien species. Knowledge about the critical climate variables that limit the global distribution range of the snail, as well as predictions of future species distributions under climate change, is very helpful for management of the snail. In this study, the climatically suitable habitats for the snail under current climate conditions were modeled by biomod2 and projected to eight future climate scenarios (2 time periods [2050s, 2080s] × 2 Representative Concentration Pathways [RCPs; RCP2.6, RCP8.5] × 2 atmospheric General Circulation Models [GCMs; Canadian Centre for Climate Modelling and Analysis (CCCMA), Commonwealth Scientific and Industrial Research Organisation (CSIRO)]). The results suggest that the lowest temperature of the coldest month is the critical climate variable restricting the global distribution range of P. canaliculata. It is predicted that the climatically suitable habitats for P. canaliculata will increase by an average of 3.3% in the 2050s and 3.8% in the 2080s under the RCP2.6 scenario, and by an average of 8.7% in the 2050s and 10.3% in the 2080s under the RCP8.5 scenario. In general, future climate change may promote the global invasion of this species. Therefore, it is necessary to take proactive measures to monitor and preclude its invasion.
Data You May Like: A Recommender System for Research Data Discovery
NASA Astrophysics Data System (ADS)
Devaraju, A.; Davy, R.; Hogan, D.
2016-12-01
Various data portals have been developed to facilitate access to research datasets from different sources, for example the Data Publisher for Earth & Environmental Science (PANGAEA), the Registry of Research Data Repositories (re3data.org), and the National Geoscience Data Centre (NGDC). Due to data quantity and heterogeneity, finding relevant datasets on these portals may be difficult and tedious. Keyword searches based on specific metadata elements or multi-key indexes may return irrelevant results, and faceted searches may be unsatisfactory and time consuming, especially when facet values are exhaustive. A more intelligent approach is needed to complement existing search mechanisms and enhance the user experience of these portals. We developed a recommender system that helps users find the most relevant research datasets on CSIRO's Data Access Portal (DAP). The system is based on content-based filtering: we computed the similarity of datasets from data attributes (e.g., descriptions, fields of research, location, contributors, and provenance) and from inference over transaction logs (e.g., the relations among datasets and between queries and datasets). We improved recommendation quality by assigning weights to the data similarities, with weight values drawn from a survey involving data users. The recommender results for a given dataset are accessible programmatically via a web service. By taking both data attributes and user actions into account, the recommender system makes it easier for researchers to find and reuse data offered through the data portal.
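The weighted, attribute-wise similarity described above can be sketched roughly as follows. The attribute names, weight values and toy records are hypothetical illustrations, not the actual DAP metadata schema or the survey-derived weights:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors (Counters)."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def dataset_similarity(d1, d2, weights):
    """Weighted sum of per-attribute text similarities."""
    score = 0.0
    for field, w in weights.items():
        v1 = Counter(d1.get(field, "").lower().split())
        v2 = Counter(d2.get(field, "").lower().split())
        score += w * cosine(v1, v2)
    return score

# Hypothetical weights (a real system would draw these from the user survey)
weights = {"description": 0.5, "fields_of_research": 0.3, "location": 0.2}

d1 = {"description": "ocean temperature survey", "fields_of_research": "oceanography", "location": "Tasman Sea"}
d2 = {"description": "ocean salinity survey", "fields_of_research": "oceanography", "location": "Tasman Sea"}
d3 = {"description": "soil carbon mapping", "fields_of_research": "agriculture", "location": "Queensland"}

# The related oceanographic dataset outranks the unrelated one
assert dataset_similarity(d1, d2, weights) > dataset_similarity(d1, d3, weights)
```

A production system would additionally use TF-IDF term weighting and the log-derived query/dataset relations as extra similarity terms, with the final ranking exposed through the web service.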
NASA Technical Reports Server (NTRS)
Zhao, Feng; Yang, Xiaoyuan; Schull, Mitchell A.; Roman-Colon, Miguel O.; Yao, Tian; Wang, Zhuosen; Zhang, Qingling; Jupp, David L. B.; Lovell, Jenny L.; Culvenor, Darius;
2011-01-01
Effective leaf area index (LAI) retrievals from a scanning, ground-based, near-infrared (1064 nm) lidar that digitizes the full return waveform, the Echidna Validation Instrument (EVI), are in good agreement with those obtained from both hemispherical photography and the Li-Cor LAI-2000 Plant Canopy Analyzer. We conducted trials at 28 plots within six stands of hardwoods and conifers of varying height and stocking densities at Harvard Forest, Massachusetts, Bartlett Experimental Forest, New Hampshire, and Howland Experimental Forest, Maine, in July 2007. Effective LAI values retrieved by four methods, which ranged from 3.42 to 5.25 depending on the site and method, were not significantly different (P > 0.1 among the four methods). The LAI values also matched published values well. Foliage profiles (leaf area with height) retrieved from the lidar scans, although not independently validated, were consistent with stand structure as observed and as measured by conventional methods. Canopy mean top height, as determined from the foliage profiles, deviated from mean RH100 values obtained from the Lidar Vegetation Imaging Sensor (LVIS) airborne large-footprint lidar system at 27 plots by −0.91 m with RMSE = 2.04 m, documenting the ability of the EVI to retrieve stand height. The Echidna Validation Instrument is the first realization of the Echidna lidar concept, devised by Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO), for measuring forest structure using full-waveform, ground-based, scanning lidar.
Whitehead, P G; Wilby, R L; Butterfield, D; Wade, A J
2006-07-15
The impacts of climate change on nitrogen (N) in a lowland chalk stream are investigated using a dynamic modelling approach. The INCA-N model is used to simulate transient daily hydrology and water quality in the River Kennet using temperature and precipitation scenarios downscaled from General Circulation Model (GCM) output for the period 1961-2100. The three GCMs (CGCM2, CSIRO and HadCM3) yield very different river flow regimes, with the latter projecting significant periods of drought in the second half of the 21st century. Stream-water N concentrations increase over time as higher temperatures enhance N release from the soil, and lower river flows reduce the dilution capacity of the river. Particular problems are shown to occur following severe droughts, when N mineralization is high and the subsequent breaking of the drought releases high nitrate loads into the river system. Possible strategies for reducing climate-driven N loads are explored using INCA-N. The measures include land use change or fertiliser reduction, reduction in atmospheric nitrate and ammonium deposition, and the introduction of water meadows or connected wetlands adjacent to the river. The most effective strategy is to change land use or reduce fertiliser use, followed by water meadow creation, and atmospheric pollution controls. Finally, a combined approach involving all three strategies is investigated and shown to reduce in-stream nitrate concentrations to pre-1950s levels even under climate change.
NASA Astrophysics Data System (ADS)
Lenton, Andrew; Matear, Richard J.; Keller, David P.; Scott, Vivian; Vaughan, Naomi E.
2018-04-01
Atmospheric carbon dioxide (CO2) levels continue to rise, increasing the risk of severe impacts on the Earth system and on the ecosystem services that it provides. Artificial ocean alkalinization (AOA) is capable of reducing atmospheric CO2 concentrations and surface warming and addressing ocean acidification. Here, we simulate global and regional responses to alkalinity (ALK) addition (0.25 Pmol ALK yr−1) over the period 2020-2100 using the CSIRO-Mk3L-COAL Earth System Model, under high (Representative Concentration Pathway 8.5; RCP8.5) and low (RCP2.6) emissions. While regionally there are large changes in alkalinity associated with the locations of AOA, globally we see only a very weak dependence on where and when AOA is applied. On a global scale, under RCP2.6 the carbon uptake associated with AOA is only ~60% of the total, while under RCP8.5 the relative changes in temperature are larger, as are the changes in pH (140%) and aragonite saturation state (170%). The simulations reveal that AOA is more effective under lower emissions; therefore, the higher the emissions, the more AOA is required to achieve the same reduction in global warming and ocean acidification. Finally, our simulated AOA for 2020-2100 in the RCP2.6 scenario is capable of offsetting warming and ameliorating ocean acidification increases at the global scale, but with highly variable regional responses.
'Low-acid' sulfide oxidation using nitrate-enriched groundwater
NASA Astrophysics Data System (ADS)
Donn, Michael; Boxall, Naomi; Reid, Nathan; Meakin, Rebecca; Gray, David; Kaksonen, Anna; Robson, Thomas; Shiers, Denis
2016-04-01
Acid drainage (AMD/ARD) is undoubtedly one of the largest environmental, legislative and economic challenges facing the mining industry. In Australia alone, at least $60 million is spent on AMD-related issues annually, and the global cost is estimated to be in the order of tens of billions of US dollars. Furthermore, the challenge of safely and economically storing or treating sulfidic wastes will likely intensify because of the trend towards larger mines that process increasingly higher volumes of lower grade ores, with the associated sulfidic wastes and lower profit margins. While the challenge of managing potentially acid forming (PAF) wastes will likely intensify, industrial approaches to preventing acid production or ameliorating its effects have stagnated for decades. Conventionally, PAF waste is segregated and encapsulated in non-PAF tips to limit access to atmospheric oxygen. Two key limitations of the 'cap and cover' approach are: 1) the hazard (PAF) is not actually removed; only the pollutant linkage is severed; and 2) these engineered structures are susceptible to physical failure in the short-to-medium term, potentially re-establishing that pollutant linkage. In an effort to address these concerns, CSIRO is investigating a passive, 'low-acid' oxidation mechanism for sulfide treatment, which can potentially produce one quarter as much acidity as pyrite oxidation under atmospheric oxygen. This 'low-acid' mechanism relies on nitrate, rather than oxygen, as the primary electron acceptor, and on the activity of specifically cultured chemolithoautotrophic bacterial and archaeal communities. This research was prompted by the observation that, in the deeply weathered terrains of Australia, shallow (oxic to sub-oxic) groundwaters contacting weathering sulfides are commonly inconsistent with the geochemical conditions produced by ARD.
One key characteristic of these aquifers is the natural abundance of nitrate on a regional scale, which becomes depleted around the sulfide bodies, where the pH remains neutral. The "low-acid" oxidation of sulfides with nitrate as an electron acceptor has been demonstrated at the laboratory scale. In 90-day microcosm respirometry experiments, we exposed a mixture of pulverized quartz and pyrite-rich ore to natural, high-nitrate groundwater and inoculated the microcosms with a culture of aerobic and anaerobic nitrate-dependent iron- and sulfur-oxidising microorganisms, which were enriched from ore, groundwater and activated wastewater. Incubations were performed under both oxic and anoxic conditions, in addition to abiotic controls. Initial results show that oxidation of the sulfides under nitrate-rich and microbially enhanced conditions does produce less acid than the same material under oxic conditions, and to some degree can match the models as long as oxygen ingress can be controlled. These results are the focus of further research into how this process can be enhanced and whether it can be applied in the field. Nitrate-driven oxidation of sulfides could potentially be used as a new approach to reduce acid generation and leaching of contaminants from waste dumps, in a passive or actively managed process designed to deplete and/or ameliorate (i.e. through surface passivation) the mineralogical hazard. Developing our understanding of the biological aspects of these processes may also allow testing of longer-term "bio-caps" for various tailings and dump materials.
Laboratory scale micro-seismic monitoring of rock faulting and injection-induced fault reactivation
NASA Astrophysics Data System (ADS)
Sarout, J.; Dautriat, J.; Esteban, L.; Lumley, D. E.; King, A.
2017-12-01
The South West Hub CCS project in Western Australia aims to evaluate the feasibility and impact of geosequestration of CO2 in the Lesueur sandstone formation. Part of this evaluation focuses on the feasibility and design of a robust passive seismic monitoring array. Micro-seismicity monitoring can be used to image the injected CO2 plume, or any geomechanical fracture/fault activity, and thus serve as an early warning system by measuring low-level (unfelt) seismicity that may precede potentially larger (felt) earthquakes. This paper describes laboratory deformation experiments replicating typical field scenarios of fluid injection in faulted reservoirs. Two pairs of cylindrical core specimens were recovered from the Harvey-1 well at depths of 1924 m and 2508 m. In each specimen a fault is first generated at the in situ stress, pore pressure and temperature by increasing the vertical stress beyond the peak in a triaxial stress vessel at CSIRO's Geomechanics & Geophysics Lab. The faulted specimen is then stabilized by decreasing the vertical stress. The freshly formed fault is subsequently reactivated by brine injection and an increase of the pore pressure until slip occurs again. This second slip event is then controlled in displacement and allowed to develop for a few millimeters. The micro-seismic (MS) response of the rock during the initial fracturing and subsequent reactivation is monitored using an array of 16 ultrasonic sensors attached to the specimen's surface. The recorded MS events are relocated in space and time, and correlate well with the 3D X-ray CT images of the specimen obtained post-mortem. The time evolution of the structural changes induced within the triaxial stress vessel is therefore reliably inferred. The recorded MS activity shows that, as expected, the increase of the vertical stress beyond the peak led to an inclined shear fault.
The injection of fluid and the resulting increase in pore pressure led first to a reactivation of the pre-existing fault. However, with increasing slip, a second conjugate fault progressively appeared, which ultimately accommodated all of the imposed vertical displacement. The inferred structural changes resemble fault branching and dynamic slip transfer processes seen in large-scale geology. This project was funded by the ANLEC R&D in partnership with the WA Government.
NASA Astrophysics Data System (ADS)
Chen, X.
2016-12-01
This study presents a multi-scale approach combining the Mode Decomposition and Variance Matching (MDVM) method with the basic process of the Point-by-Point Regression (PPR) method. Unlike the widely applied PPR method, the scanning radius for each grid box was recalculated considering the impact of topography (i.e. mean altitude and its fluctuations), so that appropriate proxy records were selected as candidates for reconstruction. This multi-scale methodology provides not only the reconstructed gridded temperature but also the corresponding uncertainties at four typical timescales; a further advantage is that the spatial distribution of the uncertainty at different scales can be quantified. To examine the necessity of scale separation in calibration, using proxy record locations over eastern Asia, we performed two sets of pseudo-proxy experiments (PPEs) based on different ensembles of climate model simulations. One consists of 7 simulation results from 5 models (BCC-CSM1-1, CSIRO-Mk3L-1-2, HadCM3, MPI-ESM-P, and GISS-E2-R) of the "past1000" experiment from the Coupled Model Intercomparison Project Phase 5. The other is based on simulations of the Community Earth System Model Last Millennium Ensemble (CESM-LME). The pseudo-record networks were obtained by adding white noise, with signal-to-noise ratio (SNR) increasing from 0.1 to 1.0, to the simulated true state; the locations mainly followed the PAGES 2k network in Asia. In total, 400 years (1601-2000) of simulation were used for calibration and 600 years (1001-1600) for verification. The reconstructed results were evaluated by three metrics: 1) root mean squared error (RMSE), 2) correlation and 3) reduction of error (RE) score. The PPE verification results show that, in comparison with the ordinary linear calibration method (variance matching), the RMSE and RE score of PPR-MDVM are improved, especially for areas with sparse proxy records.
Notably, in some periods with large volcanic activity, the RMSE of MDVM becomes larger than that of VM for higher-SNR cases. It may be inferred that volcanic eruptions blur the intrinsic multi-scale characteristics of climate variability, so the MDVM method shows less advantage in those cases.
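The verification metrics named above, RMSE and the reduction-of-error (RE) score, are simple to compute. A minimal sketch; the observation and reconstruction series below are made-up numbers for illustration only:

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and reconstructed series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def reduction_of_error(obs, pred, calib_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-period mean).
    RE > 0 means the reconstruction beats the climatological mean."""
    sse_pred = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sse_clim = sum((o - calib_mean) ** 2 for o in obs)
    return 1.0 - sse_pred / sse_clim

obs  = [0.1, -0.2, 0.3, 0.0, -0.1]     # fabricated temperature anomalies
pred = [0.05, -0.15, 0.25, 0.05, -0.05]

print(round(rmse(obs, pred), 3))                             # → 0.05
print(round(reduction_of_error(obs, pred, calib_mean=0.0), 3))  # → 0.917
```

A reconstruction with RE near zero is no better than simply predicting the calibration-period mean, which is why RE is a stricter verification metric than correlation alone.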
Reminiscences regarding Professor W.N. Christiansen
NASA Astrophysics Data System (ADS)
Swarup, Govind
2008-11-01
In this short paper I describe my initiation into the field of radio astronomy fifty years ago, under the guidance of Professor W.N. ('Chris') Christiansen, soon after I joined the C.S.I.R.O.'s Division of Radiophysics (RP) in Sydney, Australia, in 1953 under a 2-year Colombo Plan Fellowship. During the early 1950s Christiansen had developed a remarkable 21 cm interferometric grating array of 32 east-west aligned parabolic dishes and another array of 16 dishes in a north-south direction at Potts Hill. Christiansen and Warburton used these two arrays to scan the Sun strip-wise, yielding the radio brightness distribution at various position angles. During a three-month period I assisted them in making a 2-dimensional map of the Sun by a complex Fourier transform process. In the second year of my Fellowship, Parthasarathy and I converted the 32-antenna east-west grating array to study solar radio emission at 60 cm. During this work, I noticed that the procedure adopted by Christiansen for phase adjustment of the grating array was time consuming. Based on this experience, I later developed an innovative technique at Stanford in 1959 for phase adjustment of long transmission lines and paths in space. In a bid to improve on the method used by Christiansen to make a 2-dimensional map of the Sun from strip scans, I suggested to R.N. Bracewell in 1962 a revolutionary method for direct 2-dimensional imaging without Fourier transforms. Bracewell and Riddle developed the method for making a 2-dimensional map of the Moon using strip scans obtained with the 32-element interferometer at Stanford. The method has since revolutionized medical tomography. I describe these developments here to highlight my initial work with Christiansen and to show how new ideas often are developed by necessity and have their origin in prior experience! The 32 Potts Hill solar grating array dishes were eventually donated by the C.S.I.R.O.
to India and were set up by me at Kalyan near Mumbai, forming the core of the first radio astronomy group in India. This group went on to construct two of the world's largest radio telescopes, the Ooty Radio Telescope and the Giant Metrewave Radio Telescope. Chris Christiansen was not only my guru but also a mentor and a friend for more than fifty years. I fondly remember his very warm personality.
Development of wildfires in Australia over the last century
NASA Astrophysics Data System (ADS)
Nieradzik, Lars Peter; Haverd, Vanessa; Briggs, Peter; Canadell, Josep G.; Smith, Ben
2017-04-01
Wildfires and their emissions are key biospheric processes in the modeling of the carbon cycle that are still insufficiently understood. In Australia, fire emissions constitute a large flux of carbon from the biosphere to the atmosphere, approximately 1.3 times larger than the annual fossil fuel emissions. In addition, fire plays a big role in determining the composition of vegetation, which in turn affects land-atmosphere fluxes. Annually, up to 4% of the vegetated land-surface area is burned, which amounts to up to 3% of global NPP and results in the release of about 2 Pg of carbon into the atmosphere. There are indications that burned area has decreased globally over recent decades, but so far there is no clear trend in the development of fire intensity and fuel availability. Net emissions from wildfires are not generally included in global and regional carbon budgets, because it is assumed that gross fire emissions are in balance with post-fire carbon uptake by recovering vegetation. This is a valid assumption as long as climate and fire regimes are in equilibrium, but not when the climate and other drivers are changing. We present a study on the behaviour of wildfires on the Australian continent over the last century (1911-2012), introducing the novel fire model BLAZE (BLAZe induced biosphere-atmosphere flux Estimator), which has been designed to address the feedbacks between climate, fuel loads, and fires. BLAZE is used within the Australian land-surface model CABLE (Community Atmosphere-Biosphere-Land Exchange model). The study shows two significant outcomes: a regional shift in fire patterns during this century due to fire suppression and greening effects, as well as an increase of potential fire-line intensity (the risk that a fire becomes more intense), especially in regions where most of Australia's population resides. This strongly emphasises the need to further investigate fire dynamics under future climate scenarios.
The fire model BLAZE has been developed at the CSIRO Oceans and Atmosphere, Canberra, Australia and will be part of the upcoming release of the dynamic global vegetation model LPJ-GUESS version 4.1 within the MERGE project at Lund University, Sweden. It will also be included in the EC-Earth ESM within the EU Horizon 2020 project CRESCENDO.
Nutritional adequacy of energy restricted diets for young obese women.
O'Connor, Helen; Munas, Zahra; Griffin, Hayley; Rooney, Kieron; Cheng, Hoi Lun; Steinbeck, Katharine
2011-01-01
Energy restricted meal plans may compromise nutrient intake. This study used diet modelling to assess the nutritional adequacy of energy restricted meal plans designed for weight management in young obese women. Diet modelling of 6000 kJ/d animal protein based meal plans was performed using Australian nutrient databases with adequacy compared to the Australian Nutrient Reference Values (NRVs) for women (19-30 years). One diet plan was based on the higher carbohydrate (HC) version of the Australian Guide to Healthy Eating for women 19-60 years. An alternative higher protein (HP) plan was adapted from the CSIRO Total Wellbeing Diet. Vegan and lacto-ovo versions of these plans were also modelled and compared to the appropriate vegetarian NRVs. Both animal protein diets met the estimated average requirement (EAR) or adequate intake (AI) for all nutrients analysed. The recommended dietary intake (RDI) was also satisfied, except for iron. HC met 75±30% and HP 81±31% of the iron RDI when red meat and iron fortified cereal were both included three days a week, and remained below the RDI even when red meat was increased to seven days. Iron for the modified vegan (57±5% HC; 66±4% HP) and lacto-ovo (48±6% HC; 59±7% HP) plans was below the RDI and zinc below the EAR for the vegan (76±8% HC; 84±9% HP) plans. The 6000 kJ/d animal protein meal plans met the RDI for all nutrients except iron. Iron and zinc failed to meet the vegetarian RDI and EAR respectively for the vegan plans.
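The adequacy comparison underlying this kind of diet modelling reduces to expressing each modelled nutrient intake as a percentage of its reference value. A minimal sketch, using the Australian iron RDI for women aged 19-50 years (18 mg/day); the modelled intakes below are hypothetical values chosen only to mirror the reported mean percentages, not figures from the study:

```python
def adequacy(intake, reference):
    """Percentage of a reference value met by the modelled intake."""
    return 100.0 * intake / reference

iron_rdi = 18.0          # mg/day, Australian NRV RDI for women 19-50 y
modelled_iron = {
    "HC": 13.5,          # hypothetical higher-carbohydrate plan mean
    "HP": 14.6,          # hypothetical higher-protein plan mean
}

for name, intake in modelled_iron.items():
    pct = adequacy(intake, iron_rdi)
    flag = "meets RDI" if pct >= 100 else "below RDI"
    print(f"{name}: {pct:.0f}% of iron RDI ({flag})")
# → HC: 75% of iron RDI (below RDI)
# → HP: 81% of iron RDI (below RDI)
```

The same comparison against the EAR rather than the RDI gives the less conservative adequacy cut-off used for zinc in the vegan plans.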
Luo, Qunying; O'Leary, Garry; Cleverly, James; Eamus, Derek
2018-06-01
Climate change (CC) presents a challenge for the sustainable development of wheat production systems in Australia. This study aimed to (1) quantify the impact of future CC on wheat grain yield for the period centred on 2030 from the perspectives of wheat phenology, water use and water use efficiency (WUE) and (2) evaluate the effectiveness of changing sowing times and cultivars in response to the expected impacts of future CC on wheat grain yield. The daily outputs of CSIRO Conformal-Cubic Atmospheric Model for baseline and future periods were used by a stochastic weather generator to derive changes in mean climate and in climate variability and to construct local climate scenarios, which were then coupled with a wheat crop model to achieve the two research aims. We considered three locations in New South Wales, Australia, six times of sowing (TOS) and three bread wheat (Triticum aestivum L.) cultivars in this study. Simulation results show that in 2030 (1) for impact analysis, wheat phenological events are expected to occur earlier and crop water use is expected to decrease across all cases (the combination of three locations, six TOS and three cultivars), wheat grain yield would increase or decrease depending on locations and TOS; and WUE would increase in most of the cases; (2) for adaptation considerations, the combination of TOS and cultivars with the highest yield varied across locations. Wheat growers at different locations will require different strategies in managing the negative impacts or taking the opportunities of future CC.
NASA Astrophysics Data System (ADS)
Masupha, Teboho Elisa; Moeletsi, Mokhele Edmond
2018-06-01
Recurring droughts associated with global warming have raised major concern for the agricultural sector, particularly vulnerable small-scale farmers who rely on rain-fed farming, such as in the Luvuvhu River catchment. The Standardized Precipitation Evapotranspiration Index (SPEI) and Water Requirement Satisfaction Index (WRSI) were calculated to assess drought for a 120-day maturing maize crop based on outputs of the CSIRO-Mk3.6.0 model under the RCP 4.5 emission scenario, for the period 1980/81-2089/90. The SPEI results show that 40-54% of the agricultural seasons during the base period experienced mild drought conditions (SPEI 0 to -0.99), equivalent to a recurrence of once in two seasons. However, the WRSI results clearly indicated that stations in the drier regions (annual rainfall <600 mm) of the catchment experienced mild drought (WRSI 70-79), corresponding to satisfactory crop performance, every season. Results further showed overall mild to moderate droughts at the beginning of the near-future climate period (2020/21-2036/37), with SPEI values not decreasing below -1.5. These conditions are then expected to change during the far-future climate period (2055/56-2089/90), for which the expected crop performance indicated significantly drier conditions (p < 0.05). This study provides information on how farmers in the area can prepare for future agricultural seasons while there is sufficient time to implement strategies to reduce drought risk potential. Thus, integrated interventions could provide the best options for improving livelihoods and building the capability of farmers to manage climate change-related stresses.
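SPEI values are conventionally binned into drought classes such as the mild-drought band (0 to -0.99) used above. A minimal classifier sketch; the exact thresholds and labels vary somewhat between studies:

```python
def spei_category(spei):
    """Classify an SPEI value into a common drought class.
    Thresholds follow the widely used convention (mild: 0 to -0.99,
    moderate: -1.0 to -1.49, severe: -1.5 to -1.99, extreme: <= -2.0)."""
    if spei >= 0:
        return "no drought"
    if spei > -1.0:
        return "mild drought"
    if spei > -1.5:
        return "moderate drought"
    if spei > -2.0:
        return "severe drought"
    return "extreme drought"

assert spei_category(-0.5) == "mild drought"      # the base-period band above
assert spei_category(-1.2) == "moderate drought"
assert spei_category(-2.3) == "extreme drought"
```

Under this scheme, the near-future result of "SPEI values not decreasing below -1.5" means conditions stay within the mild-to-moderate classes.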
Ciechanowska, Magdalena; Łapot, Magdalena; Malewski, Tadeusz; Mateusiak, Krystyna; Misztal, Tomasz; Przekop, Franciszek
2011-01-01
There is no information in the literature regarding the effect of corticotropin-releasing hormone (CRH) on genes encoding gonadotrophin-releasing hormone (GnRH) and the GnRH receptor (GnRHR) in the hypothalamus or on GnRHR gene expression in the pituitary gland in vivo. Thus, the aim of the present study was to investigate, in follicular phase ewes, the effects of prolonged, intermittent infusion of small doses of CRH or its antagonist (α-helical CRH 9-41; CRH-A) into the third cerebral ventricle on GnRH mRNA and GnRHR mRNA levels in the hypothalamo-pituitary unit and on LH secretion. Stimulation or inhibition of CRH receptors significantly decreased or increased GnRH gene expression in the hypothalamus, respectively, and led to different responses in GnRHR gene expression in discrete hypothalamic areas. For example, CRH increased GnRHR gene expression in the preoptic area, but decreased it in the hypothalamus/stalk median eminence and in the anterior pituitary gland. In addition, CRH decreased LH secretion. Blockade of CRH receptors had the opposite effect on GnRHR gene expression. The results suggest that activation of CRH receptors in the hypothalamus of follicular phase ewes can modulate the biosynthesis and release of GnRH through complex changes in the expression of GnRH and GnRHR genes in the hypothalamo-anterior pituitary unit. © CSIRO 2011 Open Access
Foster, Scott D.; Griffin, David A.; Dunstan, Piers K.
2014-01-01
The physical climate defines a significant portion of the habitats in which biological communities and species reside. It is important to quantify these environmental conditions, and how they have changed, as this will inform future efforts to study many natural systems. In this article, we present the results of a statistical summary of the variability in sea surface temperature (SST) time-series data for the waters surrounding Australia, from 1993 to 2013. We partition variation in the SST series into annual trends, inter-annual trends, and a number of components of random variation. We utilise satellite data and validate the statistical summary from these data against summaries of data from long-term monitoring stations and from the global drifter program. The spatially dense results, available as maps from the Australian Oceanographic Data Network's data portal (http://www.cmar.csiro.au/geonetwork/srv/en/metadata.show?id=51805), show clear trends that associate with oceanographic features. Noteworthy features include: average warming was greatest off southern Western Australia and off eastern Tasmania, where it was around 0.6°C per decade over the twenty-year study period; and warming was insubstantial in areas dominated by the East Australian Current, although this area did exhibit high levels of inter-annual variability (the long-term trend fluctuates but does not increase on average). The results of the analyses can be directly incorporated into (biogeographic) models that explain variation in biological data where both biological and environmental data are on a fine scale. PMID:24988444
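The kind of partitioning described, separating a long-term trend from seasonal and residual variation, can be sketched with an ordinary least-squares fit of a linear trend plus an annual harmonic to monthly SST data. The series below is synthetic, fabricated with a 0.6 °C-per-decade trend purely to illustrate the decomposition, not the study's actual data or statistical model:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240) / 12.0                       # 20 years of monthly samples, in years
# Synthetic SST: 20 °C mean, +0.06 °C/yr trend, 1.5 °C annual cycle, noise
sst = 20 + 0.06 * t + 1.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Design matrix: intercept, linear trend, and an annual harmonic pair
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, sst, rcond=None)

trend_per_decade = coef[1] * 10
print(f"estimated warming: {trend_per_decade:.2f} °C/decade")
```

With the seasonal harmonic fitted jointly, the trend estimate recovers the injected ~0.6 °C/decade; the residuals after removing trend and seasonality correspond to the "components of random variation" the study summarises.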
Khanna, Sankalp; Boyle, Justin; Good, Norm; Lind, James
2012-10-01
To investigate the effect of hospital occupancy levels on inpatient and ED patient flow parameters, and to simulate the impact of shifting discharge timing on occupancy levels. Retrospective analysis of hospital inpatient data and ED data from 23 reporting public hospitals in Queensland, Australia, across 30 months. Relationships between outcome measures were explored through the aggregation of the historic data into 21 912 hourly intervals. Main outcome measures included admission and discharge rates, occupancy levels, length of stay for admitted and emergency patients, and the occurrence of access block. The impact of shifting discharge timing on occupancy levels was quantified using observed and simulated data. The study identified three stages of system performance decline, or choke points, as hospital occupancy increased. These choke points were found to be dependent on hospital size, and reflect a system change from 'business-as-usual' to 'crisis'. Effecting early discharge of patients was also found to significantly (P < 0.001) impact overcrowding levels and improve patient flow. Modern hospital systems have the ability to operate efficiently above an often-prescribed 85% occupancy level, with optimal levels varying across hospitals of different size. Operating over these optimal levels leads to performance deterioration defined around occupancy choke points. Understanding these choke points and designing strategies around alleviating these flow bottlenecks would improve capacity management, reduce access block and improve patient outcomes. Effecting early discharge also helps alleviate overcrowding and related stress on the system. © 2012 CSIRO. EMA © 2012 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
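The effect of shifting discharge timing on occupancy can be illustrated with a toy hourly simulation of running ward occupancy; the admission/discharge counts and the two-hour shift below are invented for illustration and are not the Queensland data:

```python
def occupancy_series(admissions, discharges, start=0):
    """Running occupancy from hourly admission and discharge counts."""
    occ, level = [], start
    for a, d in zip(admissions, discharges):
        level += a - d
        occ.append(level)
    return occ

def shift_discharges(discharges, shift):
    """Move each hour's discharges `shift` hours earlier (clamped to hour 0)."""
    out = [0] * len(discharges)
    for h, d in enumerate(discharges):
        out[max(0, h - shift)] += d
    return out

admissions = [3, 5, 6, 7, 6, 4, 3, 2]   # invented hourly counts
discharges = [0, 0, 1, 3, 6, 7, 5, 4]

base    = occupancy_series(admissions, discharges, start=50)
shifted = occupancy_series(admissions, shift_discharges(discharges, 2), start=50)

# Earlier discharges lower the daily occupancy peak without changing totals
assert max(shifted) <= max(base)
assert sum(shift_discharges(discharges, 2)) == sum(discharges)
```

Lowering the peak in this way is what keeps a hospital below its occupancy choke points for more of the day, which is the mechanism behind the significant overcrowding reduction the study reports.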
Bingham, John; Payne, Jean; Harper, Jennifer; Frazer, Leah; Eastwood, Sarah; Wilson, Susanne; Lowther, Sue; Lunt, Ross; Warner, Simone; Carr, Mary; Hall, Roy A; Durr, Peter A
2014-06-01
West Nile virus (WNV; family Flaviviridae; genus Flavivirus) group members are an important cause of viral meningoencephalitis in some areas of the world. They exhibit marked variation in pathogenicity, with some viral lineages (such as those from North America) causing high prevalence of severe neurological disease, whilst others (such as Australian Kunjin virus) rarely cause disease. The aim of this study was to characterize WNV disease in a mouse model and to elucidate the pathogenetic features that distinguish disease variation. Tenfold dilutions of five WNV strains (New York 1999, MRM16 and three horse isolates of WNV-Kunjin: Boort and two isolates from the 2011 Australian outbreak) were inoculated into mice by the intraperitoneal route. All isolates induced meningoencephalitis in different proportions of infected mice. WNV-NY99 was the most pathogenic, the three horse isolates were of intermediate pathogenicity and WNV-KUNV MRM16 was the least, causing mostly asymptomatic disease with seroconversion. Infectivity, but not pathogenicity, was related to challenge dose. Using cluster analysis of the recorded clinical signs, histopathological lesions and antigen distribution scores, the cases could be classified into groups corresponding to disease severity. Metrics that were important in determining pathotype included neurological signs (paralysis and seizures), meningoencephalitis, brain antigen scores and replication in extra-neural tissues. Whereas all mice infected with WNV-NY99 had extra-neural antigen, those infected with the WNV-Kunjin viruses only occasionally had antigen outside the nervous system. We conclude that the mouse model could be a useful tool for the assessment of pathotype for WNVs. © 2014 CSIRO.
Amarra, Ma Sofia V; Yee, Yeong Boon; Drewnowski, Adam
2008-01-01
Food consumption patterns in Asia are rapidly changing. Urbanization and changing lifestyles have diminished the consumption of traditional meals based on cereals, vegetables and root crops. These changes are accompanied by an increasing prevalence of chronic diseases among Asian populations. ILSI Southeast Asia and CSIRO, Australia jointly organized the Symposium on Understanding and Influencing Food Behaviours for Health, focusing on the use of consumer science to improve food behaviour. The goals of the Symposium were to present an understanding of Asian consumers and their food choices, examine the use of consumer research to modify food choices towards better health, illustrate how health programs and food regulations can be utilized effectively to promote healthier choices, and identify knowledge gaps regarding the promotion of healthy food behaviour in Asian populations. There is no difference in taste perception among Asians, and Asian preference for certain tastes is determined by exposure and familiarity largely dictated by culture and its underlying values and beliefs. The cross-cultural validity of consumer science theories and tools derived from western populations needs to be tested in Asia. Information on consumption levels and substitution behaviours for foods and food products, obtained using consumer research methods, can guide the development of food regulations and programs that will enable individuals to make healthier choices. Existing knowledge gaps include consumer research techniques appropriate for use in Asian settings, diet-health relationships from consumption of traditional Asian diets, and methods to address the increasing prevalence of over- and undernutrition within the same households in Asia.
Development of hi-resolution regional climate scenarios in Japan by statistical downscaling
NASA Astrophysics Data System (ADS)
Dairaku, K.
2016-12-01
Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. To meet the needs of stakeholders such as local governments, a Japanese national project, the Social Implementation Program on Climate Change Adaptation Technology (SI-CAT), was launched in December 2015. It develops reliable technologies for near-term climate change predictions. Multi-model ensemble regional climate scenarios with 1 km horizontal grid spacing over Japan are developed using CMIP5 GCMs and a statistical downscaling method, to support municipal adaptation measures appropriate for possible regional climate changes. A statistical downscaling method, Bias Correction Spatial Disaggregation (BCSD), is employed to develop regional climate scenarios based on five CMIP5 RCP8.5 GCMs (MIROC5, MRI-CGCM3, GFDL-CM3, CSIRO-Mk3-6-0, HadGEM2-ES) for the historical climate (1970-2005) and near-future climate (2020-2055) periods. Downscaled variables are monthly/daily precipitation and temperature. The file format is NetCDF4 (conforming to CF1.6, with HDF5 compression). The developed regional climate scenarios will be expanded to meet stakeholder needs, and interface applications to access and download the data are under development. Statistical downscaling does not necessarily represent well locally forced nonlinear phenomena and extreme events such as heavy rain and heavy snow. To complement the statistical method, a dynamical downscaling approach is also applied to specific regions where stakeholder needs exist. The added value of the statistical/dynamical downscaling methods compared with the parent GCMs is investigated.
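A minimal sketch of the two BCSD steps described above: quantile-mapping bias correction against observations, then spatial disaggregation of the coarse anomaly onto a fine-grid climatology. Function names and the additive (temperature-style) anomaly are illustrative assumptions, not the SI-CAT implementation:

```python
import numpy as np

def quantile_map(gcm_hist, obs, gcm_future):
    """Bias correction ('BC'): rank each future GCM value within the
    historical GCM distribution, then read off the observed value at
    the same quantile."""
    quantiles = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    return np.quantile(obs, quantiles)

def spatial_disaggregate(coarse_anomaly, fine_climatology):
    """Spatial disaggregation ('SD'): apply the coarse-grid anomaly
    onto a high-resolution (e.g. 1 km) climatology; additive form,
    as commonly used for temperature."""
    return fine_climatology + coarse_anomaly
```

For precipitation a multiplicative anomaly is more usual; the additive form here keeps the sketch short.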
Monitoring fossil fuel sources of methane in Australia
NASA Astrophysics Data System (ADS)
Loh, Zoe; Etheridge, David; Luhar, Ashok; Hibberd, Mark; Thatcher, Marcus; Noonan, Julie; Thornton, David; Spencer, Darren; Gregory, Rebecca; Jenkins, Charles; Zegelin, Steve; Leuning, Ray; Day, Stuart; Barrett, Damian
2017-04-01
CSIRO has been active in identifying and quantifying methane emissions from a range of fossil fuel sources in Australia over the past decade. We present here a history of the development of our work in this domain. While we have principally focused on optimising the use of long-term, fixed-location, high-precision monitoring, paired with both forward and inverse modelling techniques suitable for either local or regional scales, we have also incorporated mobile ground surveys and flux calculations from plumes in some contexts. We initially developed leak detection methodologies for geological carbon storage at a local scale using a Bayesian probabilistic approach coupled to a backward Lagrangian particle dispersion model (Luhar et al., JGR, 2014), and single-point monitoring with sector analysis (Etheridge et al., in prep.). We have since expanded our modelling techniques to regional scales, using both forward and inverse approaches to constrain methane emissions from coal mining and coal seam gas (CSG) production. The Surat Basin (Queensland, Australia) is a region of rapidly expanding CSG production, in which we have established a pair of carefully located, well-intercalibrated monitoring stations. These data sets provide an almost continuous record of (i) background air arriving at the Surat Basin, and (ii) the signal resulting from methane emissions within the Basin, i.e. total downwind methane concentration (comprising emissions from natural geological seeps, agricultural and biogenic sources, and fugitive emissions from CSG production) minus the background or upwind concentration. We will present our latest results on monitoring from the Surat Basin and their application to estimating methane emissions.
NASA Astrophysics Data System (ADS)
Watterson, I. G.
2010-05-01
Rainfall in southeastern Australia has declined in recent years, particularly during austral autumn over the state of Victoria. A recent study suggests that sea surface temperature (SST) variations in both the Indonesian Throughflow (ITF) region and in a meridional dipole in the central Indian Ocean have influenced Victorian late autumn rainfall since 1950. However, it remains unclear to what extent SSTs in these and other regions force such a teleconnection. Analysis of a 1080 year simulation by the climate model CSIRO Mk3.5 shows that the model Victorian rainfall is correlated rather realistically with SSTs but that part of the above relationships is due to the model ENSO. Furthermore, the remote patterns of pressure, rainfall, and land temperature greatly diminish when the data are lagged by 1 month, suggesting that the true forcing by the persisting SSTs is weak. In a series of simulations of the atmospheric Mk3.5 with idealized SST anomalies, raised SSTs to the east of Indonesia lower the simulated Australian rainfall, while those to the west raise it. A positive ITF anomaly lowers pressure over Australia, but with little effect on Victorian rainfall. The meridional dipole and SSTs to the west and southeast of Australia have little direct effect on southeastern Australia in the model. The results suggest that tropical SSTs predominate as an influence on Victorian rainfall. However, the SST indices appear to explain only a fraction of the observed trend, which in the case of decadal means remains within the range of unforced variability simulated by Mk3.5.
Mobile field data acquisition in geosciences
NASA Astrophysics Data System (ADS)
Golodoniuc, Pavel; Klump, Jens; Reid, Nathan; Gray, David
2016-04-01
The Discovering Australia's Mineral Resources Program of CSIRO is conducting a study to develop novel methods and techniques to reliably define distal footprints of mineral systems under regolith cover in the Capricorn Orogen - the area that lies between the two well-known metallogenic provinces of the Pilbara and Yilgarn Cratons in Western Australia. The multidisciplinary study goes beyond the boundaries of any single discipline and aims at developing new methods to integrate heterogeneous datasets to gain insight into the key indicators of mineralisation. The study relies on large regional datasets obtained from previous hydrogeochemical, regolith, and resistate mineral studies around known deposits, as well as new data obtained from recent field sampling campaigns around areas of interest. The thousands of water, vegetation, rock and soil samples collected over the past years prompted us to look at ways to standardise field sampling procedures and to review the data acquisition process. This process has evolved over the years (Golodoniuc et al., 2015; Klump et al., 2015) and has now reached the phase where fast and reliable collection of scientific data in remote areas is possible. The approach is backed by a unified, discipline-agnostic platform - the Federated Archaeological Information Management System (FAIMS). FAIMS is an open source framework for mobile field data acquisition, developed at the University of New South Wales for archaeological field data collection. The FAIMS framework can easily be adapted to a diverse range of scenarios and different kinds of samples, each with its own peculiarities, and offers integration with GPS and the ability to associate photographs taken with the device's embedded camera with captured data. Three modules have been developed so far, dedicated to geochemical water, plant and rock sampling. All modules feature automatic date and position recording, and reproduce the established data recording workflows.
The rock sampling module also features an interactive GIS component that allows field observations to be entered as annotations on a map. The open communication protocols and file formats used by FAIMS modules allow easy integration with existing spatial data infrastructures and third-party applications, such as ArcGIS. The remoteness of the focus areas in the Capricorn region required reliable mechanisms for data replication and an added level of redundancy. This was achieved through the use of the FAIMS Server without adding a tightly coupled dependency on it - the mobile devices can continue to work independently should the server fail. To support collaborative fieldwork, "FAIMS on a Truck" offers networked collaboration within a field team using mobile applications as asynchronous rich clients. The framework runs on compatible Android devices (e.g., tablets, smart phones) with the network infrastructure supported by a FAIMS Server. The server component is installed in a field vehicle to provide data synchronisation between multiple mobile devices, backup and data transfer. The data entry process was streamlined and followed the workflow that field crews were accustomed to, with added data validation capabilities. The use of a common platform allowed us to adopt the framework within multiple disciplines, improve data acquisition times, and reduce human-introduced errors. We continue to work with other research groups and to explore opportunities to adopt the technology in other applications, e.g., agriculture.
The Remarkable Synchrotron Nebula Associated with PSR J1015-5719
NASA Astrophysics Data System (ADS)
Ng, Chi Yung; Bandiera, Rino; Hunstead, Richard; Johnston, Simon
2017-08-01
We report the discovery of a synchrotron nebula G283.1-0.59 associated with the young and energetic pulsar J1015-5719. Radio observations using the Molonglo Observatory Synthesis Telescope (MOST) and the Australia Telescope Compact Array (ATCA) at 36, 16, 6, and 3 cm reveal a complex morphology for the source. The pulsar is embedded in the "head" of the nebula with fan-shaped diffuse emission. This is connected to a circular bubble structure of 20" radius and followed by a collimated tail extending over 1'. Polarization measurements show a highly ordered magnetic field in the nebula. The intrinsic B-field wraps around the edge of the head and shows an azimuthal configuration near the pulsar, then switches direction quasi-periodically near the bubble and in the tail. Together with the flat radio spectrum observed, we suggest that this system is most plausibly a pulsar wind nebula (PWN), with the head as a bow shock that has a low Mach number and the bubble as a shell expanding in a dense environment, possibly due to flow instabilities. In addition, the bubble could act as a magnetic bottle trapping the relativistic particles. A comparison with other bow-shock PWNe with higher Mach numbers shows similar structure and B-field geometry, implying that pulsar velocity may not be the most critical factor in determining the properties of these systems. ATCA is part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. MOST is operated by The University of Sydney with support from the Australian Research Council and the Science Foundation for Physics within the University of Sydney. This work is supported by an ECS grant under HKU 709713P.
Interfacial modulus mapping of layered dental ceramics using nanoindentation
Bushby, Andrew J; P'ng, Ken MY; Wilson, Rory M
2016-01-01
PURPOSE The aim of this study was to test the modulus of elasticity (E) across the interfaces of yttria-stabilized zirconia (YTZP)/veneer multilayers using nanoindentation. MATERIALS AND METHODS YTZP core material (KaVo-Everest, Germany) specimens were either coated with a liner (IPS e.max ZirLiner, Ivoclar-Vivadent) (Type-1) or left as-sintered (Type-2) and subsequently veneered with a pressable glass-ceramic (IPS e.max ZirPress, Ivoclar-Vivadent). A 5 µm (nominal tip diameter) spherical indenter was used with a UMIS CSIRO 2000 (ASI, Canberra, Australia) nanoindenter system to test E across the exposed and polished interfaces of both specimen types. The multiple point load-partial unload method was used for E determination. All materials used were characterized using scanning electron microscopy (SEM) and X-ray powder diffraction (XRD). E mappings of the areas tested were produced from the nanoindentation data. RESULTS A significantly (P<.05) lower E value between Type-1 and Type-2 specimens at a distance of 40 µm in the veneer material was associated with the liner. XRD and SEM characterization of the zirconia sample showed a fine-grained bulk tetragonal phase. The IPS e.max ZirPress and IPS e.max ZirLiner materials were characterized as amorphous. CONCLUSION The liner between the YTZP core and the heat-pressed veneer may act as a weak link in this dental multilayer due to its significantly (P<.05) lower E. The present study has shown nanoindentation with a spherical indenter and the multiple point load-partial unload method to be a reliable predictor of E and a useful evaluation tool for layered dental ceramic interfaces. PMID:28018566
Narrowing the surface temperature range in CMIP5 simulations over the Arctic
NASA Astrophysics Data System (ADS)
Hao, Mingju; Huang, Jianbin; Luo, Yong; Chen, Xin; Lin, Yanluan; Zhao, Zongci; Xu, Ying
2018-05-01
Much uncertainty exists in reproducing Arctic temperature using different general circulation models (GCMs). Therefore, evaluating the performance of GCMs in reproducing Arctic temperature is critically important. In our study, 32 GCMs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) during the period 1900-2005 are used, and several metrics, i.e., bias, correlation coefficient (R), and root mean square error (RMSE), are applied. The Cowtan data set is adopted as the reference data. The results suggest that the GCMs used can reasonably reproduce the observed Arctic warming trend during the period 1900-2005, whereas a large inter-model spread exists in modeling the Arctic warming magnitude. With respect to the reference data, most GCMs have large cold biases, whereas others have weak warm biases. Additionally, based on statistical thresholds, the models MIROC-ESM, CSIRO-Mk3-6-0, HadGEM2-AO, and MIROC-ESM-CHEM (bias ≤ ±0.10 °C, R ≥ 0.50, and RMSE ≤ 0.60 °C) are identified as well-performing GCMs. The ensemble of the four best-performing GCMs (ES4), with bias, R, and RMSE values of -0.03 °C, 0.72, and 0.39 °C, respectively, performs better than the ensemble of all 32 members, with bias, R, and RMSE values of -0.04 °C, 0.64, and 0.43 °C, respectively. Finally, ES4 is used to produce projections for the next century under the RCP2.6, RCP4.5, and RCP8.5 scenarios. The uncertainty in the projected temperature is greater in the higher emissions scenarios. Additionally, the projected temperature in the cold half year has larger variations than that in the warm half year.
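The three skill metrics and the selection thresholds quoted above can be sketched directly; function names are ours, and the paper's exact spatial/temporal averaging domain is not reproduced:

```python
import numpy as np

def evaluate_gcm(model, ref):
    """Return mean bias, Pearson correlation R, and RMSE of a model
    temperature series against a reference series."""
    model, ref = np.asarray(model, float), np.asarray(ref, float)
    bias = float(np.mean(model - ref))
    r = float(np.corrcoef(model, ref)[0, 1])
    rmse = float(np.sqrt(np.mean((model - ref) ** 2)))
    return bias, r, rmse

def passes_threshold(bias, r, rmse):
    """Selection rule quoted in the abstract:
    |bias| <= 0.10 degC, R >= 0.50, RMSE <= 0.60 degC."""
    return abs(bias) <= 0.10 and r >= 0.50 and rmse <= 0.60
```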
Luedeling, Eike; Zhang, Minghua; Girvetz, Evan H
2009-07-16
Winter chill is one of the defining characteristics of a location's suitability for the production of many tree crops. We mapped and investigated observed historic and projected future changes in winter chill in California, quantified with two different chilling models (Chilling Hours, Dynamic Model). Based on hourly and daily temperature records, winter chill was modeled for two past temperature scenarios (1950 and 2000), and 18 future scenarios (average conditions during 2041-2060 and 2080-2099 under each of the B1, A1B and A2 IPCC greenhouse gas emissions scenarios, for the CSIRO-MK3, HadCM3 and MIROC climate models). For each scenario, 100 replications of the yearly temperature record were produced, using a stochastic weather generator. We then introduced and mapped a novel climatic statistic, "safe winter chill", the 10% quantile of the resulting chilling distributions. This metric can be interpreted as the amount of chilling that growers can safely expect under each scenario. Winter chill declined substantially for all emissions scenarios, with the area of safe winter chill for many tree species or cultivars decreasing 50-75% by mid-21st century, and 90-100% by late century. Both chilling models consistently projected climatic conditions by the middle to end of the 21st century that will no longer support some of the main tree crops currently grown in California, with the Chilling Hours Model projecting greater changes than the Dynamic Model. The tree crop industry in California will likely need to develop agricultural adaptation measures (e.g. low-chill varieties and dormancy-breaking chemicals) to cope with these projected changes. For some crops, production might no longer be possible.
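Of the two chilling models above, the Chilling Hours Model is the simpler: one chill hour is conventionally accrued for each hour with temperature in the 0-7.2 °C band, and "safe winter chill" is the 10% quantile over the stochastic replications. A minimal sketch (the Dynamic Model is considerably more involved and is not reproduced here):

```python
import numpy as np

def chilling_hours(hourly_temps_c):
    """Chilling Hours Model: count hours with 0 degC <= T <= 7.2 degC
    (the conventional threshold band)."""
    t = np.asarray(hourly_temps_c, float)
    return int(np.sum((t >= 0.0) & (t <= 7.2)))

def safe_winter_chill(seasonal_totals):
    """'Safe winter chill' as defined above: the 10% quantile of
    seasonal chill totals across the stochastic replications."""
    return float(np.quantile(seasonal_totals, 0.10))
```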
Moo-Llanes, David; Ibarra-Cerdeña, Carlos N.; Rebollar-Téllez, Eduardo A.; Ibáñez-Bernal, Sergio; González, Camila; Ramsey, Janine M.
2013-01-01
Ecological niche models are useful tools to infer potential spatial and temporal distributions in vector species and to measure epidemiological risk for infectious diseases such as the Leishmaniases. The ecological niche of 28 North and Central American sand fly species, including those with epidemiological relevance, can be used to analyze the vector's ecology and its association with transmission risk, and plan integrated regional vector surveillance and control programs. In this study, we model the environmental requirements of the principal North and Central American phlebotomine species and analyze three niche characteristics over future climate change scenarios: i) potential change in niche breadth, ii) direction and magnitude of niche centroid shifts, iii) shifts in elevation range. Niche identity between confirmed or incriminated Leishmania vector sand flies in Mexico, and human cases were analyzed. Niche models were constructed using sand fly occurrence datapoints from Canada, USA, Mexico, Guatemala and Belize. Nine non-correlated bioclimatic and four topographic data layers were used as niche components using GARP in OpenModeller. Both B2 and A2 climate change scenarios were used with two general circulation models for each scenario (CSIRO and HadCM3), for 2020, 2050 and 2080. There was an increase in niche breadth to 2080 in both scenarios for all species with the exception of Lutzomyia vexator. The principal direction of niche centroid displacement was to the northwest (64%), while the elevation range decreased greatest for tropical, and least for broad-range species. Lutzomyia cruciata is the only epidemiologically important species with high niche identity with that of Leishmania spp. in Mexico. Continued landscape modification in future climate change will provide an increased opportunity for the geographic expansion of NCA sand flies' ENM and human exposure to vectors of Leishmaniases. PMID:24069478
Nidumolu, Uday; Crimp, Steven; Gobbett, David; Laing, Alison; Howden, Mark; Little, Stephen
2014-08-01
The Murray dairy region produces approximately 1.85 billion litres of milk each year, representing about 20 % of Australia's total annual milk production. An ongoing production challenge in this region is the management of the impacts of heat stress during spring and summer. An increase in the frequency and severity of extreme temperature events due to climate change may result in additional heat stress and production losses. This paper assesses the changing nature of heat stress now, and into the future, using historical data and climate change projections for the region using the temperature humidity index (THI). Projected temperature and relative humidity changes from two global climate models (GCMs), CSIRO MK3.5 and CCR-MIROC-H, have been used to calculate THI values for 2025 and 2050, and summarized as mean occurrence of, and mean length of consecutive high heat stress periods. The future climate scenarios explored show that by 2025 an additional 12-15 days (compared to 1971 to 2000 baseline data) of moderate to severe heat stress are likely across much of the study region. By 2050, larger increases in severity and occurrence of heat stress are likely (i.e. an additional 31-42 moderate to severe heat stress days compared with baseline data). This increasing trend will have a negative impact on milk production among dairy cattle in the region. The results from this study provide useful insights on the trends in THI in the region. Dairy farmers and the dairy industry could use these results to devise and prioritise adaptation options to deal with projected increases in heat stress frequency and severity.
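The abstract does not state which THI variant was used; a widely used dairy formulation (NRC 1971, with dry-bulb temperature in °C and relative humidity in %) can be sketched as:

```python
def thi(temp_c, rh_percent):
    """Temperature-humidity index, NRC (1971) form commonly used for
    dairy heat stress. Assumed variant; the paper may use another."""
    t_f = 1.8 * temp_c + 32  # convert dry-bulb temperature to Fahrenheit
    return t_f - (0.55 - 0.0055 * rh_percent) * (t_f - 58)
```

Heat-stress day counts like those in the abstract would then follow by thresholding daily THI values (e.g. moderate vs severe bands) over the projection period.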
NASA Astrophysics Data System (ADS)
Wang, Xunming; Yang, Yi; Dong, Zhibao; Zhang, Caixia
2009-06-01
Most areas of arid and semiarid China are covered by aeolian sand dunes, sand sheets, and desert steppes, and the existence of the nearly 80 million people who live in this region could be seriously jeopardized if climate change increases desertification. However, the expected trends in desertification during the 21st century are poorly understood. In the present study, we selected the ECHAM4 and HadCM3 global climate models (after comparing them with the results of the GFDL-R30, CGCM2, and CSIRO-Mk2b models) and used simulations of a dune mobility index under IPCC SRES climate scenarios A1FI, A2a, A2b, A2c, B1a, B2a, and B2b to estimate future trends in dune activity and desertification in China. Although uncertainties in climate predictions mean that there is still far to go before we can develop a comprehensive dune activity estimation system, HadCM3 simulations with most greenhouse forcing scenarios showed decreased desertification in most of the western region of arid and semiarid China by 2039, but increased desertification thereafter, whereas ECHAM4 simulation results showed that desertification will increase during this period. Inhabitants of the central region will benefit from reversed desertification from 2010 to 2099, whereas inhabitants of the eastern region will suffer from increased desertification from 2010 to 2099. From 2010 to 2039, most regions will not be significantly affected by desertification, but from 2040 to 2099, the environments of the western and eastern regions will deteriorate due to the significant effects of global warming (particularly the interaction between precipitation and potential evapotranspiration), leading to decreased livestock and grain yields and possibly threatening China's food security.
NASA Astrophysics Data System (ADS)
Lebourgeois, François; Pierrat, Jean-Claude; Perez, Vincent; Piedallu, Christian; Cecchini, Sébastien; Ulrich, Erwin
2010-09-01
After modeling the large-scale climate response patterns of leaf unfolding, leaf coloring and growing season length of evergreen and deciduous French temperate trees, we predicted the effects of eight future climate scenarios on phenological events. We used the ground observations from 103 temperate forests (10 species and 3,708 trees) from the French Renecofor Network and for the period 1997-2006. We applied RandomForest algorithms to predict phenological events from climatic and ecological variables. With the resulting models, we drew maps of phenological events throughout France under present climate and under two climatic change scenarios (A2, B2) and four global circulation models (HadCM3, CGCM2, CSIRO2 and PCM). We compared current observations and predicted values for the periods 2041-2070 and 2071-2100. On average, spring development of oaks precedes that of beech, which precedes that of conifers. Annual cycles in budburst and leaf coloring are highly correlated with January, March-April and October-November weather conditions through temperature, global solar radiation or potential evapotranspiration depending on species. At the end of the twenty-first century, each model predicts earlier budburst (mean: 7 days) and later leaf coloring (mean: 13 days) leading to an average increase in the growing season of about 20 days (for oaks and beech stands). The A2-HadCM3 hypothesis leads to an increase of up to 30 days in many areas. As a consequence of higher predicted warming during autumn than during winter or spring, shifts in leaf coloring dates appear greater than trends in leaf unfolding. At a regional scale, highly differing climatic response patterns were observed.
Validation and Inter-comparison Against Observations of GODAE Ocean View Ocean Prediction Systems
NASA Astrophysics Data System (ADS)
Xu, J.; Davidson, F. J. M.; Smith, G. C.; Lu, Y.; Hernandez, F.; Regnier, C.; Drevillon, M.; Ryan, A.; Martin, M.; Spindler, T. D.; Brassington, G. B.; Oke, P. R.
2016-02-01
For weather forecasts, validation of forecast performance is done at the end user level as well as by the meteorological forecast centers. In the development of ocean prediction capacity, the same level of care for ocean forecast performance and validation is needed. Herein we present results from a validation against observations of six global ocean forecast systems under the GODAE OceanView International Collaboration Network. These systems include the Global Ocean Ice Forecast System (GIOPS) developed by the Government of Canada, the two systems PSY3 and PSY4 from the French Mercator-Ocean ocean forecasting group, the FOAM system from the UK Met Office, HYCOM-RTOFS from NOAA/NCEP/NWA of USA, and the Australian Bluelink-OceanMAPS system from the CSIRO, the Australian Meteorological Bureau and the Australian Navy. The observation data used in the comparison are sea surface temperature, sub-surface temperature, sub-surface salinity, sea level anomaly, and sea ice total concentration data. Results of the inter-comparison demonstrate forecast performance limits, strengths and weaknesses of each of the six systems. This work establishes validation protocols and routines by which all new prediction systems developed under the CONCEPTS Collaborative Network will be benchmarked prior to approval for operations. This includes anticipated delivery of CONCEPTS regional prediction systems over the next two years, including a pan-Canadian 1/12th degree resolution ice-ocean prediction system and limited area 1/36th degree resolution prediction systems. The validation approach of comparing forecasts to observations at the time and location of the observation is called Class 4 metrics. It has been adopted by major international ocean prediction centers, and will be recommended to JCOMM-WMO as a routine validation approach for operational oceanography worldwide.
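The Class 4 idea described above - evaluate each forecast at the time and location of an observation - can be sketched with a nearest-grid-point match; operational systems interpolate in space and time, and all names here are illustrative:

```python
import numpy as np

def match_to_obs(model_field, model_lats, model_lons, obs):
    """Sample a gridded forecast at each observation's location
    (nearest grid point). `obs` is a list of (lat, lon, value)."""
    matched = []
    for lat, lon, value in obs:
        i = int(np.abs(model_lats - lat).argmin())
        j = int(np.abs(model_lons - lon).argmin())
        matched.append((model_field[i, j], value))
    return matched

def class4_rmse(matched):
    """RMSE of forecast-minus-observation over all matched pairs."""
    err = np.array([f - o for f, o in matched])
    return float(np.sqrt(np.mean(err ** 2)))
```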
NASA Astrophysics Data System (ADS)
Nidumolu, Uday; Crimp, Steven; Gobbett, David; Laing, Alison; Howden, Mark; Little, Stephen
2014-08-01
The Murray dairy region produces approximately 1.85 billion litres of milk each year, representing about 20% of Australia's total annual milk production. An ongoing production challenge in this region is managing the impacts of heat stress during spring and summer. An increase in the frequency and severity of extreme temperature events due to climate change may result in additional heat stress and production losses. This paper assesses the changing nature of heat stress, now and into the future, using historical data and climate change projections for the region and the temperature-humidity index (THI). Projected temperature and relative humidity changes from two global climate models (GCMs), CSIRO Mk3.5 and CCR-MIROC-H, have been used to calculate THI values for 2025 and 2050, summarized as the mean occurrence of, and mean length of, consecutive high heat stress periods. The future climate scenarios explored show that by 2025 an additional 12-15 days of moderate to severe heat stress (compared to 1971-2000 baseline data) are likely across much of the study region. By 2050, larger increases in the severity and occurrence of heat stress are likely (an additional 31-42 moderate to severe heat stress days compared with baseline data). This increasing trend will have a negative impact on milk production among dairy cattle in the region. The results from this study provide useful insights into THI trends in the region, which dairy farmers and the dairy industry could use to devise and prioritise adaptation options for projected increases in heat stress frequency and severity.
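The abstract does not give the exact THI formulation used; a commonly used dairy formulation (assumed here, with an illustrative severity threshold) combines air temperature and relative humidity as follows:

```python
def thi(temp_c, rh_pct):
    """Temperature-humidity index; a widely used dairy formulation (assumed,
    not necessarily the authors'): THI = 0.8*T + (RH/100)*(T - 14.4) + 46.4,
    with T in degrees Celsius and RH in percent."""
    return 0.8 * temp_c + (rh_pct / 100.0) * (temp_c - 14.4) + 46.4

def heat_stress_spells(daily_thi, threshold=82):
    """Lengths of consecutive runs of days at or above an (illustrative)
    high heat stress THI threshold."""
    spells, run = [], 0
    for value in daily_thi:
        if value >= threshold:
            run += 1
        elif run:
            spells.append(run)
            run = 0
    if run:
        spells.append(run)
    return spells
```

Applied to daily GCM-projected temperature and humidity series, the spell lengths give the "mean length of consecutive high heat stress periods" summarized in the study.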
Kistner, Erica Jean
2017-12-08
The invasive brown marmorated stink bug, Halyomorpha halys (Stål; Hemiptera: Pentatomidae), has recently emerged as a harmful pest of horticultural crops in North America and Europe. Native to East Asia, this highly polyphagous insect is spreading rapidly worldwide. Climate change will add further complications to managing this species in terms of both geographic distribution and population growth. This study used CLIMEX to compare potential H. halys distribution under recent and future climate models using one emission scenario (A2) with two different global circulation models, CSIRO Mk3.0 and MIROC-H. Simulated changes in seasonal phenology and voltinism were examined. Under the possible future climate scenarios, suitable range in Europe expands northward. In North America, the suitable H. halys range shifts northward into Canada and contracts from its southern temperature range limits in the United States due to increased heat stress. Prolonged periods of warm temperatures resulted in longer H. halys growing seasons. However, future climate scenarios indicated that rising summer temperatures decrease H. halys growth potential compared to recent climatic conditions, which in turn, may reduce mid-summer crop damage. Climate change may increase the number of H. halys generations produced annually, thereby enabling the invasive insect to become multivoltine in the northern latitudes of North America and Europe where it is currently reported to be univoltine. These results indicate prime horticultural production areas in Europe, the northeastern United States, and southeastern Canada are at greatest risk from H. halys under both current and possible future climates. Published by Oxford University Press on behalf of Entomological Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Soil salinity assessment through satellite thermography for different irrigated and rainfed crops
NASA Astrophysics Data System (ADS)
Ivushkin, Konstantin; Bartholomeus, Harm; Bregt, Arnold K.; Pulatov, Alim; Bui, Elisabeth N.; Wilford, John
2018-06-01
The use of canopy thermography is an innovative approach to salinity stress detection in plants, but its applicability to landscape-scale studies using satellite sensors is still not well investigated. The aim of this research is to test the satellite-thermography approach to soil salinity assessment on study areas with different crops, grown under both irrigated and rainfed conditions, to evaluate whether the approach has general applicability. Four study areas in four different states of Australia were selected to give broad representation of different crops cultivated under irrigated and rainfed conditions. The soil salinity map was prepared by staff of Geoscience Australia and CSIRO Land and Water and is based on thorough soil sampling together with environmental modelling. Remote sensing data were captured by the Landsat 5 TM satellite. In the analysis we used vegetation indices and brightness temperature as an indicator of canopy temperature. Applying analysis of variance and time-series analysis, we investigated the applicability of satellite remote sensing of canopy temperature as an approach to soil salinity assessment for different crops grown under irrigated and rainfed conditions. We conclude that in all cases average canopy temperatures were significantly correlated with the soil salinity of the area. This relation holds for all investigated crops, grown both irrigated and rainfed; nevertheless, crop type does influence the strength of the relation. In our case cotton shows only a minor temperature difference compared to other vegetation classes. The strongest relations between canopy temperature and soil salinity were observed at the moment of maximum green biomass of the crops, which is thus considered the best time to apply the approach.
SUPPLEMENTARY COMPARISON: APMP.EM-S6: Bilateral supplementary comparison of resistance
NASA Astrophysics Data System (ADS)
Charoensook, Ajchara; Jassadajin, Chaiwat; Chen, Henry; Ricketts, Brian
2004-01-01
A bilateral supplementary comparison of resistance, APMP.EM-S6, was conducted between the National Institute of Metrology Thailand (NIMT) and the CSIRO National Measurement Laboratory of Australia (NML). The comparison covered seven values of resistance, 0.1 Ω, 1 Ω, 100 Ω, 10 kΩ, 100 kΩ, 1 MΩ and 100 MΩ. The 0.1 Ω resistor was a YEW type 2792, the 100 MΩ resistor was an IET type SRC and the other five resistors were Fluke type 742A. The resistors were supplied by NIMT, and NML was the pilot laboratory for the comparison. The measurements for the comparison were made between December 2003 and April 2004. The resistors were measured on three separate occasions by NIMT and between each of these occasions the resistors were sent to NML for measurement. The resistors of nominal values 0.1 Ω, 1 Ω, 100 Ω and 10 kΩ were measured as four-terminal resistors while the 100 kΩ, 1 MΩ and 100 MΩ resistors were measured as two-terminal resistors. The quantity En (the absolute value of the difference between the values obtained at the two laboratories divided by the root sum square of the expanded uncertainties of the two values) was calculated for each resistance value used in the comparison. In all cases En was less than 0.5. The final report, which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org), has been peer-reviewed and approved for publication by the APMP, according to the provisions of the Mutual Recognition Arrangement (MRA).
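The En quantity defined above reduces to a one-line computation; a minimal sketch (illustrative values, not the actual comparison data):

```python
import math

def e_n(value_a, u_a, value_b, u_b):
    """En ratio: absolute difference between the two laboratories' values,
    divided by the root sum square of their expanded uncertainties."""
    return abs(value_a - value_b) / math.hypot(u_a, u_b)

# Hypothetical 100 ohm measurements from two laboratories:
score = e_n(100.001, 0.004, 100.000, 0.003)  # 0.001 / 0.005 = 0.2
```

An En value below 1 indicates agreement within the stated expanded uncertainties; all seven resistance values in this comparison gave En below 0.5.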
High-resolution photo-mosaic time-series imagery for monitoring human use of an artificial reef.
Wood, Georgina; Lynch, Tim P; Devine, Carlie; Keller, Krystle; Figueira, Will
2016-10-01
Successful marine management relies on understanding patterns of human use. However, obtaining data can be difficult and expensive given the widespread and variable nature of activities conducted. Remote camera systems are increasingly used to overcome cost limitations of conventional labour-intensive methods. Still, most systems face trade-offs between the spatial extent and resolution over which data are obtained, limiting their application. We trialed a novel methodology, CSIRO Ruggedized Autonomous Gigapixel System (CRAGS), for time series of high-resolution photo-mosaic (HRPM) imagery to estimate fine-scale metrics of human activity at an artificial reef located 1.3 km from shore. We compared estimates obtained using the novel system to those produced with a web camera that concurrently monitored the site. We evaluated the effect of day type (weekday/weekend) and time of day on each of the systems and compared to estimates obtained from binocular observations. In general, both systems delivered similar estimates for the number of boats observed and to those obtained by binocular counts; these results were also unaffected by the type of day (weekend vs. weekday). CRAGS was able to determine additional information about the user type and party size that was not possible with the lower resolution webcam system. However, there was an effect of time of day as CRAGS suffered from poor image quality in early morning conditions as a result of fixed camera settings. Our field study provides proof of concept of use of this new cost-effective monitoring tool for the remote collection of high-resolution large-extent data on patterns of human use at high temporal frequency.
Data Convergence - An Australian Perspective
NASA Astrophysics Data System (ADS)
Allen, S. S.; Howell, B.
2012-12-01
Coupled numerical physical, biogeochemical and sediment models are increasingly being used as integrators to help understand the cumulative or far-field effects of change in the coastal environment. This reliance on modeling has forced observations to be delivered as data streams ingestible by modeling frameworks. This has made it easier to create near-real-time or forecasting models than to try to recreate the past, and has led in turn to the conversion of historical data into data streams so that they can be ingested by the same frameworks. The model and observation frameworks under development within Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) are now feeding into the Australian Ocean Data Network's (AODN's) MARine Virtual Laboratory (MARVL). The sensor, or data stream, brokering solution is centred around the "message": all data flowing through the gateway are wrapped as messages. Messages consist of a topic and a data object, and their routing through the gateway to pre-processors and listeners is determined by the topic. The Sensor Message Gateway (SMG) method allows data from different sensors measuring the same quantity but with different temporal resolutions, units or spatial coverage to be ingested or visualized seamlessly. At the same time, exposing model output as a virtual sensor is being explored, again enabled by the SMG. Rigorous adherence to standards is needed only for two-way communication with sensors; by accepting existing data in less-than-ideal formats but exposing them through the SMG, we can move a step closer to the Internet of Things by creating an Internet of Industries, where each vested interest can continue with business as usual, contribute to data convergence, and adopt more open standards when investment seems appropriate to that sector or business.
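The SMG's actual interfaces are not described in the abstract; the topic-plus-data-object routing idea can be sketched as a minimal publish/subscribe gateway (hypothetical class and topic names):

```python
from collections import defaultdict

class SensorMessageGateway:
    """Minimal topic-routed gateway sketch: every datum flowing through the
    gateway is wrapped as a (topic, data object) message."""
    def __init__(self):
        self.listeners = defaultdict(list)

    def subscribe(self, topic, listener):
        """Register a listener (or pre-processor) for a topic."""
        self.listeners[topic].append(listener)

    def publish(self, topic, data):
        """Route the message to every listener registered for its topic."""
        for listener in self.listeners[topic]:
            listener(data)

received = []
gw = SensorMessageGateway()
gw.subscribe("water/temperature", received.append)
gw.publish("water/temperature", {"depth_m": 2, "value_c": 18.4})
```

Pre-processors (unit conversion, resampling) would subscribe exactly like listeners, which is what lets sensors with different units or resolutions be ingested seamlessly.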
Using simulation to improve wildlife surveys: Wintering mallards in Mississippi, USA
Pearse, A.T.; Reinecke, K.J.; Dinsmore, S.J.; Kaminski, R.M.
2009-01-01
Wildlife conservation plans generally require reliable data about population abundance and density. Aerial surveys often can provide these data; however, associated costs necessitate designing and conducting surveys efficiently. We developed methods to simulate population distributions of mallards (Anas platyrhynchos) wintering in western Mississippi, USA, by combining bird observations from three previous strip-transect surveys and habitat data from three sets of satellite images representing conditions when surveys were conducted. For each simulated population distribution, we compared 12 primary survey designs and two secondary design options by using coefficients of variation (CV) of population indices as the primary criterion for assessing survey performance. In all, 3 of the 12 primary designs provided the best precision (CV ≤ 11.7%) and performed equally well (mean difference ≤ 0.6%). Features of the designs that provided the largest gains in precision were optimal allocation of sample effort among strata and configuring the study area into five rather than four strata, to more precisely estimate mallard indices in areas of consistently high density. Of the two secondary design options, we found including a second observer to double the size of strip transects increased precision or decreased costs, whereas ratio estimation using auxiliary habitat data from satellite images did not increase precision appreciably. We recommend future surveys of mallard populations in our study area use the strata we developed, optimally allocate samples among strata, employ PPS or EPS sampling, and include two observers when qualified staff are available.
More generally, the methods we developed to simulate population distributions from prior survey data provide a cost-effective method to assess performance of alternative wildlife surveys critical to informing management decisions, and could be extended to account for effects of detectability on estimates of true abundance. © 2009 CSIRO.
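The precision criterion used above is the coefficient of variation of the simulated population indices; a minimal sketch (hypothetical index estimates):

```python
import statistics

def coefficient_of_variation(estimates):
    """CV (%) of population index estimates across simulated survey replicates;
    lower values indicate a more precise survey design."""
    return 100.0 * statistics.stdev(estimates) / statistics.mean(estimates)

# Hypothetical index estimates from repeated simulated surveys of one design:
cv = coefficient_of_variation([90.0, 100.0, 110.0])
```

Ranking candidate designs by CV computed over many simulated distributions is how the best-performing designs were identified.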
NASA Astrophysics Data System (ADS)
Koe, Lawrence C. C.; Arellano, Avelino F.; McGregor, John L.
The 1997 Indonesian forest fires were an environmental disaster of exceptional proportions, causing massive transboundary air pollution and indiscriminate destruction of biodiversity. The immediate consequence of the fires was the production of large amounts of haze in the region, causing visibility and health problems within Southeast Asia. Furthermore, fires of these magnitudes are potential contributors to global warming and climate change due to the emission of large amounts of greenhouse gases and other pyrogenic products. The long-range transport of fire-related haze in the region is investigated using trajectories from the CSIRO Division of Atmospheric Research Limited Area Model (DARLAM). Emission scenarios were constructed for hotspot areas in Sumatra and Kalimantan for September and October 1997 to determine the periods and fire locations most critical to Singapore. This study also examines some transport issues raised by field observations. Results show that fires in the coastal areas of southeast Sumatra and southwest Kalimantan can be potential contributors to transboundary air pollution in Singapore. Singapore was directly affected by haze from these areas, whereas Kuala Lumpur was heavily affected by haze coming from Sumatra. In most cases, Singapore was more affected by fires from Kalimantan than was Kuala Lumpur, mainly as a result of the shifting of the monsoons. The monsoon transition resulted in weaker low-level winds and shifted convergence zones to near the southeast of Peninsular Malaysia. In addition to the severe drought and massive fire activity of 1997, the timing of the monsoon transition has a strong influence on haze transport in the region.
The UltraBattery-A new battery design for a new beginning in hybrid electric vehicle energy storage
NASA Astrophysics Data System (ADS)
Cooper, A.; Furukawa, J.; Lam, L.; Kellaway, M.
The UltraBattery, developed by CSIRO Energy Technology in Australia, is a hybrid energy storage device which combines an asymmetric supercapacitor and a lead-acid battery in single unit cells. This takes the best from both technologies without the need for extra, expensive electronic controls. The capacitor enhances the power and lifespan of the lead-acid battery by acting as a buffer during high-rate discharging and charging, enabling the battery to provide and absorb charge rapidly during vehicle acceleration and braking. The initial performance of the prototype UltraBatteries was evaluated against the US FreedomCAR targets and was shown to meet or exceed the power, available energy, cold cranking and self-discharge targets set for both minimum and maximum power-assist hybrid electric vehicles (HEVs). Other laboratory cycling tests showed a fourfold improvement over previous state-of-the-art lead-acid batteries under the RHOLAB test profile and better life than the commercial nickel/metal hydride (NiMH) cells used in a Honda Insight when tested under the EUCAR HEV profile. As a result of this work, a set of twelve 12 V modules was built by The Furukawa Battery Co., Ltd. in Japan and fitted into a Honda Insight by Provector Ltd. in place of the NiMH battery. The battery pack was fitted with full monitoring and control capabilities and the car was tested at Millbrook Proving Ground under a General Motors road test simulation cycle for an initial target of 50,000 miles, later extended to 100,000 miles. The test was completed on 15 January 2008 without any battery problems, and without the need for any conditioning or equalisation of the battery pack.
NASA Astrophysics Data System (ADS)
Trudinger, Cathy; Etheridge, David; Sturges, William; Vollmer, Martin; Miller, Benjamin; Worton, David; Rigby, Matt; Krummel, Paul; Martinerie, Patricia; Witrant, Emmanuel; Rayner, Peter; Battle, Mark; Blunier, Thomas; Fraser, Paul; Laube, Johannes; Mani, Frances; Mühle, Jens; O'Doherty, Simon; Schwander, Jakob; Steele, Paul
2015-04-01
Perfluorocarbons are very potent and long-lived greenhouse gases in the atmosphere, released predominantly during aluminium production, electronic chip manufacture and refrigeration. Mühle et al. (2010) presented records of the concentration and inferred emissions of CF4 (PFC-14), C2F6 (PFC-116) and C3F8 (PFC-218) from the 1970s up to 2008, using measurements from the Cape Grim Air Archive and a suite of tanks with old Northern Hemisphere air, and the AGAGE in situ network. Mühle et al. (2010) also estimated pre-industrial concentrations of these compounds from a small number of polar firn and ice core samples. Here we present measurements of air from polar firn at four sites (DSSW20K, EDML, NEEM and South Pole) and from air bubbles trapped in ice at two sites (DE08 and DE08-2), along with recent atmospheric measurements to give a continuous record of concentration from preindustrial levels up to the present. We estimate global emissions (with uncertainties) consistent with the concentration records. The uncertainty analysis takes into account uncertainties in characterisation of the age of air in firn and ice by the use of two different (independently-calibrated) firn models (the CSIRO and LGGE-GIPSA firn models). References Mühle, J., A.L. Ganesan, B.R. Miller, P.K. Salameh, C.M. Harth, B.R. Greally, M. Rigby, L.W. Porter, L. P. Steele, C.M. Trudinger, P.B. Krummel, S. O'Doherty, P.J. Fraser, P.G. Simmonds, R.G. Prinn, and R.F. Weiss, Perfluorocarbons in the global atmosphere: tetrafluoromethane, hexafluoroethane, and octafluoropropane, Atmos. Chem. Phys., 10, 5145-5164, doi:10.5194/acp-10-5145-2010, 2010.
Collection and Analysis of Firn Air from the South Pole, 2001
NASA Astrophysics Data System (ADS)
Butler, J. H.; Montzka, S. A.; Battle, M.; Clarke, A. D.; Mondeel, D. J.; Lind, J. A.; Hall, B. D.; Elkins, J. W.
2001-12-01
In January 2001, we collected an archive of 20th century air from the firn (snowpack) at the South Pole. Samples were collected into separate pairs of 3L glass flasks for measurements of O2/N2 (Bowdoin/Princeton) and carbon cycle gases (CMDL); individual 3L stainless steel and glass flasks for measurements of halocarbons, N2O, SF6, and COS; large (33L) stainless steel canisters for maintaining an archive of air for future analyses; and a few canisters each for measurement of 14CH4 (NIWA/CSIRO) and very low-level analyses of SF6 (SIO). Although it was hoped to obtain air dating back to the turn of the century, the analyses suggest that the earliest date was 1925 for CO2 and the mid- to late teens for heavier gases such as methyl bromide or methyl chloride. This talk will compare the analyses of halocarbons in these recently collected samples to those of air in flasks sampled at the South Pole in 1995. We also will present some results for compounds not measured in the 1995 South Pole samples owing to a paucity of air. Measurements made of the same gases in the firn air at both ends of this six-year interval, along with real-time atmospheric measurements of the same gases, are useful in evaluating assumptions about diffusion in the firn and may allow for the direct calculation of diffusion coefficients at low temperatures. This, in turn, would improve age estimates for firn air samples. New measurements will add to our existing histories established for the 20th century from analyses of firn air samples collected in both Greenland and Antarctica.
Ceccarelli, Soledad; Rabinovich, Jorge E
2015-11-01
We analyzed the possible effects of global climate change on the potential geographic distribution in Venezuela of five species of triatomines (Eratyrus mucronatus (Stål, 1859), Panstrongylus geniculatus (Latreille, 1811), Rhodnius prolixus (Stål, 1859), Rhodnius robustus (Larrousse, 1927), and Triatoma maculata (Erichson, 1848)), vectors of Trypanosoma cruzi, the etiological agent of Chagas disease. To obtain the future potential geographic distributions, expressed as climatic niche suitability, we modeled the presences of these species using two IPCC (Intergovernmental Panel on Climate Change) future emission scenarios of global climate change (A1B and B1), the global climate model CSIRO Mark 3.0, and three periods of future projection (2020, 2060, and 2080). After estimating with the MaxEnt software the future climatic niche suitability for each species, scenario, and projection period, we calculated a series of indexes of Venezuela's vulnerability at the county, state, and country level, measured as the number of people exposed due to changes in the geographical distribution of the five triatomine species analyzed. Although this is not a measure of the risk of Chagas disease transmission, we conclude that the possible future effects of global climate change on the Venezuelan population's vulnerability show a slightly decreasing trend, even taking into account future population growth; we can expect fewer locations in Venezuela where an average Venezuelan citizen would be exposed to triatomines in the next 50-70 yr. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Lebourgeois, François; Pierrat, Jean-Claude; Perez, Vincent; Piedallu, Christian; Cecchini, Sébastien; Ulrich, Erwin
2010-09-01
After modeling the large-scale climate response patterns of leaf unfolding, leaf coloring and growing season length of evergreen and deciduous French temperate trees, we predicted the effects of eight future climate scenarios on phenological events. We used the ground observations from 103 temperate forests (10 species and 3,708 trees) from the French Renecofor Network and for the period 1997-2006. We applied RandomForest algorithms to predict phenological events from climatic and ecological variables. With the resulting models, we drew maps of phenological events throughout France under present climate and under two climatic change scenarios (A2, B2) and four global circulation models (HadCM3, CGCM2, CSIRO2 and PCM). We compared current observations and predicted values for the periods 2041-2070 and 2071-2100. On average, spring development of oaks precedes that of beech, which precedes that of conifers. Annual cycles in budburst and leaf coloring are highly correlated with January, March-April and October-November weather conditions through temperature, global solar radiation or potential evapotranspiration depending on species. At the end of the twenty-first century, each model predicts earlier budburst (mean: 7 days) and later leaf coloring (mean: 13 days) leading to an average increase in the growing season of about 20 days (for oaks and beech stands). The A2-HadCM3 hypothesis leads to an increase of up to 30 days in many areas. As a consequence of higher predicted warming during autumn than during winter or spring, shifts in leaf coloring dates appear greater than trends in leaf unfolding. At a regional scale, highly differing climatic response patterns were observed.
Auscope: Australian Earth Science Information Infrastructure using Free and Open Source Software
NASA Astrophysics Data System (ADS)
Woodcock, R.; Cox, S. J.; Fraser, R.; Wyborn, L. A.
2013-12-01
Since 2005 the Australian Government has supported a series of initiatives providing researchers with access to major research facilities and information networks necessary for world-class research. Starting with the National Collaborative Research Infrastructure Strategy (NCRIS), the Australian earth science community established an integrated national geoscience infrastructure system called AuScope. AuScope is now in operation, providing a number of components to assist in understanding the structure and evolution of the Australian continent. These include the acquisition of subsurface imaging, earth composition and age analysis, a virtual drill core library, geological process simulation, and a high-resolution geospatial reference framework. To draw together information from across the earth science community in academia, industry and government, AuScope includes a nationally distributed information infrastructure. Free and Open Source Software (FOSS) has been a significant enabler in building the AuScope community and providing a range of interoperable services for accessing data and scientific software. A number of FOSS components have been created, adopted or upgraded to create a coherent, OGC-compliant Spatial Information Services Stack (SISS). SISS is now deployed at all Australian Geological Surveys, many universities and the CSIRO. Comprising a set of OGC catalogue and data services, augmented with new vocabulary and identifier services, the SISS provides a comprehensive package for organisations to contribute their data to the AuScope network. This packaging, together with a variety of software testing and documentation activities, built trust and notably reduced barriers to adoption. FOSS selection was important, not only for technical capability and robustness, but also for appropriate licensing and community models to ensure sustainability of the infrastructure in the long term.
Government agencies were sensitive to these issues and AuScope's careful selection has been rewarded by adoption. In some cases the features provided by the SISS solution are now significantly in advance of COTS offerings which will create expectations that can be passed back from users to their preferred vendors. Using FOSS, AuScope has addressed the challenge of data exchange across organisations nationally. The data standards (e.g. GeosciML) and platforms that underpin AuScope provide important new datasets and multi-agency links independent of underlying software and hardware differences. AuScope has created an infrastructure, a platform of technologies and the opportunity for new ways of working with and integrating disparate data at much lower cost. Research activities are now exploiting the information infrastructure to create virtual laboratories for research ranging from geophysics through water and the environment. Once again the AuScope community is making heavy use of FOSS to provide access to processing software and Cloud computing and HPC. The successful use of FOSS by AuScope, and the efforts made to ensure it is suitable for adoption, have resulted in the SISS being selected as a reference implementation for a number of Australian Government initiatives beyond AuScope in environmental information and bioregional assessments.
Water Catchment and Storage Monitoring
NASA Astrophysics Data System (ADS)
Bruenig, Michael; Dunbabin, Matt; Moore, Darren
2010-05-01
Sensors and Sensor Networks technologies provide the means for comprehensive understanding of natural processes in the environment by radically increasing the availability of empirical data about the natural world. This step change is achieved through a dramatic reduction in the cost of data acquisition and a many-orders-of-magnitude increase in the spatial and temporal granularity of measurements. Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) is undertaking a strategic research program developing wireless sensor network technology for environmental monitoring. As part of this research initiative, we are engaging with government agencies to densely monitor water catchments and storages, thereby enhancing understanding of the environmental processes that affect water quality. In the Gold Coast hinterland in Queensland, Australia, we are building sensor networks to monitor restoration of rainforest within the catchment, and to monitor methane flux release and water quality in the water storages. This poster will present our ongoing work in this region of eastern Australia. The Springbrook plateau in the Gold Coast hinterland lies within a World Heritage listed area, has uniquely high rainfall, hosts a wide range of environmental gradients, and forms part of the catchment for the Gold Coast's water storages. Parts of the plateau are being restored from agricultural grassland to native rainforest vegetation. Since April 2008, we have had a 10-node, multi-hop sensor network deployed there to monitor microclimate variables. This network will be expanded to 50 nodes in February 2010, and to around 200 nodes and 1000 sensors by mid-2011, spread over an area of approximately 0.8 square kilometers. The extremely dense microclimate sensing will enhance knowledge of the environmental factors that enhance or inhibit the regeneration of native rainforest.
The final network will also include nodes with acoustic and image sensing capability for monitoring higher level parameters such as fauna diversity. The regenerating rainforest environment presents a number of interesting challenges for wireless sensor networks related to energy harvesting and to reliable low-power wireless communications through dense and wet vegetation. Located downstream from the Springbrook plateau, the Little Nerang and Hinze dams are the two major water supply storages for the Gold Coast region. In September 2009 we fitted methane, light, wind, and sonar sensors to our autonomous electric boat platform and successfully demonstrated autonomous collection of methane flux release data on Little Nerang Dam. Sensor and boat status data were relayed back to a human operator on the shore of the dam via a small network of our Fleck™ nodes. The network also included 4 floating nodes each fitted with a string of 6 temperature sensors for profiling temperature at different water depths. We plan to expand the network further during 2010 to incorporate floating methane nodes, additional temperature sensing nodes, as well as land-based microclimate nodes. The overall monitoring system will provide significant data to understand the connected catchment-to-storage system and will provide continuous data to monitor and understand change trends within this world heritage area.
Hatzinger, P.B.; Böhlke, J.K.; Sturchio, N.C.; Gu, B.; Heraty, L.J.; Borden, R.C.
2009-01-01
Environmental context. Perchlorate (ClO4-) and nitrate (NO3-) are common co-contaminants in groundwater, with both natural and anthropogenic sources. Each of these compounds is biodegradable, so in situ enhanced bioremediation is one alternative for treating them in groundwater. Because bacteria typically fractionate isotopes during biodegradation, stable isotope analysis is increasingly used to distinguish this process from transport or mixing-related decreases in contaminant concentrations. However, for this technique to be useful in the field to monitor bioremediation progress, isotope fractionation must be quantified under relevant environmental conditions. In the present study, we quantify the apparent in situ fractionation effects for stable isotopes in ClO4- (Cl and O) and NO3- (N and O) resulting from biodegradation in an aquifer. Abstract. An in situ experiment was performed in a shallow alluvial aquifer in Maryland to quantify the fractionation of stable isotopes in perchlorate (Cl and O) and nitrate (N and O) during biodegradation. An emulsified soybean oil substrate that was previously injected into this aquifer provided the electron donor necessary for biological perchlorate reduction and denitrification. During the field experiment, groundwater extracted from an upgradient well was pumped into an injection well located within the in situ oil barrier, and then groundwater samples were withdrawn for the next 30 h. After correction for dilution (using Br- as a conservative tracer of the injectate), perchlorate concentrations decreased by 78% and nitrate concentrations decreased by 82% during the initial 8.6 h after the injection. The observed ratio of fractionation effects of O and Cl isotopes in perchlorate (18O/37Cl) was 2.6, which is similar to that observed in the laboratory using pure cultures (2.5). 
Denitrification by indigenous bacteria fractionated O and N isotopes in nitrate at a ratio of ≈0.8 (18O/15N), which is within the range of values reported previously for denitrification. However, the magnitudes of the individual apparent in situ isotope fractionation effects for perchlorate and nitrate were appreciably smaller than those reported in homogeneous closed systems (0.2 to 0.6 times), even after adjustment for dilution. These results indicate that (1) isotope fractionation factor ratios (18O/37Cl, 18O/15N) derived from homogeneous laboratory systems (e.g. pure culture studies) can be used qualitatively to confirm the occurrence of in situ biodegradation of both perchlorate and nitrate, but (2) the magnitudes of the individual apparent values cannot be used quantitatively to estimate the in situ extent of biodegradation of either anion. © CSIRO 2009.
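The two calculations described above, dilution correction against the Br- tracer and the ratio of apparent Rayleigh enrichment factors, can be sketched as follows (a minimal illustration with hypothetical concentrations and delta values, not the study's actual measurements):

```python
import math

# Hypothetical values for illustration only (not the study's data).
br0, c0 = 2.0, 100.0          # injectate Br- and ClO4- concentrations
d18o_0, d37cl_0 = -18.0, 4.0  # initial delta-18O and delta-37Cl of ClO4- (permil)

def dilution_corrected(c, br):
    """Correct a measured concentration for dilution, using Br- as a
    conservative tracer of the injectate."""
    return c * (br0 / br)

def epsilon(delta, delta0, f):
    """Apparent Rayleigh enrichment factor (permil): delta ~ delta0 + eps*ln(f),
    where f is the fraction of the anion remaining after biodegradation."""
    return (delta - delta0) / math.log(f)

# One later sample: Br- diluted from 2.0 to 1.0, ClO4- measured at 11.0,
# the residual anion isotopically enriched by ongoing biodegradation.
c_corr = dilution_corrected(11.0, 1.0)          # 22.0, i.e. 78% degraded
f = c_corr / c0                                 # fraction remaining = 0.22
ratio = epsilon(-6.0, d18o_0, f) / epsilon(8.6, d37cl_0, f)
print(round(ratio, 1))                          # prints 2.6, the 18O/37Cl ratio
```

Because ln(f) cancels in the ratio, the 18O/37Cl value depends only on the relative isotopic enrichments, which is why it is a robust qualitative indicator even when the individual apparent fractionation factors are damped in the field.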
The many facets of extragalactic radio surveys: towards new scientific challenges
NASA Astrophysics Data System (ADS)
2015-10-01
Radio continuum surveys are a powerful tool for detecting large numbers of objects over a wide range of redshifts and obtaining information on the intensity, polarization and distribution of radio sources across the sky. They are essential for answering fundamental questions of modern astrophysics. Radio astronomy is in the midst of a transformation. Developments in high-speed digital signal processing and broad-band optical fibre links between antennas have enabled significant upgrades of the existing radio facilities (e-MERLIN, JVLA, ATCA-CABB, eEVN, APERTIF), and are leading to next-generation radio telescopes (LOFAR, MWA, ASKAP, MeerKAT). All these efforts will ultimately lead to the realization of the Square Kilometre Array (SKA), which, owing to advances in sensitivity, field-of-view, frequency range and spectral resolution, will yield transformational science in many astrophysical research fields. The purpose of this meeting is to explore new scientific perspectives offered by modern radio surveys, focusing on synergies allowed by multi-frequency, multi-resolution observations. We will bring together researchers working on wide aspects of the physics and evolution of extra-galactic radio sources, from star-forming galaxies to AGNs and clusters of galaxies, including their role as cosmological probes. The organization of this conference has been inspired by the recent celebration of the 50th anniversary of the Northern Cross Radio Telescope in Medicina (BO), whose pioneering B2 and B3 surveys provided a significant contribution to radio astronomical studies for many decades afterwards. The conference was organized by the Istituto di Radioastronomia (INAF), and was held at the CNR Research Area in Bologna, on 20-23 October 2015.
This Conference has received support from the following bodies and funding agencies: National Institute for Astrophysics (INAF), ASTRON, RadioNet3 (through the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 283393) and the Ministry of Foreign Affairs and International Cooperation, Directorate General for the Country Promotion (under the Bilateral Grant Agreement ZA14GR02 - Mapping the Universe on the Pathway to SKA). Scientific Organizing Committee: I. Prandoni (INAF-IRA) co-chair R. Morganti (ASTRON) co-chair P. Best (ROE) A. Bonafede (Hamburg Univ.) R. Braun (SKA Org) L. Feretti (INAF-IRA) M. Jarvis (Western Cape/Oxford Univ.) E. Murphy (Caltech) R. Norris (CSIRO) M. Perez-Torres (IAA) L. Saripalli (Raman) T. Venturi (INAF-IRA) Local Organizing Committee: R. Cassano (co-chair) I. Prandoni (co-chair) A. Casoni D. Guidetti R. Lico R. Ricci M. Stagni
NASA Astrophysics Data System (ADS)
Ventouras, Spiros; Lawrence, Bryan; Woolf, Andrew; Cox, Simon
2010-05-01
The Metadata Objects for Linking Environmental Sciences (MOLES) model has been developed within the Natural Environment Research Council (NERC) DataGrid project [NERC DataGrid] to fill a missing part of the ‘metadata spectrum'. It is a framework within which to encode the relationships between the tools used to obtain data, the activities which organised their use, and the datasets produced. MOLES is primarily of use to consumers of data, especially in an interdisciplinary context, to allow them to establish details of provenance, and to compare and contrast such information without recourse to discipline-specific metadata or private communications with the original investigators [Lawrence et al 2009]. MOLES is also of use to the custodians of data, providing an organising paradigm for the data and metadata. The work described in this paper is a high-level view of the structure and content of a recent major revision of MOLES (v3.3) carried out as part of a NERC DataGrid extension project. The concepts of MOLES v3.3 are rooted in the harmonised ISO model [Harmonised ISO model], particularly in the metadata standards (ISO 19115, ISO 19115-2) and the ‘Observations and Measurements' conceptual model (ISO 19156). MOLES exploits existing concepts and relationships, and specialises information in these standards. A typical data-capture sequence involves one or more projects under which a number of activities are undertaken, using appropriate tools and methods to produce the datasets. Following this typical sequence, the relevant metadata can be partitioned into the following main sections, helpful in mapping onto the most suitable standards from the ISO 19100 series. • Project section • Activity section (including both observation acquisition and numerical computation) • Observation section (metadata regarding the methods used to obtain the data, the spatial and temporal sampling regime, quality etc.)
• Observation collection section. The key concepts in MOLES v3.3 are: a) the result of an observation is defined uniquely by the property (of a feature-of-interest), the sampling-feature (carrying the targeted property values), the procedure used to obtain the result, and the time (discrete instant or period) at which the observation takes place; b) an ‘Acquisition' and a ‘Computation' can serve as the basis for describing any observation process chain (procedure). The ‘Acquisition' uses an instrument - sensor or human being - to produce the results and is associated with field trips, flights, cruises etc., whereas the ‘Computation' class involves specific processing steps. A process chain may consist of any combination of ‘Acquisitions' and/or ‘Computations' occurring in parallel or in any order during the data-capture sequence; c) the results can be organised in collections with significantly more flexibility than if one used the original project alone; d) the structure of individual observation collections may, in general, be domain-specific; however, we are investigating the use of CSML (Climate Science Modelling Language) for atmospheric data. The model has been tested as a desk exercise by constructing object models for scenarios from various disciplines. References: NERC DataGrid: http://ndg.nerc.ac.uk. Lawrence et al., Information in environmental data grids, Phil. Trans. R. Soc. A 367(1890): 1003-1014, March 2009. Harmonised ISO model: all relevant ISO standards for geographic metadata from the TC211 series (e.g. ISO 19xxx), harmonised within a formal UML description in the ‘HollowWorld' packages available at https://www.seegrid.csiro.au/twiki/bin/view/AppSchemas/HollowWorld
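The Project / Activity / Observation / Collection partitioning might be sketched with simple classes like these (an illustrative sketch only; the class names follow the prose above, not the normative MOLES v3.3 or ISO 19156 schema):

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the MOLES-style partitioning: a Project groups
# Activities, each Activity produces Observations via a procedure made of
# Acquisition and/or Computation steps, in any order.

@dataclass
class ProcessStep:
    kind: str          # "Acquisition" (instrument/sensor) or "Computation"
    description: str

@dataclass
class Observation:
    observed_property: str   # the property of the feature-of-interest
    sampling_feature: str    # carries the targeted property values
    procedure: List[ProcessStep]
    time: str                # discrete instant or period

@dataclass
class Activity:
    name: str
    observations: List[Observation] = field(default_factory=list)

@dataclass
class Project:
    name: str
    activities: List[Activity] = field(default_factory=list)

# A cruise acquisition followed by a quality-control computation:
obs = Observation(
    observed_property="sea surface temperature",
    sampling_feature="ship track section",
    procedure=[ProcessStep("Acquisition", "hull-mounted thermistor"),
               ProcessStep("Computation", "despiking and calibration")],
    time="2009-06")
proj = Project("NDG demo", [Activity("Cruise 42", [obs])])
print(len(proj.activities[0].observations))  # prints 1
```

The point of the sketch is key concept (b): the procedure is an ordered list mixing ‘Acquisition' and ‘Computation' steps, so the same structure describes both field measurements and numerical processing chains.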
Colditz, I G; Paull, D R; Lee, C; Fisher, A D
2010-12-01
To assess the effects on physiology and behaviour of intradermal injection of sodium lauryl sulfate (SLS) as an alternative to mulesing. Three groups of Merino lambs were studied: Control (n = 10), SLS (n = 11) and Mulesed (n = 11). The SLS group received SLS (7% w/v) and benzyl alcohol (20 mg/mL) in phosphate buffer, and the Mulesed group received 6 mL topical local anaesthetic as a wound dressing. Haematology, cortisol, beta-endorphin and haptoglobin concentrations, rectal temperatures, body weight and behaviours were monitored for up to 42 days post-treatment. SLS treatment induced mild swelling followed by thin scab formation. Fever (>40°C) was observed at 12 and 24 h, cortisol concentration was elevated on days 1 and 2, haptoglobin concentration was highly elevated on days 2-7, white blood cell count was elevated on days 2 and 4 post-treatment, but average daily gain was not affected. Fever at 12 h was significantly higher in the SLS than in the Mulesed group, whereas maximum temperature, temperature area under the curve (AUC), occurrence of fever, cortisol profile, cortisol AUC, white blood cell counts and haptoglobin concentrations until day 7 were comparable. The behaviours of normal standing, total standing and total lying were modified for 2 days by SLS treatment, but changes were less marked and of shorter duration than in the Mulesed group. On day 1, the SLS group spent <5% of time in total abnormal behaviours compared with 18% in the Mulesed group. The SLS group tended to spend more time in abnormal behaviours on day 1 than the Controls. The behaviour of the SLS group was similar to that of the unmulesed Controls and their physiological responses were intermediate between the Mulesed lambs receiving post-surgical analgesia and the Controls. © 2010 CSIRO. Australian Veterinary Journal © 2010 Australian Veterinary Association.
NASA Astrophysics Data System (ADS)
Mazzoni, A.; Heggy, E.; Scabbia, G.
2017-12-01
Water scarcity in the Arabian Peninsula and North Africa is accentuated by forecast climatic variability and decreasing precipitation, and by projected population growth, urbanization and economic development that increase water demand. These factors impose uncertainties on food security and socio-economic stability in the region. We develop a water-budget model combining hydrologic, climatic and economic data to quantify water deficit volumes and groundwater depletion rates for the main aquifer systems in the area under three different climatic scenarios, calculated from the precipitation forecasts of the CSIRO, ECHAM4 and HADCM3 global circulation models from 2016 to 2050 over 1-year intervals. Water demand comprises the water requirements of each economic sector, derived from data such as population, GDP, cropland cover and electricity production, and is based upon the five Shared Socioeconomic Pathways (SSPs). Conventional and non-conventional water resource supply data are retrieved from FAO Aquastat and institutional databases. Our results suggest that over the next 35 years, in North Africa, only Egypt and Libya will exhibit severe water deficits, reaching 44% and 89.7% of their current water budgets respectively by 2050 (SSP2-AVG climatic scenario), while all the countries in the Arabian Peninsula will be subjected to water stress; the majority of small aquifers in the Arabian Peninsula will reach full depletion by 2050. In North Africa, the fossil aquifers will lose 1-15% of their volume by 2050, with total depletion within 200-300 years. Our study suggests that (1) anthropogenic pressures on water resources are harsher than projected climatic variability; (2) the estimated water deficit will induce a substantial rise in domestic food production costs, causing higher dependency on food imports; and (3) projected water deficits will most strongly impact the nations with the lowest GDP per capita, namely Egypt, Yemen and Libya.
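The supply-demand logic of such a water-budget model can be sketched in a few lines (an illustrative toy with made-up volumes and growth rates, not the study's model, which draws demand from SSP-based sectoral data and supply from GCM precipitation forecasts):

```python
# Toy yearly water budget (hypothetical values, km^3/yr): demand grows with
# population and economy, renewable supply is fixed by the climate scenario,
# and any shortfall is drawn from (and depletes) non-renewable groundwater.

def run_budget(years, renewable, demand_growth, demand0, aquifer0):
    demand, aquifer = demand0, aquifer0
    deficits = []
    for _ in range(years):
        deficit = max(0.0, demand - renewable)  # unmet demand this year
        aquifer = max(0.0, aquifer - deficit)   # groundwater depletion
        deficits.append(deficit)
        demand *= 1.0 + demand_growth           # demographic/economic growth
    return deficits, aquifer

deficits, remaining = run_budget(years=35, renewable=50.0,
                                 demand_growth=0.02, demand0=60.0,
                                 aquifer0=500.0)
print(remaining)  # a small aquifer under sustained deficit reaches 0.0 here
```

Under these made-up numbers the aquifer is fully depleted well before the 35-year horizon, mirroring the projected fate of small aquifers in the Arabian Peninsula; larger fossil reserves would instead show the slow percentage losses quoted for North Africa.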
Projected carbon stocks in the conterminous USA with land use and variable fire regimes.
Bachelet, Dominique; Ferschweiler, Ken; Sheehan, Timothy J; Sleeter, Benjamin M; Zhu, Zhiliang
2015-12-01
The dynamic global vegetation model (DGVM) MC2 was run over the conterminous USA at 30 arc sec (~800 m) to simulate the impacts of nine climate futures generated by 3 GCMs (CSIRO, MIROC and CGCM3) using 3 emission scenarios (A2, A1B and B1) in the context of the LandCarbon national carbon sequestration assessment. It first simulated potential vegetation dynamics from coast to coast assuming no human impacts and naturally occurring wildfires. A moderate effect of increased atmospheric CO2 on water use efficiency and growth enhanced carbon sequestration but did not greatly influence woody encroachment. The wildfires maintained prairie-forest ecotones in the Great Plains. With simulated fire suppression, the number and impacts of wildfires were reduced as only catastrophic fires were allowed to escape. This greatly increased the expansion of forests and woodlands across the western USA, and some of the ecotones disappeared. However, when fires did occur, their impacts (both extent and biomass consumed) were very large. We also evaluated the relative influence of human land use, including forest and crop harvest, by running the DGVM with land use (and fire suppression) and simple land management rules. From 2041 through 2060, carbon stocks (live biomass, soil and dead biomass) of US terrestrial ecosystems varied between 155 and 162 Pg C across the three emission scenarios when potential natural vegetation was simulated. With land use, periodic harvest of croplands and timberlands as well as the prevention of woody expansion across the West reduced carbon stocks to a range of 122-126 Pg C, while effective fire suppression reduced fire emissions by about 50%. Despite the simplicity of our approach, the differences between the sizes of the carbon stocks confirm other reports of the importance of land use, relative to climate change, for the carbon cycle. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Luo, S.
2016-12-01
The radiation field and cloud properties over the Southern Ocean generated by the Australian Community Climate and Earth System Simulator (ACCESS) are evaluated using multiple satellite products from the Fast Longwave And Shortwave radiative Fluxes (FLASHFlux) project and NASA/GEWEX surface radiation budget (SRB) data. The cloud properties are also evaluated using the observational simulator package COSP, a synthetic brightness temperature model (SBTM) and cloud liquid-water path data (UWisc) from University of Wisconsin satellite retrievals. All of these evaluations focus on the Southern Ocean area in an effort to understand the reasons behind the short-wave radiation biases at the surface. It is found that the model overestimates the high-level cloud fraction and the frequency of occurrence of small ice-water content, and underestimates the middle- and low-level cloud fraction and water content. In order to improve the modelled radiation fields over the Southern Ocean, two main modifications have been made to the physical schemes in the ACCESS model. Firstly, the autoconversion rate at which cloud water is converted into rain, and the accretion rate in the warm-rain scheme, have been modified, which increases the cloud liquid-water content in warm cloud layers. Secondly, the scheme which determines the fraction of supercooled liquid water in mixed-phase clouds in the parametrization of cloud optical properties has been changed to one derived from CALIPSO data, which provides larger liquid cloud fractions and thus higher optical depths than the default scheme. Sensitivity tests of these two schemes in ACCESS climate runs have shown that applying either can reduce the solar radiation reaching the surface and so reduce the short-wave radiation biases.
The greening of the Himalayas and Tibetan Plateau under climate change
NASA Astrophysics Data System (ADS)
Lamsal, Pramod; Kumar, Lalit; Shabani, Farzin; Atreya, Kishor
2017-12-01
The possible disruption by climate change (CC) of the ecological, economic and social segments of human interest has made this phenomenon a major issue over the last couple of decades. Mountains are fragile ecosystems, projected to endure a higher impact from the increased warming. This study presents modelled CC projections with respect to the suitability for the growth of nine near-treeline plant species of the Himalayas and Tibetan Plateau through a niche modelling technique using CLIMEX, and estimates their potential future distribution and the extent of greening in the region. Two global climate models, CSIRO-MK 3.0 (CS) and MIROC-H (MR), were used under IPCC A1B and A2 emission scenarios for the years 2050 and 2100. The results indicate that the climatic suitability of the nine species expands towards higher elevations into areas that are currently unsuitable, while currently suitable areas in many regions become climatically unsuitable in the future. The total climatically suitable area for the nine species at the current time is around 1.09 million km2, with an additional 0.68 and 0.35 million km2 becoming suitable by 2050 and 2100 respectively. High elevation belts, especially those lying above 3500 m, will see more climatically suitable areas for the nine species in the future. Cold stress is the main factor limiting current distribution and its decrease will affect the overall expansion of climatic suitability in the region. Impacts on nature conservation and on water and food security could be expected from such a shift of climatic suitability in the region. The nine species are (i) Abies spectabilis, (ii) Acer campbellii, (iii) Betula utilis, (iv) Juniperus indica, (v) Quercus semecarpifolia, (vi) Tsuga dumosa, (vii) Rhododendron campanulatum, (viii) Ephedra gerardiana, and (ix) Cassiope fastigiata.
Reed, R.N.; Hart, K.M.; Rodda, G.H.; Mazzotti, F.J.; Snow, R.W.; Cherkiss, M.; Rozar, R.; Goetz, S.
2011-01-01
Context. Invasive Burmese pythons (Python molurus bivittatus) are established over thousands of square kilometres of southern Florida, USA, and consume a wide range of native vertebrates. Few tools are available to control the python population, and none of the available tools have been validated in the field to assess capture success as a proportion of pythons available to be captured. Aims. Our primary aim was to conduct a trap trial for capturing invasive pythons in an area east of Everglades National Park, where many pythons had been captured in previous years, to assess the efficacy of traps for population control. We also aimed to compare results of visual surveys with trap capture rates, to determine capture rates of non-target species, and to assess capture rates as a proportion of resident pythons in the study area. Methods. We conducted a medium-scale (6053 trap nights) experiment using two types of attractant traps baited with live rats in the Frog Pond area east of Everglades National Park. We also conducted standardised and opportunistic visual surveys in the trapping area. Following the trap trial, the area was disc harrowed to expose pythons and allow calculation of an index of the number of resident pythons. Key results. We captured three pythons and 69 individuals of various rodent, amphibian, and reptile species in traps. Eleven pythons were discovered during disc harrowing operations, as were large numbers of rodents. Conclusions. The trap trial captured a relatively small proportion of the pythons that appeared to be present in the study area, although previous research suggests that trap capture rates improve with additional testing of alternative trap designs. Potential negative impacts to non-target species were minimal. Low python capture rates may have been associated with extremely high local prey abundances during the trap experiment. Implications.
Results of this trial illustrate many of the challenges in implementing and interpreting results from tests of control tools for large cryptic predators such as Burmese pythons. © CSIRO 2011.
A Research Coordination Network for Ecological Applications of Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Condon, T. D.; Strahler, A. H.
2016-12-01
Enhancing the development of terrestrial laser scanning for ecological applications is the objective of a Research Coordination Network (RCN) now funded by the US National Science Foundation. The activity has two primary goals: (1) development of a low-cost lidar scanner that will provide accurate estimates of above-ground forest biomass for carbon modeling and monitoring procedures; and (2) development of a range of new ecological applications for TLS, based on rapid forest structure measurements and 3-D reconstructions of forest plots and stands. The network, first constituted in 2015, presently comprises 69 participants (researchers, professors, postdocs, and students) at 32 institutions from Australia, Belgium, Canada, China, Finland, Netherlands, Switzerland, United Kingdom, and the United States. It is led by a Steering Committee of 15 researchers from 12 of these institutions. A primary activity of the TLSRCN is to facilitate communication of TLS developments and applications both within the group and to the broader scientific community at meetings and workshops. In 2015, RCN participants presented 27 papers and posters at international meetings and forums, including the Annual Conference of the Remote Sensing and Photogrammetry Society of the UK, SilviLaser 2015, and the Fall Meeting of the AGU. Within the group, bimonthly telecons allow the exchange of recent research developments and planning for group meetings and international conference presentations. Encouraging collaborative publications is also a focus of the RCN; 9 of the 11 journal papers published in 2015 that reported TLS research by participants combined authors from more than one research group in the network. The TLSRCN is supported by NSF Grant DBI-1455636 to Boston University, Alan Strahler Principal Investigator. Information for researchers interested in joining the network is available on the TLSRCN web site, tlsrcn.bu.edu.
The image below shows a stand of Himalayan cedars at the Australian National Arboretum in Canberra. Acquired by the experimental Dual Wavelength Echidna Lidar, it is produced from the intensity information recorded from the 1556 nm laser (courtesy Michael Schaefer, CSIRO).
NASA Astrophysics Data System (ADS)
Adegoke, J.; Engelbrecht, F.; Vezhapparambu, S.
2012-12-01
The conformal-cubic atmospheric model (CCAM) is employed in this study as a flexible downscaling tool at the climate-change time scale. In the downscaling procedure, the sea-ice and bias-corrected SSTs of 6 CGCMs (CSIRO Mk 3.5, GFDL2.1, GFDL2.0, HadCM2, ECHAM5 and Miroc-Medres) from AR4 of the IPCC were first used as lower-boundary forcing in CCAM simulations performed at a quasi-uniform resolution (about 200 km in the horizontal), which were subsequently downscaled to a resolution of about 60 km over southern and tropical Africa. All the simulations were for the A2 scenario of the Special Report on Emission Scenarios (SRES), and for the period 1961-2100. The SST biases were derived by comparing the simulated and observed present-day climatology of SSTs for 1979-1999 for each month of the year; the same monthly bias corrections were applied for the duration of the simulations. CCAM ensemble projected changes in annual average temperature and rainfall for 2071-2100 vs 1961-1990 for tropical Africa will be presented and discussed. In summary, a robust signal of drastic increases in surface temperature (more than 3 degrees C for the period 2071-2100 relative to 1961-1990) is projected across the domain. Temperature increases as large as 5 degrees C are projected over the subtropical regions in the north of the domain. An increase in rainfall over tropical Africa (for the period 2071-2100 relative to 1961-1990) is projected across the domain. This is consistent with an increase in moisture in a generally warmer atmosphere. There is a robust signal of drying along the West African coast; however, the CMIP3 CGCM projections indicate a wide range of possible rainfall futures over this region. The projection of East Africa becoming wetter is robust across the CCAM ensemble, consistent with the CGCM projections of CMIP3 and AR4.
Masses and activity of AB Doradus B a/b. The age of the AB Dor quadruple system revisited
NASA Astrophysics Data System (ADS)
Wolter, U.; Czesla, S.; Fuhrmeister, B.; Robrade, J.; Engels, D.; Wieringa, M.; Schmitt, J. H. M. M.
2014-10-01
We present a multiwavelength study of the close binary AB Dor Ba/b (Rst 137B). Our study comprises astrometric orbit measurements, optical spectroscopy, X-ray and radio observations. Using all available adaptive optics images of AB Dor B taken with VLT/NACO from 2004 to 2009, we tightly constrain its orbital period to 360.6 ± 1.5 days. We present the first orbital solution of Rst 137B and estimate the combined mass of AB Dor Ba+b as 0.69 (+0.02/-0.24) M⊙, slightly exceeding previous estimates based on IR photometry. Our determined orbital inclination of Rst 137B is close to the axial inclination of AB Dor A inferred from Doppler imaging. Our VLT/UVES spectra yield high rotational velocities of ≥30 km s-1 for both components Ba and Bb, in accord with previous measurements, which corresponds to rotation periods significantly shorter than one day. Our combined spectral model, using PHOENIX spectra, yields an effective temperature of 3310 ± 50 K for the primary and approximately 60 K less for the secondary. The optical spectra presumably cover a chromospheric flare and show that at least one component of Rst 137B is significantly active. Activity and weak variations are also found in our simultaneous XMM-Newton observations, while our ATCA radio data yield constant fluxes at the level of previous measurements. Using evolutionary models, our newly determined stellar parameters confirm that the age of Rst 137B is between 50 and 100 Myr. Based on observations collected at the European Southern Observatory, Paranal, Chile, 383.D-1002(A) and the ESO Science Archive Facility. Using data obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member states and NASA. Using data obtained with the Australia Telescope Compact Array (ATCA) operated by the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
The Early Development of Indian Radio Astronomy: A Personal Perspective
NASA Astrophysics Data System (ADS)
Swarup, Govind
In this chapter I recall my initiation into the field of radio astronomy during 1953-1955 at CSIRO, Australia; the transfer of thirty-two 6-feet (1.8-m) diameter parabolic dishes from Potts Hill, Sydney, to India in 1958; and their erection at Kalyan, near Bombay (Mumbai), in 1963-1965. The Kalyan Radio Telescope was the first modern radio telescope built in India. This led to the establishment of a very active radio astronomy group at the Tata Institute of Fundamental Research, which subsequently built two world-class radio telescopes during the last 50 years and also contributed to the development of an indigenous microwave antenna industry in India. The Ooty Radio Telescope, built during 1965-1970, has an ingenious design which takes advantage of India's location near the Earth's Equator. The long axis of this 530-m × 30-m parabolic cylinder was made parallel to the Equator, by placing it on a hill with the same slope as the geographic latitude (~11°), thus allowing it to track celestial sources continuously for 9.5 h every day. By utilizing lunar occultations, the telescope was able to measure the angular sizes of a large number of faint radio galaxies and quasars with arc-second resolution for the first time. Subsequently, during the 1990s, the group set up the Giant Metrewave Radio Telescope (GMRT) near Pune in western India, in order to investigate certain astrophysical phenomena which are best studied at decimetre and metre wavelengths. The GMRT is an array of 30 fully steerable 45-m diameter parabolic dishes, which operates at several frequencies below 1.43 GHz. These efforts have also contributed to the international proposal to construct the Square Kilometre Array (SKA). This chapter is a revised version of Swarup (Journal of Astronomical History and Heritage, 9: 21-33, 2006).
Using Feedback from Data Consumers to Capture Quality Information on Environmental Research Data
NASA Astrophysics Data System (ADS)
Devaraju, A.; Klump, J. F.
2015-12-01
Data quality information is essential to facilitate reuse of Earth science data. Recorded quality information must be sufficient for other researchers to select suitable data sets for their analysis and confirm the results and conclusions. In the research data ecosystem, several entities are responsible for data quality. Data producers (researchers and agencies) play a major role in this aspect as they often include validation checks or data cleaning as part of their work. It is possible that the quality information is not supplied with published data sets; if it is available, the descriptions might be incomplete, ambiguous or address only specific quality aspects. Data repositories have built infrastructures to share data, but not all of them assess data quality; they normally provide guidelines for documenting quality information. Some suggest that scholarly and data journals should take a role in ensuring data quality by involving reviewers to assess data sets used in articles, and incorporating data quality criteria in the author guidelines. However, this mechanism primarily addresses data sets submitted to journals. We believe that data consumers will complement existing entities in assessing and documenting the quality of published data sets. This has been adopted in crowd-source platforms such as Zooniverse, OpenStreetMap, Wikipedia, Mechanical Turk and Tomnod. This paper presents a framework, built on open-source tools, to capture and share data users' feedback on the application and assessment of research data. The framework comprises a browser plug-in, a web service and a data model such that feedback can be easily reported, retrieved and searched. The feedback records are also made available as Linked Data to promote integration with other sources on the Web. Vocabularies from Dublin Core and PROV-O are used to clarify the source and attribution of feedback. The application of the framework is illustrated with the CSIRO's Data Access Portal.
NASA Technical Reports Server (NTRS)
Meyers, Gary
1992-01-01
The background and goals of Indian Ocean thermal sampling are discussed from the perspective of a national project which has research goals relevant to variation of climate in Australia. The critical areas of SST variation are identified. The first goal of thermal sampling at this stage is to develop a climatology of thermal structure in the areas and a description of the annual variation of major currents. The sampling strategy is reviewed. Dense XBT sampling is required to achieve accurate, monthly maps of isotherm-depth because of the high level of noise in the measurements caused by aliasing of small-scale variation. In the Indian Ocean, ship routes dictate where adequate sampling can be achieved. An efficient sampling rate on available routes is determined based on objective analysis. The statistical structure required for objective analysis is described and compared at 95 locations in the tropical Pacific and 107 in the tropical Indian Ocean. XBT data management and quality control methods at CSIRO are reviewed. Results on the mean and annual variation of temperature and baroclinic structure in the South Equatorial Current and Pacific/Indian Ocean Throughflow are presented for the region between northwest Australia and Java-Timor. The mean relative geostrophic transport (0/400 db) of Throughflow is approximately 5 × 10⁶ m³/s. A nearly equal volume transport is associated with the reference velocity at 400 db. The Throughflow feeds the South Equatorial Current, which has maximum westward flow in August/September, at the end of the southeasterly Monsoon season. A strong semiannual oscillation in the South Java Current is documented. The results are in good agreement with the Semtner and Chervin (1988) ocean general circulation model.
The talk concludes with comments on data inadequacies (insufficient coverage, timeliness) particular to the Indian Ocean and suggestions on the future role that can be played by Data Centers, particularly with regard to quality control of data as research bodies are replaced by operational bodies in the Global Ocean Observing System.
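The relative geostrophic velocity underlying transport estimates of the kind quoted above (0/400 db) can be sketched from the dynamic-height difference between two stations. The numbers below are illustrative, not values from the survey:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s
G = 9.81           # gravitational acceleration, m/s^2

def coriolis(lat_deg):
    """Coriolis parameter f = 2*Omega*sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_velocity(dyn_height_diff_m, separation_m, lat_deg):
    """Surface geostrophic velocity relative to the reference level,
    from the dynamic-height difference between two XBT/CTD stations:
    v = g * dh / (f * dx)."""
    return G * dyn_height_diff_m / (coriolis(lat_deg) * separation_m)

# Hypothetical: a 0.1 m dynamic-height difference over 200 km at 12 deg S
v = geostrophic_velocity(0.1, 200e3, -12.0)  # m/s; sign follows f's sign
```

Integrating such velocities over section width and depth yields the volume transports (in units of 10⁶ m³/s) reported in the abstract.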
NASA Astrophysics Data System (ADS)
Baird, Mark E.; Everett, Jason D.; Suthers, Iain M.
2011-03-01
The research vessel Warreen obtained 1742 planktonic samples along the continental shelf and slope of southeast Australia from 1938 to 1942, representing the earliest spatially and temporally resolved zooplankton data from Australian marine waters. In this paper, Warreen observations along the southeast Australian seaboard from 28°S to 38°S are interpreted based on synoptic meteorological and oceanographic conditions and ocean climatologies. Meteorological conditions are based on the NOAA-CIRES 20th Century Reanalysis Project; oceanographic conditions use Warreen hydrological observations, and the ocean climatology is the CSIRO Atlas of Regional Seas. The Warreen observations were undertaken in waters on average 0.45 °C cooler than the climatological average, and included the longest-duration El Niño of the 20th century. In northern New South Wales (NSW), events on the time scale of a week dominate the zooplankton response. In August 1940 an unusual winter upwelling event occurred in northern NSW, driven by a stronger than average East Australian Current (EAC) and anomalous northerly winds, that resulted in high salp and larvacean abundance. In January 1941 a strong upwelling event between 28° and 33°S resulted in a filament of upwelled water being advected south and alongshore, which was low in zooplankton biovolume. In southern NSW a seasonal cycle in physical and planktonic characteristics is observed. In January 1941 the poleward extension of the EAC was strong, advecting more tropical tunicate species southward. Zooplankton abundance and distribution on the continental shelf and slope depend more on local oceanographic and meteorological conditions at weekly to monthly time scales than on continental-scale interannual trends.
The interpretation of historical zooplankton observations of the waters off southeast Australia for the purpose of quantifying anthropogenic impacts will be improved with the use of regional hindcasts of synoptic ocean and atmospheric weather that can explain some of the physically forced natural variability.
NASA Astrophysics Data System (ADS)
Marx-Zimmer, M.; Herbstmeier, U.; Dickey, J. M.; Zimmer, F.; Staveley-Smith, L.; Mebold, U.
2000-02-01
The cool atomic interstellar medium of the Large Magellanic Cloud (LMC) seems to be quite different from that in the Milky Way. In a series of three papers we study the properties of the cool atomic hydrogen in the LMC (Paper I), its relation to molecular clouds using SEST CO observations (Paper II) and the cooling mechanism of the atomic gas based on ISO [C II] investigations (Paper III). In this paper we present the results of a third 21 cm absorption line survey toward the LMC carried out with the Australia Telescope Compact Array (ATCA). 20 compact continuum sources, which lie mainly in the direction of the supergiant shell LMC 4, toward the surroundings of 30 Doradus and toward the eastern steep H I boundary, have been chosen from the 1.4 GHz snapshot continuum survey of Marx et al. We have identified 20 absorption features toward nine of the 20 sources. The properties of the cool H I clouds are investigated and are compared for the different regions of the LMC, taking the results of Dickey et al. (survey 2) into account. We find that the cool H I gas in the LMC is either unusually abundant compared to the cool atomic phase of the Milky Way or the gas is clearly colder (Tc ~ 30 K) than that in our Galaxy (Tc ~ 60 K). The properties of atomic clouds toward 30 Doradus and LMC 4 suggest a higher cooling rate in these regions compared to other parts of the LMC, probably due to an enhanced pressure near the shock fronts of LMC 4 and 30 Doradus. The detected cool atomic gas toward the eastern steep H I boundary might be the result of a high compression of gas at the leading edge. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
Mineral Physicochemistry based Geoscience Products for Mapping the Earth's Surface and Subsurface
NASA Astrophysics Data System (ADS)
Laukamp, C.; Cudahy, T.; Caccetta, M.; Haest, M.; Rodger, A.; Western Australian Centre of Excellence for 3D Mineral Mapping
2011-12-01
Mineral maps derived from remote sensing data can be used to address geological questions about mineral systems important for exploration and mining. This paper focuses on the application of geoscience-tuned multi- and hyperspectral sensors (e.g. ASTER, HyMap) and the methods to routinely create meaningful higher-level geoscience products from these data sets. The vision is a 3D mineral map of the Earth's surface and subsurface. Understanding the physicochemistry of rock-forming minerals and the related diagnostic absorption features in the visible, near, mid and far infrared is key to mineral mapping. For this, reflectance spectra obtained with lab-based visible and infrared spectroscopic (VIRS) instruments (e.g. Bruker Hemisphere Vertex 70) are compared to various remote and proximal sensing techniques. Calibration of the various sensor types is a major challenge in any such comparison. The spectral resolution of the respective instruments and the band positions are two of the main factors governing the ability to identify mineral groups or mineral species and their compositions. The routine processing method employed by the Western Australian Centre of Excellence for 3D Mineral Mapping (http://c3dmm.csiro.au) is a multiple feature extraction method (MFEM). This method targets mineral-specific absorption features rather than relying on spectral libraries or the need to find pure endmembers. The principle behind MFEM allows us to easily compare hyperspectral surface and subsurface data, laying the foundation for a seamless and accurate 3-dimensional mineral map. The advantage of VIRS techniques for geoscientific applications is the ability to deliver quantitative mineral information over multiple scales. For example, C3DMM is working towards a suite of ASTER-derived maps covering the Australian continent, scheduled for publication in 2012. A suite of higher-level geoscience products of Western Australia (e.g.
AlOH group abundance and composition) are now available. The multispectral satellite data can be integrated with hyperspectral airborne and drill core data (e.g. HyLogging), which is demonstrated by various case studies ranging from Channel Iron Deposits in the Hamersley Basin (WA) to various Australian orogenic Au deposits. Comparison with airborne and field hyperspectral or lab-based VIRS, as well as independent analyses such as XRD and geochemistry, enables us to deliver cross-calibrated geoscience products derived from the whole suite of geoscience tuned multi- and hyperspectral technologies. Kaolin crystallinity and hematite-goethite ratio for characterization of regolith, and Tschermak substitution in white micas for mapping of chemical gradients associated with hydrothermal ore deposits are a few of the multiple examples where 3D mineral maps can help to resolve geological questions.
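One common building block of absorption-feature mapping of the kind MFEM performs is a continuum-removed band depth, computed per pixel from a few bands around a diagnostic feature. The wavelengths, reflectances and nearest-sample lookup below are simplifying assumptions for illustration, not the C3DMM product algorithms:

```python
def absorption_depth(wavelengths, reflectance, left, centre, right):
    """Relative depth of an absorption feature after removing a linear
    continuum fitted between two shoulder wavelengths:
    depth = 1 - R_centre / R_continuum(centre)."""
    def sample(w):
        # nearest-sample lookup; a real implementation would interpolate
        i = min(range(len(wavelengths)), key=lambda k: abs(wavelengths[k] - w))
        return reflectance[i]
    r_left, r_right, r_centre = sample(left), sample(right), sample(centre)
    # linear continuum between the shoulders, evaluated at the centre
    t = (centre - left) / (right - left)
    r_cont = r_left + t * (r_right - r_left)
    return 1.0 - r_centre / r_cont

# Hypothetical 5-band spectrum around the ~2.2 um AlOH feature
wl = [2.10, 2.15, 2.20, 2.25, 2.30]
refl = [0.80, 0.78, 0.55, 0.77, 0.82]
d = absorption_depth(wl, refl, left=2.10, centre=2.20, right=2.30)
```

A deeper feature (larger `d`) maps to higher relative abundance of the target mineral group, which is why such products can be compared across sensors once calibration is handled.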
Virtual Labs (Science Gateways) as platforms for Free and Open Source Science
NASA Astrophysics Data System (ADS)
Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey
2016-04-01
The Free and Open Source Software (FOSS) movement promotes community engagement in software development and provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open-source VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL.
Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user-supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs, the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.
Radio Telescopes "Save the Day," Produce Data on Titan's Winds
NASA Astrophysics Data System (ADS)
2005-02-01
In what some scientists termed "a surprising, almost miraculous turnabout," radio telescopes, including major facilities of the National Science Foundation's National Radio Astronomy Observatory (NRAO), have provided data needed to measure the winds encountered by the Huygens spacecraft as it descended through the atmosphere of Saturn's moon Titan last month -- measurements feared lost because of a communication error between Huygens and its "mother ship" Cassini. A global network of radio telescopes, including the NRAO's Robert C. Byrd Green Bank Telescope (GBT) in West Virginia and eight of the ten antennas of the Very Long Baseline Array (VLBA), recorded the radio signal from Huygens during its descent on January 14. Measurements of the frequency shift caused by the craft's motion, called Doppler shift, are giving planetary scientists their first direct information about Titan's winds. "When we began working with our international partners on this project, we thought our telescopes would be adding to the wind data produced by the two spacecraft themselves. Now, with the ground-based telescopes providing the only information about Titan's winds, we are extremely proud that our facilities are making such a key contribution to our understanding of this fascinating planetary body," said Dr. Fred K.Y. Lo, Director of the National Radio Astronomy Observatory (NRAO). Early analysis of the radio-telescope data shows that Titan's wind flows from west to east, in the direction of the moon's rotation, at all altitudes. The highest wind speed, nearly 270 mph, was measured at an altitude of about 75 miles. Winds are weak near Titan's surface and increase in speed slowly up to an altitude of about 37 miles, where the spacecraft encountered highly-variable winds that scientists think indicate a region of vertical wind shear.
The ground-based Doppler measurements were carried out and processed jointly by scientists from the NASA Jet Propulsion Laboratory (JPL, USA) and the Joint Institute for VLBI in Europe (JIVE, The Netherlands), working within an international Doppler Wind Experiment team. The GBT made the first detection of Huygens' radio signal during the descent, and gave flight controllers and scientists the first indication that the spacecraft's parachute had deployed and that it was "alive" after entering Titan's atmosphere. The radio-telescope measurements also indicated changes in Huygens' speed when it exchanged parachutes and when it landed on Titan's surface. The original plan for gauging Titan's winds called for measuring the Doppler shift in the probe's signal frequency both by Cassini and by ground-based radio telescopes in the U.S., Australia, Japan and China. Cassini was best positioned to gain information on the east-west component of the winds, and the ground-based telescopes were positioned to help learn about the north-south wind component. Unfortunately, the communications error lost all the wind data from Cassini. "I've never felt such exhilarating highs and dispiriting lows as those experienced when we first detected the signal from the GBT, indicating 'all's well,' and then discovered that we had no signal at the operations center, indicating 'all's lost.' The truth, as we have now determined, lies somewhat closer to the former than the latter," said Michael Bird of the University of Bonn. In addition to measuring the motion-generated frequency shift of Huygens' radio signal, radio telescopes also were used to make extremely precise measurements of the probe's position (to within three-quarters of a mile, or one kilometer) during its descent. This experiment used the VLBA antennas, along with others employing the technique of Very Long Baseline Interferometry (VLBI).
Combination of the Doppler and VLBI data will eventually provide a three-dimensional record of motion for the Huygens Probe during its mission at Titan. Huygens was built by the European Space Agency. The radio astronomy support of the Huygens mission is coordinated by JIVE and JPL and involves the National Radio Astronomy Observatory (Green Bank, WV and Socorro, NM), the Netherlands Foundation for Research in Astronomy (ASTRON, The Netherlands), the University of Bonn (Germany), Helsinki University of Technology (Espoo, Finland), the MERLIN National Facility (Jodrell Bank, UK), the Onsala Space Observatory (Sweden), the NASA Jet Propulsion Laboratory (Pasadena, CA), the CSIRO Australia Telescope National Facility (ATNF, Sydney, Australia), the University of Tasmania (Hobart, Australia), the National Astronomical Observatories of China, the Shanghai Astronomical Observatory (Shanghai and Urumqi, China) and the National Institute of Information and Communications Technologies (Kashima Space Research Center, Japan). The Joint Institute for VLBI in Europe is hosted by ASTRON and funded by the national research councils, national facilities and institutes of The Netherlands (NOW), the United Kingdom (PPARC), Italy (CNR), Sweden (Onsala Space Observatory, National Facility), Spain (IGN) and Germany (MPIfR). The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The Cassini-Huygens mission is a cooperation between NASA, ESA and ASI, the Italian space agency. The Jet Propulsion Laboratory (JPL), a division of the California Institute of Technology in Pasadena, is managing the mission for NASA's Office of Space Science, Washington DC. JPL designed, developed and assembled the Cassini orbiter while ESA operated the Huygens atmospheric probe. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
NASA Astrophysics Data System (ADS)
Wyborn, L. A.
2007-12-01
The Information Age in Science is being driven partly by the data deluge, as exponentially growing volumes of data are generated by research. Such large volumes of data cannot be effectively processed by humans, and efficient and timely processing by computers requires the development of specific machine-readable formats. Further, as key challenges in the earth and space sciences, such as climate change, hazard prediction and sustainable resource development, require a cross-disciplinary approach, data from various domains will need to be integrated from globally distributed sources, also via machine-to-machine formats. However, it is becoming increasingly apparent that existing standards can be very domain specific, and most existing data transfer formats require human intervention. Where groups from different communities do try to combine data across domain/discipline boundaries, much time is spent reformatting and reorganizing the data; it is conservatively estimated that this can take 80% of a project's time and resources. Four different types of standards are required for machine-to-machine interaction: systems, syntactic, schematic and semantic. Standards at the systems level (WMS, WFS, etc.) and at the syntactic level (GML, Observations and Measurements, SensorML) are being developed through international standards bodies such as ISO, OGC, W3C and IEEE. In contrast, standards at the schematic level (e.g., GeoSciML, LandslidesML, WaterML, QuakeML) and at the semantic level (i.e., ontologies and vocabularies) are currently developing rapidly, in a very uncoordinated way and with little governance. As the number of groups that can machine-read each other's data depends on the size of the community that developed the schematic or semantic standards, it is essential that, to achieve global integration of earth and space science data, the required standards be developed through international collaboration using accepted standard procedures.
Once developed, the standards also require some form of governance to maintain and then extend them as the science evolves to meet new challenges. A standard that does have such governance is GeoSciML, a data transfer standard for geoscience map data. GeoSciML is currently being developed by a consortium of seven countries under the auspices of the Commission for the Management and Application of Geoscience Information (CGI), a commission of the International Union of Geological Sciences. Perhaps other 'ML' or ontology and vocabulary development teams need to look to their international domain-specific specialty societies for endorsement and governance. But the issue goes beyond the Earth and space sciences, as increasingly cross- and intra-disciplinary science requires machine-to-machine interaction with other science disciplines such as physics, chemistry and astronomy. For example, for geochemistry, do we develop a GeochemistryML or do we extend the existing Chemical Markup Language? Again, the question is who will coordinate the development of the schematic and semantic standards that underpin machine-to-machine global integration of science data. Is this a role for ICSU, for CODATA, or for another body? In order to address this issue, Geoscience Australia and CSIRO established the Solid Earth and Environmental Grid Community website to enable communities to 'advertise' standards development and to provide a community TWiki where standards can be developed in a globally open environment.
The Resilience Assessment Framework: a common indicator for land management?
NASA Astrophysics Data System (ADS)
Cowie, Annette; Metternicht, Graciela; O'Connell, Deborah
2015-04-01
At the Rio+20 conference in June 2012, the United Nations Convention to Combat Desertification (UNCCD), the Convention on Biological Diversity (CBD) and the United Nations Framework Convention on Climate Change (UNFCCC) reinforced their mutual interest in building linkages between biodiversity conservation, sustainable land management, and climate change mitigation and adaptation. The UNCCD sees building the resilience of agro-ecosystems as a common interest that could strengthen linkages between the conventions and deliver synergies in progressing the goals of each. Furthermore, enhancing the resilience of productive agro-ecosystems is fundamental to food security and sustainable development, and thus aligns with the Sustainable Development Goals (SDGs). The Global Environment Facility (GEF) shares the conventions' interest in building resilience in agro-ecosystems. Indicators of resilience are required for monitoring progress in these endeavors; application of a common indicator between the UNCCD, UNFCCC and CBD, as a measure of both land-based adaptation and ecosystem resilience, could strengthen links between the conventions and increase attention to the broad benefits of improved land management. Consequently, the Scientific and Technical Advisory Panel (STAP) to the GEF commissioned the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to produce a report reviewing the conceptual basis for resilience and proposing an indicator approach that could meet the needs of the conventions and the GEF for an indicator of agro-ecosystem resilience and land-based adaptation. The paper presents a synthesis of scientific understanding of resilience in agro-ecosystems, reviews indicators that have been proposed and, having concluded that none of the extant indicator approaches adequately assesses the resilience of agro-ecosystems, proposes a new approach to the assessment of resilience.
Recognizing that no single indicator of resilience is applicable to all agro-ecosystems, and that the involvement of stakeholders is critical to discerning the critical variables to be assessed, the proposed framework uses an iterative participatory approach to characterise the system (considering interactions across and within scales), identify the controlling variables, and assess proximity to thresholds and adaptive capacity. The framework consists of four elements: Element A: System description; Element B: Assessing the system; Element C: Adaptive governance and management; Element D: Participatory process. Element D is intended as a cross-cutting element, applying across Elements A to C, although Elements A and B can be applied as a desktop activity in a preliminary assessment. The results of the assessment are synthesised in "resilience action indicators", which summarise the state of the system with respect to the need to adapt or transform. The presentation will summarise the framework and the responses of expert reviewers, who identified strengths of the approach and challenges for implementation, particularly at program and national scales. The presentation will emphasise the conceptual basis for the approach, and the role of scientists in testing, refining and operationalizing the approach.
NASA Astrophysics Data System (ADS)
Mukherjee, Sandipan; Hazra, Anupam; Kumar, Kireet; Nandi, Shyamal K.; Dhyani, Pitamber P.
2017-09-01
In view of a significant lacuna in Himalaya-specific knowledge of expected changes in rainfall climatology, this study assesses the expected changes in the Indian summer monsoon rainfall (ISMR) pattern exclusively over the Indian Himalayan Region (IHR) during 2020-2070, relative to a baseline period of 1970-2005, under two different warming scenarios, i.e., representative concentration pathways 4.5 and 8.5 (RCP 4.5 and RCP 8.5). Five climate model products from the Commonwealth Scientific and Industrial Research Organisation-initiated Coordinated Regional Climate Downscaling Experiment of the World Climate Research Programme over the South Asia region are used for this purpose. Among the several different features of ISMR, this study investigates expected changes in the average summer monsoon rainfall and in the percentage of monthly rainfall relative to the total monsoon seasonal rainfall, using multimodel averages. Furthermore, this study identifies the topographical ranges expected to be most affected by the changing average monsoon seasonal rainfall over the IHR. Results from the multimodel average analysis indicate that the rainfall climatology is expected to increase by >0.75 mm/day over the foothills of the northwest Himalaya during 2020-2070, whereas the rainfall climatology is expected to decrease for the flood plains of the Brahmaputra under a warmer climate. The monthly percentage rainfall of June is expected to rise by more than 1% over the northwestern Himalaya during 2020-2040 (although not significant at p < 0.05), whereas that for August and September is expected to decrease over the eastern Himalaya under a warmer climate.
In terms of rainfall changes along the altitudinal gradient, this study indicates that the two significant rainfall regions, one at around 900 m and the other around 2000 m of the northwestern Himalaya are expected to see positive changes (>1%) in rainfall climatology during 2020-2070, whereas regions more than 1500 m in eastern Himalaya are expected to experience inconsistent variation in rainfall climatology under a warmer climate scenario.
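The multimodel averaging described above reduces, for each location, to differencing each model's future and baseline means and then averaging across the ensemble. The sketch below uses hypothetical grid-cell values, not the study's data:

```python
def multimodel_change(models_future, models_baseline):
    """Multi-model average change in mean rainfall (mm/day): take each
    model's (future - baseline) difference, then average across models,
    as is standard for CORDEX-style ensembles."""
    changes = [f - b for f, b in zip(models_future, models_baseline)]
    return sum(changes) / len(changes)

# Hypothetical grid-cell means (mm/day) from five downscaled models
baseline = [6.1, 5.8, 6.4, 6.0, 5.9]
future   = [6.9, 6.5, 7.2, 6.8, 6.7]
delta = multimodel_change(future, baseline)  # positive => wetter on average
```

Differencing per model before averaging cancels each model's own climatological bias, which is why change signals are more robust than the raw multimodel mean.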
Brown, Stephen K; Mahoney, K John; Cheng, Min
2004-01-01
Pollutant emissions from unflued gas heaters were assessed in CSIRO's Room Dynamic Environmental Chamber. This paper describes the chamber assessment procedure and presents findings for major commercial heaters that are nominally "low-emission". The chamber was operated at controlled conditions of temperature, humidity, ventilation and air mixing, representative of those encountered in typical indoor environments. A fixed rate of heat removal from the chamber air ensured that the heaters operated at constant heating rates, typically approximately 6 MJ/h, which simulated the operation of a heater after warm-up in an insulated dwelling in south-east Australia. The pollutants assessed were nitrogen dioxide, carbon monoxide, formaldehyde, VOCs and respirable suspended particulates. One type of heater was a lower emitter of nitrogen dioxide, but emitted greater amounts of carbon monoxide and formaldehyde (the latter becoming significant to indoor air quality). When operated with low line pressure or slight misalignment of the gas burner, this heater became a hazardous source of these pollutants. Emissions from the heaters changed little after continuous operation for up to 2 months. Unflued gas heaters have been popular as primary heating sources in Australian homes for many years due to their ease of installation and energy efficiency, with approximately 600,000 now installed in housing and schools. However, with concerns over potential health impacts on occupants, manufacturers have reduced the nitrogen dioxide emissions from unflued gas heaters in Australia over recent years. They have done so with a target level for nitrogen dioxide in indoor air of 300 p.p.b. This is considerably higher than the ambient air (and WHO) guideline of 110 p.p.b. Several studies of child respiratory health show an impact of unflued gas combustion products.
A full characterization of the combustion products is needed under conditions that simulate heater operation in practice; this study was undertaken to provide such a characterization. Key findings are that the focus needs to be on total gas emissions (not just nitrogen dioxide), and that heater installation can be very sensitive to small faults that lead to very high levels of toxic pollutants. These findings have influenced current government proposals for emission limits for these heaters.
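The link between a measured emission rate and the resulting indoor concentration can be sketched with a well-mixed single-zone mass balance. The figures below are hypothetical illustrations, not measurements from this study:

```python
def steady_state_concentration(emission_mg_per_h, volume_m3, ach):
    """Well-mixed single-zone mass balance: at steady state the indoor
    concentration (mg/m^3) equals the emission rate divided by the
    ventilation flow (room volume * air changes per hour)."""
    return emission_mg_per_h / (volume_m3 * ach)

# Hypothetical: 40 mg/h of NO2 emitted into a 150 m^3 dwelling at 0.5 ACH
c = steady_state_concentration(40.0, 150.0, 0.5)  # mg/m^3
```

The same relation explains why tight, poorly ventilated dwellings (low ACH) see much higher pollutant levels from the same heater.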
Microarray image analysis: background estimation using quantile and morphological filters.
Bengtsson, Anders; Bengtsson, Henrik
2006-02-28
In a microarray experiment the difference in expression between genes on the same slide can be 10³-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity, the scanned signal contains different kinds of noise referred to as background. In order to assess the true spot intensity, background must be subtracted. The standard approach to estimating background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with a focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248, compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate, with slightly more bias. The large variability arises because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to those of the rank filter. The performance of more advanced rank filters is equal to that of the best region-based methods.
However, in order to get unbiased estimates these filters have to be implemented with great care. The performance of morphological opening is in general poor with a substantial spatial-dependent bias.
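A minimal pure-Python sketch of the moving-window quantile filtering discussed above, with morphological erosion (q = 0) and the median (q = 0.5) as special cases. Real implementations run over full scanned images and choose the window size and quantile carefully; the toy image and parameters here are illustrative:

```python
def quantile_filter(image, size, q):
    """Moving-window quantile filter over a 2-D list of pixel values.
    q = 0 gives the local minimum (morphological erosion), q = 0.5 a
    median filter; intermediate quantiles trade bias against variance."""
    h, w = len(image), len(image[0])
    half = size // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # gather the window, clipped at the image borders
            window = [image[ii][jj]
                      for ii in range(max(0, i - half), min(h, i + half + 1))
                      for jj in range(max(0, j - half), min(w, j + half + 1))]
            window.sort()
            k = min(len(window) - 1, int(q * len(window)))
            out[i][j] = window[k]
    return out

# Toy "image": flat background of 10 with one bright spot pixel
img = [[10, 10, 10],
       [10, 90, 10],
       [10, 10, 10]]
bg = quantile_filter(img, size=3, q=0.1)  # low quantile suppresses the spot
```

Subtracting `bg` from `img` then leaves the spot signal on an approximately zero background, which is the role background estimation plays in the pipeline.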
Large-Scale Data Collection Metadata Management at the National Computational Infrastructure
NASA Astrophysics Data System (ADS)
Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.
2014-12-01
Data Collection management has become an essential activity at the National Computational Infrastructure (NCI) in Australia. NCI's partners (CSIRO, the Bureau of Meteorology, the Australian National University and Geoscience Australia), supported by the Australian Government and the Research Data Storage Infrastructure (RDSI) program, have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystems, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment: a ~56,000-core supercomputer, virtual labs on a 3,000-core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, key contacts and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection-level catalogue of GeoNetwork. The subset- or file-level metadata catalogues are linked with the collection level through a parent-child relationship defined using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets.
Each data collection is tracked over its lifetime, and the attribution of data providers, data owners, data generators and data aggregators is kept up to date. Digital Object Identifiers are assigned through the Australian National Data Service (ANDS): once the data has been quality assured, a DOI is minted and the metadata record updated. NCI's data citation policy establishes the relationship between research outcomes, data providers, and the data.
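The parent-child metadata linkage described above can be sketched in a few lines. The record fields below (`fileIdentifier`, `parentIdentifier`, mirroring common ISO 19115 usage) and the in-memory catalogue are illustrative assumptions, not NCI's actual implementation.

```python
import uuid

# Hypothetical sketch: a collection-level record carries a UUID, and
# each file-level record references it via a parentIdentifier field.

def make_record(title, parent_uuid=None):
    return {
        "fileIdentifier": str(uuid.uuid4()),
        "title": title,
        "parentIdentifier": parent_uuid,  # None for collection-level records
    }

collection = make_record("Earth system model outputs")
granule = make_record("Monthly mean SST, 1990-2011",
                      parent_uuid=collection["fileIdentifier"])

def parent_of(record, catalogue):
    """Resolve a file-level record back to its parent collection."""
    return next((r for r in catalogue
                 if r["fileIdentifier"] == record["parentIdentifier"]), None)

catalogue = [collection, granule]
```

In a real catalogue the same UUID linkage lets harvesters roll file-level entries up to the collection-level GeoNetwork record.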
Du, Ning; Fan, Jintu; Chen, Shuo; Liu, Yang
2008-07-21
Although recent investigations [Ryan, M.G., Yoder, B.J., 1997. Hydraulic limits to tree height and tree growth. Bioscience 47, 235-242; Koch, G.W., Sillett, S.C., Jennings, G.M., Davis, S.D., 2004. The limits to tree height. Nature 428, 851-854; Niklas, K.J., Spatz, H., 2004. Growth and hydraulic (not mechanical) constraints govern the scaling of tree height and mass. Proc. Natl Acad. Sci. 101, 15661-15663; Ryan, M.G., Phillips, N., Bond, B.J., 2006. Hydraulic limitation hypothesis revisited. Plant Cell Environ. 29, 367-381; Niklas, K.J., 2007. Maximum plant height and the biophysical factors that limit it. Tree Physiol. 27, 433-440; Burgess, S.S.O., Dawson, T.E., 2007. Predicting the limits to tree height using statistical regressions of leaf traits. New Phytol. 174, 626-636] have suggested that the hydraulic limitation hypothesis (HLH) is the most plausible theory to explain the biophysical limits to maximum tree height and the decline in tree growth rate with age, the analyses have been largely qualitative or based on statistical regression. Here we present an integrated biophysical model based on the principle that trees develop physiological compensations (e.g. the decline in leaf water potential and the tapering of conduits with height [West, G.B., Brown, J.H., Enquist, B.J., 1999. A general model for the structure and allometry of plant vascular systems. Nature 400, 664-667]) to resist the increasing water stress with height, on the classical HLH, and on the biochemical limitations on photosynthesis [von Caemmerer, S., 2000. Biochemical Models of Leaf Photosynthesis. CSIRO Publishing, Australia]. The model has been applied to the tallest trees in the world (viz. the coast redwood, Sequoia sempervirens). Xylem water potential, leaf carbon isotope composition and leaf mass-to-area ratio at different heights derived from the model show good agreement with the experimental measurements of Koch et al. [2004. The limits to tree height. Nature 428, 851-854].
The model also explains well the universal trend of declining growth rate with age.
Obliquity-driven expansion of North Atlantic sea ice controls structure of the last glacial
NASA Astrophysics Data System (ADS)
Turney, Chris; Thomas, Zoe; Hutchinson, David; Bradshaw, Corey; Brook, Barry; England, Matthew; Fogwill, Christopher; Jones, Richard; Palmer, Jonathan; Hughen, Konrad; Cooper, Alan
2015-04-01
North Atlantic late-Pleistocene climate was characterised by a series of abrupt climate changes, the most extreme of which were the Dansgaard-Oeschger (D-O) events: millennial-scale oscillations that switched rapidly between cold and warm atmospheric conditions, with shifts of up to 16°C, most strongly expressed during the period 60-30 ka. Time series analysis of palaeoclimate ice core records is one of the best ways to detect threshold behaviour in the climate system; however, some of these techniques can be age-model dependent. Spectral analysis of a new Greenland-Cariaco GICC05 age model (GICC05-CB), generated by combining the GICC05 and Cariaco δ18O chronologies, reveals a change in the dominant periodicities at ~31 ka, consistent with the cessation of the D-O events. While GICC05-CB has the same δ18O structure as GICC05, the different periodicity profile reveals a change in the climate system at 31 ka. Stability analysis of the δ18O time series over the last 60 ka determines the number of states the climate experienced over time, and reveals a bifurcation in the climate system at 31 ka, switching from a bistable to a monostable state. Early warning signals of this bifurcation are detected starting 10,000 years before the shift, in the form of increasing autocorrelation and variance. This is consistent with the climate system experiencing a slow forcing towards a critical threshold. These signals are found in both the GICC05-CB and GICC05 chronologies, though the timing of the bifurcation point varies slightly. We suggest that this bifurcation is linked to a minimum in obliquity, causing greatly expanded sea ice in the Labrador Sea. Model runs from the CSIRO Mk3L Earth-system model indicate that extensive sea ice cover is established in the Labrador Sea and North Pacific at the obliquity minimum centred on 28.5 ka.
This expanded sea ice is thus responsible for shifting the Northern Hemisphere westerlies southwards and reducing the strength of the AMOC, preventing the establishment of the cold state from 31 ka.
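The early-warning indicators mentioned above, rising lag-1 autocorrelation and variance ahead of a bifurcation, are typically computed over a sliding window of the time series. The sketch below is a generic illustration on a synthetic series, not the authors' analysis code.

```python
import statistics

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a sequence (0.0 for a constant window)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    if var == 0:
        return 0.0
    return sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / var

def rolling_indicators(series, window):
    """(lag-1 autocorrelation, variance) for each sliding window."""
    out = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        out.append((lag1_autocorr(w), statistics.pvariance(w)))
    return out

# Synthetic series: a slow drift plus a decaying oscillation.
series = [0.1 * i + ((-1) ** i) * 0.5 / (i + 1) for i in range(100)]
indicators = rolling_indicators(series, window=25)
```

A sustained rise in both indicators toward the transition is the "critical slowing down" signature the abstract refers to.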
NASA Astrophysics Data System (ADS)
Kuleshov, Y.; Jones, D.; Spillman, C. M.
2012-04-01
Climate change and climate extremes have a major impact on Australia and Pacific Island countries. Of particular concern are tropical cyclones and extreme ocean temperatures: the former are the most destructive events for terrestrial systems, while the latter have the potential to devastate ocean ecosystems through coral bleaching. As a practical response to climate change, under the Pacific-Australia Climate Change Science and Adaptation Planning program (PACCSAP), we are developing enhanced web-based information tools for providing seasonal forecasts of climatic extremes in the Western Pacific. Tropical cyclones are the most destructive weather systems impacting coastal areas. Interannual variability in the intensity and distribution of tropical cyclones is large, and presently greater than any trends ascribable to climate change. In a warming environment, predicting tropical cyclone occurrence from historical relationships is increasingly difficult: predictors such as sea surface temperatures (SSTs) now frequently lie outside the range of past variability, meaning that it is not possible to find historical analogues for the seasonal conditions often faced by Pacific countries. Elevated SSTs are the primary trigger for mass coral bleaching events, which can lead to widespread damage and mortality on reef systems. Degraded coral reefs present many problems, including long-term loss of tourism and potential loss or degradation of fisheries. The monitoring and prediction of thermal stress events enables a range of adaptive and management activities that could improve reef resilience to extreme conditions. Using the climate model POAMA (Predictive Ocean-Atmosphere Model for Australia), we aim to improve the accuracy of seasonal forecasts of tropical cyclone activity and extreme SSTs for the Western Pacific region.
Improved knowledge of extreme climatic events, with the assistance of tailored forecast tools, will help enhance the resilience and adaptive capacity of Australia and Pacific Island countries under climate change. Acknowledgement: The research discussed in this paper was conducted with the support of PACCSAP, funded by AusAID and the Department of Climate Change and Energy Efficiency and delivered by the Bureau of Meteorology and CSIRO.
Effects of harvest and climate on population dynamics of northern bobwhites in south Florida
Rolland, V.; Hostetler, J.A.; Hines, T.C.; Johnson, F.A.; Percival, H.F.; Oli, M.K.
2011-01-01
Context: Hunting-related (hereafter harvest) mortality is assumed to be compensatory in many exploited species. However, when harvest mortality is additive, hunting can lead to population declines, especially on public land where hunting pressure can be intense. Recent studies indicate that excessive hunting may have contributed to the decline of a northern bobwhite (Colinus virginianus) population in south Florida. Aims: This study aimed to estimate population growth rates, to determine the potential and actual contributions of vital rates to annual changes in population growth rates, and to evaluate the role of harvest and climatic variables in the bobwhite population decline. Methods: We used demographic parameters estimated from a six-year study to parameterise population matrix models and conduct prospective and retrospective perturbation analyses. Key results: The stochastic population growth rate (λS = 0.144) was proportionally most sensitive to adult winter survival and to survival of fledglings, nests and broods from first nesting attempts; the same variables were primarily responsible for annual changes in population growth rate. Demographic parameters associated with second nesting attempts made virtually no contribution to population growth rate. All harvest scenarios consistently revealed a substantial impact of harvest on bobwhite population dynamics. If the lowest harvest level recorded in the study period (i.e. 0.08 birds harvested per day per km² in 2008) were applied, λS would increase by 32.1%. Winter temperatures and precipitation negatively affected winter survival, and precipitation acted synergistically with harvest in affecting winter survival. Conclusions: Our results suggest that reduction in winter survival due to overharvest has been an important cause of the decline in our study population, but that climatic factors may also have played a role. Thus, for management actions to be effective, it may be necessary to assess the contribution of primary (e.g. harvest) as well as secondary factors (e.g. climate) to the population decline. Implications: Reducing hunting pressure would be necessary for the recovery of the bobwhite population at our study site. In addition, an adaptive harvest management strategy that considers weather conditions in setting harvest quotas would further help reverse the population decline. © 2011 CSIRO.
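As a rough illustration of the prospective perturbation analysis described above, the sketch below computes the asymptotic growth rate of a stage-structured projection matrix by power iteration and approximates a sensitivity by finite differences. The 2x2 matrix and its entries are hypothetical, not the vital rates estimated in the study.

```python
def growth_rate(A, iters=200):
    """Dominant eigenvalue of a projection matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def sensitivity(A, i, j, eps=1e-6):
    """d(lambda)/d(a_ij), approximated by a forward finite difference."""
    B = [row[:] for row in A]
    B[i][j] += eps
    return (growth_rate(B) - growth_rate(A)) / eps

# Hypothetical 2-stage matrix: fecundities on the top row,
# stage survival probabilities below.
A = [[0.5, 1.2],
     [0.3, 0.4]]
lam = growth_rate(A)
s_adult_survival = sensitivity(A, 1, 1)  # sensitivity to adult survival
```

Retrospective (LTRE-style) analyses combine such sensitivities with the observed year-to-year variation in each vital rate.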
Decadal trends in regional CO2 fluxes estimated from atmospheric inversions
NASA Astrophysics Data System (ADS)
Saeki, T.; Patra, P. K.
2016-12-01
Top-down approaches (atmospheric inversions) using atmospheric transport models and CO2 observations are an effective way to optimize surface fluxes at subcontinental scales and monthly time intervals. We used the CCSR/NIES/FRCGC AGCM-based Chemistry Transport Model (JAMSTEC's ACTM) and atmospheric CO2 concentrations at NOAA, CSIRO, JMA, NIES and NIES-MRI sites from the ObsPack GLOBALVIEW-CO2 data product (2013) to estimate CO2 fluxes for the period 1990-2011. Carbon fluxes were estimated for 84 partitions of the globe (54 land + 30 ocean) using a Bayesian synthesis inversion framework. A priori fluxes are (1) atmosphere-ocean exchange from Takahashi et al. (2009), (2) 3-hourly terrestrial biosphere fluxes (annually balanced) from the CASA model, and (3) fossil fuel fluxes from CDIAC global totals and EDGAR4.2 spatial distributions. Four inversion cases were tested: 1) 21 sites (sites with a real-data fraction of 90% or more for 1989-2012), 2) 21 sites + CONTRAIL data, 3) 66 sites (over 70% coverage), and 4) 157 sites. From the time-dependent inversions, the mean total flux (excluding fossil fuel) for the period 1990-2011 is estimated to be -3.09 ± 0.16 PgC/yr (mean and standard deviation of the four cases), where land (incl. biomass burning and land-use change) and ocean absorb at average rates of -1.80 ± 0.18 and -1.29 ± 0.08 PgC/yr, respectively. The average global total sink increases by about 0.5 PgC/yr from 1991-2000 to 2001-2010, mainly due to increases in the northern and tropical land sinks (Africa, Boreal Eurasia, East Asia and Europe), while the ocean sinks show no clear trend. The inversion with CONTRAIL data estimates large positive flux anomalies in late 1997 associated with the 1997/98 El Niño, while the inversion without the CONTRAIL data between Japan and Australia fails to capture such large anomalies. Acknowledgements.
This work is supported by the Environment Research and Technology Development Fund (2-1401) of the Ministry of the Environment, Japan. We thank all measurement groups for submitting CO2 concentration data to the obspack-GLOBALVIEW product.
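A Bayesian synthesis inversion of the kind described above reduces, in its simplest linear-Gaussian form, to the textbook update x = x_prior + B Hᵀ (H B Hᵀ + R)⁻¹ (y − H x_prior). The toy sketch below uses two fluxes and two observations with made-up numbers, nothing like the 84-region ACTM setup.

```python
def transpose(A):
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Toy problem: two regional fluxes, two observing sites (values made up).
H = [[1.0, 0.5],                  # transport operator: fluxes -> concentrations
     [0.2, 1.0]]
x_prior = [[0.0], [0.0]]          # prior flux estimates (column vector)
B = [[1.0, 0.0], [0.0, 1.0]]      # prior error covariance
y = [[1.0], [0.5]]                # observed concentration anomalies
R = [[0.1, 0.0], [0.0, 0.1]]      # observation error covariance

# Posterior: x = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
Ht = transpose(H)
S = [[s + r for s, r in zip(srow, rrow)]
     for srow, rrow in zip(matmul(matmul(H, B), Ht), R)]
K = matmul(matmul(B, Ht), inv2(S))
innovation = [[yi[0] - hx[0]] for yi, hx in zip(y, matmul(H, x_prior))]
x_post = [[xp[0] + kx[0]] for xp, kx in zip(x_prior, matmul(K, innovation))]
```

Because R is small relative to B here, the posterior fluxes nearly reproduce the observations; in a real inversion the covariances set the balance between prior and data.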
Nuin, Maider; Alfaro, Begoña; Cruz, Ziortza; Argarate, Nerea; George, Susie; Le Marc, Yvan; Olley, June; Pin, Carmen
2008-10-31
Kinetic models were developed to predict the microbial spoilage and sensory quality of fresh fish, and to evaluate the efficiency of a commercial time-temperature integrator (TTI) label, Fresh Check®, in monitoring shelf life. Farmed turbot (Psetta maxima) samples were packaged in PVC film and stored at 0, 5, 10 and 15 °C. Microbial growth and sensory attributes were monitored at regular time intervals. The response of the Fresh Check device was measured at the same temperatures during the storage period. Sensory perception was quantified according to a global sensory indicator obtained by principal component analysis, as well as to the Quality Index Method (QIM) as described by Rahman and Olley [Rahman, H.A., Olley, J., 1984. Assessment of sensory techniques for quality assessment of Australian fish. CSIRO Tasmanian Regional Laboratory, Occasional Paper No. 8. Available from the Australian Maritime College library, Newnham, Tasmania]. Both methods were found equally valid for monitoring the loss of sensory quality. The maximum specific growth rate of spoilage bacteria, the rate of change of the sensory indicators and the rate of change of the colour measurements of the TTI label were modelled as functions of temperature. Temperature had a similar effect on the bacterial, sensory and Fresh Check kinetics. At the time of sensory rejection, the bacterial load was ca. 10^5-10^6 cfu/g. The end of shelf life indicated by the Fresh Check label was close to the sensory rejection time. The performance of the models was validated under fluctuating temperature conditions by comparing predicted and measured values for all microbial, sensory and TTI responses. The models have been implemented in a Visual Basic add-in for Excel called "Fish Shelf Life Prediction (FSLP)". This program predicts sensory acceptability and growth of spoilage bacteria in fish, and the response of the TTI, at constant and fluctuating temperature conditions.
The program is freely available at http://www.azti.es/muestracontenido.asp?idcontenido=980&content=15&nodo1=30&nodo2=0.
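Secondary models of the kind fitted above describe how a kinetic rate varies with temperature. One common choice for chilled fish spoilage (an assumption here; the abstract does not name the model) is the Ratkowsky square-root model, sqrt(mu_max) = b (T − T_min). The parameter values below are invented for illustration.

```python
def sqrt_model_mu_max(T, b=0.03, T_min=-5.0):
    """Ratkowsky square-root model: sqrt(mu_max) = b * (T - T_min).

    T in degrees C; returns a maximum specific growth rate (1/h).
    b and T_min are hypothetical, not fitted values from the study.
    """
    if T <= T_min:
        return 0.0
    return (b * (T - T_min)) ** 2

# Growth accelerates with storage temperature, as in the 0-15 C trials:
rates = {T: sqrt_model_mu_max(T) for T in (0, 5, 10, 15)}
```

Integrating such a rate over a recorded temperature history is what allows validation under fluctuating storage conditions.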
Found: The Original 1945 Records of Australian Radio Astronomy
NASA Astrophysics Data System (ADS)
Goss, Miller; Ekers, Ron; Sim, Helen
2015-08-01
In July 2014, we found the original records of the first published Australian radio astronomy observations. These were obtained by Joseph L. Pawsey and Ruby Payne-Scott in early October 1945. The observations gave strong evidence of a million-degree corona as well as frequent radio bursts. These observations followed earlier detections of the radio sun by Stanley Hey, George Southworth, Grote Reber and Elizabeth Alexander. The latter observations (the "Norfolk Island Effect" of March 1945) were the immediate motivation for the campaign carried out by Pawsey and Payne-Scott. These observations formed the basis for a number of pioneering publications: the 9 February 1946 Nature paper of Pawsey, Payne-Scott and McCready, which was submitted on 23 October 1945, the last date on which data were obtained; the major initial Australian radio solar publication in the Proceedings of the Royal Society of London in August 1947; and Pawsey's presentation of the radio properties of the million-degree corona in Nature of 2 November 1946. Contemporaneously with these publications, D. F. Martyn was involved in an independent theoretical study of the properties of the solar corona. (Ginzburg and Shklovsky were also studying the properties of the corona in this era.) The back-to-back Martyn and Pawsey Nature papers were the first to describe the radio properties of the hot corona, due to free-free emission. The division of the observed emission into "bursting" and "quiet" modes was challenging for the novice radio astronomers. These historical records had been recognized by Paul Wild in 1968, who instructed Ms. Sally Atkinson, the CSIRO Division of Radiophysics secretary to E. G. ("Taffy") Bowen, to submit them to the Australian Academy of Science. Wild characterized these documents as "of considerable historical interest".
Apparently the transmission of the documents was not done; a thorough search of the Australian Academy Library in August 2014 failed to locate them. The original papers were only found in Ms. Atkinson's files after her death on 13 November 2012 in Sydney.
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.
2014-12-01
The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, the Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop supercomputer (Raijin), ~20 PB of data storage using Lustre filesystems and a 3000-core high-performance cloud have created a hybrid platform for high-performance computing and data-intensive science to enable large-scale earth and climate systems modelling and analysis. There are >3000 users actively logging in and >600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large-scale data collections. NCI makes available major and long-tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, biosciences and social sciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allows complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small-scale, 'stove-piped' science efforts to large-scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated.
Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human-resource challenges: highly skilled HPC/HPD specialists, specialist programmers and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon exascale, systems.
Data Recommender: An Alternative Way to Discover Open Scientific Datasets
NASA Astrophysics Data System (ADS)
Klump, J. F.; Devaraju, A.; Williams, G.; Hogan, D.; Davy, R.; Page, J.; Singh, D.; Peterson, N.
2017-12-01
Over the past few years, institutions and government agencies have adopted policies to openly release their data, which has resulted in huge amounts of open data becoming available on the web. When trying to discover these data, users face two challenges: an overload of choice and the limitations of existing data search tools. On the one hand, there are too many datasets to choose from, and therefore users need to spend considerable effort to find the datasets most relevant to their research. On the other hand, data portals commonly offer keyword and faceted search, which depend fully on user queries to retrieve and rank relevant datasets. Consequently, keyword and faceted search may return loosely related or irrelevant results, even though those results match the query terms. They may also return highly specific results that depend more on how well the metadata was authored, and they do not account well for variance in metadata arising from differences in author styles and preferences. The top-ranked results may also come from the same data collection, so users are unlikely to discover new and interesting datasets. These search modes mainly suit users who can express their information needs in terms of the structure and terminology of the data portals, but may pose a challenge otherwise. These challenges reflect the need for a solution that delivers the most relevant (i.e., similar and serendipitous) datasets to users, beyond the existing search functionalities of the portals. A recommender system is an information filtering system that presents users with relevant and interesting content based on their context and preferences. Delivering data recommendations can make data discovery easier and, as a result, may enhance user engagement with the portal. We developed a hybrid data recommendation approach for the CSIRO Data Access Portal.
The approach leverages existing recommendation techniques (e.g., content-based filtering and item co-occurrence) to produce similar and serendipitous data recommendations. It measures the relevance between datasets based on their properties, and search and download patterns. We evaluated the recommendation approach in a user study, and the obtained user judgments revealed the ability of the approach to accurately quantify the relevance of the datasets.
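A hybrid of content-based filtering and item co-occurrence, as described above, can be sketched minimally as follows. The dataset keywords, session sets and the weighting `alpha` are all hypothetical, not the Data Access Portal implementation.

```python
def jaccard(a, b):
    """Content similarity between two keyword sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cooccurrence_score(x, y, sessions):
    """Fraction of sessions containing x that also contain y."""
    with_x = [s for s in sessions if x in s]
    if not with_x:
        return 0.0
    return sum(1 for s in with_x if y in s) / len(with_x)

def recommend(target, datasets, sessions, alpha=0.6, top_n=3):
    """Blend content similarity and co-occurrence into one ranking."""
    scores = []
    for name, keywords in datasets.items():
        if name == target:
            continue
        content = jaccard(datasets[target], keywords)
        cooc = cooccurrence_score(target, name, sessions)
        scores.append((alpha * content + (1 - alpha) * cooc, name))
    return [name for _, name in sorted(scores, reverse=True)[:top_n]]

# Hypothetical catalogue and download sessions:
datasets = {
    "soil-moisture": {"soil", "hydrology", "remote-sensing"},
    "rainfall": {"hydrology", "climate"},
    "magnetics": {"geophysics", "survey"},
}
sessions = [{"soil-moisture", "rainfall"}, {"soil-moisture", "rainfall"},
            {"magnetics"}]
recs = recommend("soil-moisture", datasets, sessions)
```

The co-occurrence term is what supplies the "serendipitous" recommendations: it can surface datasets that share no metadata terms with the target but are habitually downloaded together.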
Observations of Sea Surface Mean Square Slope During the Southern Ocean Waves Experiment
NASA Technical Reports Server (NTRS)
Walsh, E. J.; Vandemark, D. C.; Wright, C. W.; Banner, M. L.; Chen, W.; Swift, R. N.; Scott, J. F.; Hines, D. E.; Jensen, J.; Lee, S.;
2001-01-01
For the Southern Ocean Waves Experiment (SOWEX), conducted in June 1992 out of Hobart, Tasmania, the NASA Scanning Radar Altimeter (SRA) was shipped to Australia and installed on a CSIRO Fokker F-27 research aircraft instrumented to make comprehensive surface layer measurements of air-sea interaction fluxes. The SRA sweeps a radar beam of P (two-way) half-power width across the aircraft ground track over a swath equal to 0.8 of the aircraft height, simultaneously measuring the backscattered power at its 36 GHz (8.3 mm) operating frequency and the range to the sea surface at 64 cross-track positions. In realtime, the slant ranges are multiplied by the cosine of the off-nadir incidence angles (including the effect of aircraft roll attitude) to determine the vertical distances from the aircraft to the sea surface. These distances are subtracted from the aircraft height to produce a sea-surface elevation map, which is displayed on a monitor in the aircraft to enable real-time assessments of data quality and wave properties. The sea surface mean square slope (mss), which is predominantly caused by the short waves, was determined from the backscattered power falloff with incidence angle measured by the SRA in the plane normal to the aircraft heading. On each flight, data were acquired at 240 m altitude while the aircraft was in a 7 degree roll attitude, interrogating off-nadir incidence angles from -15 degrees through nadir to +29 degrees. The aircraft turned azimuthally through 810 degrees in this attitude, mapping the azimuthal dependence of the backscattered power falloff with incidence angle. Two sets of turning data were acquired on each day, before and after the aircraft measured wind stress at low altitude (12 meters to 65 meters). Wave topography and backscattered power for mss were also acquired during those level flight segments whenever the aircraft altitude was above the SRA minimum range of 35 m. 
Data were collected over a wide range of wind and sea conditions, from quiescent to gale force winds with 9 meter wave height.
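The real-time range-to-elevation step described in the abstract is a one-line projection: each slant range is multiplied by the cosine of the off-nadir incidence angle (including aircraft roll) to give a vertical distance, which is subtracted from the aircraft height. A sketch with illustrative numbers:

```python
import math

def surface_elevation(aircraft_height_m, slant_range_m,
                      off_nadir_deg, roll_deg=0.0):
    """Sea-surface elevation from one slant-range sample.

    The off-nadir angle and roll combine into the effective incidence
    angle; the numbers passed below are illustrative only.
    """
    angle = math.radians(off_nadir_deg + roll_deg)
    vertical_distance = slant_range_m * math.cos(angle)
    return aircraft_height_m - vertical_distance

# At nadir with zero roll, elevation is simply height minus range:
elev = surface_elevation(240.0, 238.5, 0.0)
```

Applying this across the 64 cross-track samples per sweep yields the elevation map displayed in the aircraft.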
A combination dielectric and acoustic laboratory instrument for petrophysics
NASA Astrophysics Data System (ADS)
Josh, Matthew
2017-12-01
Laboratory testing of rock samples is the primary method for establishing the physics models which relate the rock properties essential to evaluating a hydrocarbon reservoir (i.e. porosity, fluid permeability, pore fluid and saturation) to the physical properties (resistivity, nuclear magnetic resonance, dielectric permittivity and acoustic properties) which can be measured with borehole logging instrumentation. Rock samples usually require machining to produce a suitable geometry for each test, as well as specific sample preparation (e.g. multiple levels of saturation and chemical treatments), and this leads to discrepancies in the condition of the sample between different tests. Ideally, multiphysics testing should occur on one sample simultaneously so that useful correlations between data sets can be more firmly established. The world's first combined dielectric and acoustic cell has been developed at CSIRO, so that a sample may be machined and prepared, then measured to determine its dielectric and acoustic properties simultaneously, before atmospheric conditions in the laboratory affect its level of hydration. The dielectric measurement is performed using a conventional three-terminal parallel-plate capacitor which can operate from 40 Hz up to 110 MHz, with modified electrodes incorporating a 4 MHz P-wave piezo crystal. Approximately 10 acoustic P-wavelengths interact with a typical (10 mm thick) sample, so that the user may reliably 'pick' the P-wave arrival times with acceptable resolution. Experimental evidence indicates that the instrument is able to resolve 0.25 mm of thickness in a Teflon test piece. For a number of engineering materials, including Teflon and glass, and for a geological sample (Donnybrook sandstone from Western Australia), both capacitance and P-wave arrival time vary perfectly linearly with sample thickness.
Donnybrook sandstone shows a consistently linear increase in dielectric permittivity and P-wave velocity with saturation, consistent with the Gassmann-Hill prediction. Both the dielectric permittivity and the P-wave velocity are higher parallel to the bedding plane than orthogonal to it in a shale from the Cooper Basin, Australia.
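The reported linearity of capacitance and P-wave arrival time with thickness is the kind of relationship checked with an ordinary least-squares fit; the sketch below uses invented thickness/arrival-time pairs, not the instrument's data.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line fit; returns slope, intercept, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

thickness_mm = [5.0, 7.5, 10.0, 12.5]   # hypothetical sample thicknesses
arrival_us = [2.1, 3.15, 4.2, 5.25]     # hypothetical, exactly linear
slope, intercept, r2 = linear_fit(thickness_mm, arrival_us)
```

An R² near 1 over several machined thicknesses is what supports the "perfectly linear" claim; the slope gives the inverse P-wave velocity (slowness) of the material.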
NASA Astrophysics Data System (ADS)
Wyborn, Lesley; Evans, Ben; Foster, Clinton; Pugh, Timothy; Uhlherr, Alfred
2015-04-01
Digital geoscience data and information are integral to informing decisions on the social, economic and environmental management of natural resources. Traditionally, such decisions were focused on regional or national viewpoints only, but it is increasingly being recognised that global perspectives are required to meet new challenges such as predicting the impacts of climate change; sustainably exploiting scarce water, mineral and energy resources; and protecting our communities through better prediction of the behaviour of natural hazards. In recent years, technical advances in scientific instruments have resulted in a surge in data volumes, with data now being collected at unprecedented rates and at ever-increasing resolutions. Many earth science data sets now exceed the capacity of government and academic organisations to locally store and dynamically access them; to internally process and analyse them at high resolution; and then to deliver them online to clients, partners and stakeholders. Fortunately, at the same time, computational capacities (both cloud and HPC) have commensurately increased: these can now provide the capability to effectively access the ever-growing data assets within realistic time frames. However, to achieve this, data and computing need to be co-located: bandwidth limits the capacity to move the large data sets; the data transfers are too slow; and the latencies to access them are too high. These scenarios are driving the move towards more centralised High Performance (HP) infrastructures. The rapidly increasing scale of data and the growing complexity of software and hardware environments, combined with the energy costs of running such infrastructures, create a compelling economic argument for having just one or two major national (or continental) HP facilities that can be federated internationally to enable earth and environmental issues to be tackled at global scales.
But at the same time, if properly constructed, these infrastructures can also service very small-scale research projects. The National Computational Infrastructure (NCI) at the Australian National University (ANU) has built such an HP infrastructure as part of the Australian Government's National Collaborative Research Infrastructure Strategy. NCI operates as a formal partnership between the ANU and the three major Australian National Government Scientific Agencies: the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the Bureau of Meteorology and Geoscience Australia. The government partners agreed to explore the new opportunities offered within the partnership with NCI, rather than each running their own separate agenda independently. The data from these national agencies, as well as from collaborating overseas organisations (e.g., NASA, NOAA, USGS, CMIP, etc.) are either replicated to, or produced at, NCI. By co-locating and harmonising these vast data collections within the integrated HP computing environments at NCI, new opportunities have arisen for Data-intensive Interdisciplinary Science at scales and resolutions not hitherto possible. The new NCI infrastructure has also enabled the blending of research by the university sector with the more operational business of government science agencies, with the fundamental shift being that researchers from both sectors work and collaborate within a federated data and computational environment that contains both national and international data collections.
KEY COMPARISON: Final report on CCPR K1-a: Spectral irradiance from 250 nm to 2500 nm
NASA Astrophysics Data System (ADS)
Woolliams, Emma R.; Fox, Nigel P.; Cox, Maurice G.; Harris, Peter M.; Harrison, Neil J.
2006-01-01
The CCPR K1-a key comparison of spectral irradiance (from 250 nm to 2500 nm) was carried out to meet the requirements of the Mutual Recognition Arrangement by 13 participating national metrology institutes (NMIs). Because of the fragile nature of the tungsten halogen lamps used as comparison artefacts, the comparison was arranged as a star comparison with three lamps per participant. NPL (United Kingdom) piloted the comparison and, by measuring all lamps, provided a link between participants' measurements. The other participants were BNM-INM (France), CENAM (Mexico), CSIRO (Australia), HUT (Finland), IFA-CSIC (Spain), MSL-IRL (New Zealand), NIM (China), NIST (United States of America), NMIJ (Japan), NRC (Canada), PTB (Germany) and VNIIOFI (Russian Federation). Before the analysis was completed and the results known, the pilot discussed with each participant which lamp measurements should be included as representative of their comparison. As a consequence of this check, at least one measurement was excluded from one third of the lamps because of changes occurring during transportation. The comparison thus highlighted the difficulty regarding the availability of suitable transfer standards for the dissemination of spectral irradiance. The use of multiple lamps and multiple measurements ensured sufficient redundancy that all participants were adequately represented. In addition, during this pre-Draft A phase all participants had the opportunity to review the uncertainty budgets and methods of all other participants. This new process helped to ensure that all submitted results and their associated uncertainties were evaluated in a consistent manner. The comparison was analysed using a model-based method which regarded each lamp as having a stable spectral irradiance, and the measurements made by an NMI as systematically influenced by a factor that applies to all that NMI's measurements. The aim of the analysis was to estimate the systematic factor for each NMI.
Across the spectral region (250 nm to 2500 nm) there were 44 wavelengths at which a comparison was made. These were treated entirely independently and thus the report describes 44 comparisons. For wavelengths from 250 nm to 800 nm (apart from 300 nm) all participants had unilateral degrees of equivalence (DoEs) with values consistent with their uncertainties for a coverage level k = 2. At all other wavelengths (apart from 1400 nm) all participants achieved consistency at the k = 4 level for the unilateral DoEs and the vast majority within k = 3. The results are a significant improvement over those of the previous comparison in 1990, especially considering that the declared uncertainties of most participants have been substantially improved over the intervening decade. These results are evidence of the value of the effort devoted to the development of improved spectral scales (and of the evaluation of their uncertainty) by many NMIs in recent years. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCPR, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
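The analysis model described above (each lamp has a stable irradiance; each NMI's measurements share one systematic factor) can be sketched as a linear least-squares problem in log space. The dimensions, noise level and factor values below are invented purely for illustration, with the pilot's factor fixed at zero for identifiability:

```python
import numpy as np

rng = np.random.default_rng(0)

n_nmis, n_lamps = 4, 6
true_log_factor = np.array([0.0, 0.002, -0.003, 0.001])  # NMI systematic effects (log scale)
true_log_lamp = rng.normal(0.0, 0.05, n_lamps)           # stable lamp irradiances (log scale)

# For simplicity every NMI measures every lamp (the real comparison used
# a star design with the pilot linking the participants).
noise = rng.normal(0.0, 5e-4, (n_nmis, n_lamps))
log_meas = true_log_lamp[None, :] + true_log_factor[:, None] + noise

# Linear model: log_meas[i, j] = lamp[j] + factor[i], solved by least
# squares with the identifiability constraint factor[pilot] = 0.
rows, y = [], []
for i in range(n_nmis):
    for j in range(n_lamps):
        r = np.zeros(n_lamps + n_nmis - 1)
        r[j] = 1.0                    # coefficient of lamp value j
        if i > 0:
            r[n_lamps + i - 1] = 1.0  # coefficient of NMI factor i (pilot fixed at 0)
        rows.append(r)
        y.append(log_meas[i, j])
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
est_factor = np.concatenate([[0.0], theta[n_lamps:]])
```

The estimated factors are recovered relative to the pilot, which is the quantity the degrees of equivalence are built from.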
NASA Astrophysics Data System (ADS)
Kazandjiev, V.; Georgieva, V.; Moteva, M.; Marinova, T.; Dimitrov, P.
2010-09-01
Farming is one of the most important branches contributing to growth in gross domestic production in Bulgaria. At the same time, agriculture is the only branch, both nationally and worldwide, in which direct production costs and investments are recovered (or not) within a single growing season. Moreover, the development of intensive farming is not possible without the most advanced technologies, such as irrigation, automation, breeding for stable cultivars and hybrids, permanent weather monitoring, agroclimatic zoning, and integrated and biochemical protection of crops and plantations. Analysis of long-term meteorological data from different regions shows clear tendencies towards warming and drying in Bulgaria over the period of contemporary climate (1971-2000). Hydro-meteorological conditions in the country have worsened. The most comprehensive assessment is that of the Intergovernmental Panel on Climate Change (IPCC, 2007). Most authors have shown that the last decades are truly the warmest of the last century, and indeed of the entire period of instrumental observation. The causes of global warming were long debated, but recent investigations demonstrate its anthropogenic origin. The main goal of this paper is to frame the climate changes expected in Bulgaria for the periods 2020-2050-2070 and their most likely impacts on agriculture, examining the consequences and establishing the conditions for the development of sustainable farming in the production regions of the country.
The approach comprises: use of the systematized database of meteorological and agrometeorological data available for the survey period (1971-2000); assessment of the expected climatic changes according to scenarios from centres for climate-change observation and research in Europe, the USA, Canada and Australia (ECHAM 4, HadCM 2, CGCM 1, CSIRO-MK2 Bs and GFDL-Rs15) for the periods up to 2020, 2050 and 2070; simulation of the growth, development and productivity of agricultural crops using the WOFOST and DSSAT models, with calculation of reference evapotranspiration by the CROPWAT model, for the production conditions of the country under the expected climatic changes; and updating of the existing agroclimatic zoning of Bulgaria for growing the main agricultural field crops, fruits, vegetables, vineyards and forage herbs. Regions suitable for irrigation with appropriate crops, and regions less favoured for agriculture, were determined in connection with the changes expected by 2020, 2050 and 2070. Relations were investigated between biological variables (stages of phenological development and yields) and agroclimatic variables (temperature, precipitation, soil moisture content, balance of NPK in soils, etc.), and resource and hydrothermal indices for agroclimatic conditions were derived and their applicability assessed. A process was started of structuring agricultural production according to the real and potential resources of the six regions of the country under the climatic changes expected in 2020-2050-2070. Finally, recommendations on agroclimatic zoning were prepared for use in practice by the state administration and MAF, for investment policy concentrating national and European farming funds, and for insurance companies in determining their insurance policies.
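Of the tools listed, the reference-evapotranspiration step is simple enough to sketch directly: CROPWAT computes ET0 with the FAO-56 Penman-Monteith equation. A minimal daily version follows; the function name and the example input values are my own, chosen only to illustrate the calculation:

```python
import math

def fao56_reference_et0(t_mean, rn, g, u2, ea, pressure=101.3):
    """Daily reference evapotranspiration (mm/day), FAO-56 Penman-Monteith.

    t_mean   : mean air temperature (deg C)
    rn       : net radiation (MJ m-2 day-1)
    g        : soil heat flux (MJ m-2 day-1), ~0 for daily time steps
    u2       : wind speed at 2 m (m/s)
    ea       : actual vapour pressure (kPa)
    pressure : atmospheric pressure (kPa)
    """
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of the es curve
    gamma = 0.000665 * pressure                                # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Hypothetical mid-season day: ET0 of roughly 4-5 mm/day is typical.
et0 = fao56_reference_et0(t_mean=20.0, rn=14.0, g=0.0, u2=2.0, ea=1.4)
```

Crop water requirements then follow by scaling ET0 with crop coefficients, which is the part the simulation models handle.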
NASA Astrophysics Data System (ADS)
Lereboullet, A.-L.; Beltrando, G.; Bardsley, D. K.
2012-04-01
The wine industry is very sensitive to extreme weather events, especially to temperatures above 35°C and to drought. In a context of global climate change, Mediterranean climate regions are predicted to experience higher variability in rainfall and temperatures and an increased occurrence of extreme weather events. Some viticultural systems could be particularly at risk in those regions, considering their marginal position in the climatic growth range of Vitis vinifera, the long commercial lifespan of a vineyard, the high added value of wine and the volatile nature of global markets. The wine industry, like other agricultural systems, is embedded in complex networks of climatic and non-climatic (other physical, economic, social and legislative) components, with constant feedbacks. We use a socio-ecosystem approach to analyse the adaptation of two Mediterranean viticultural systems to the recent and future increase in extreme weather events. The present analysis focuses on two wine regions with a hot-summer Mediterranean climate (Csb type in the Köppen classification): Côtes-du-Roussillon in southern France and McLaren Vale in southern Australia. Using climate data from two synoptic weather stations, Perpignan (France) and Adelaide (Australia), with time series running from 1955 to 2010, we highlight changes in rainfall patterns and an increase in the number of days with Tx > 35°C over the last three decades in both regions. Climate models (DRIAS project data for France and CSIRO Mk3.5 for Australia) project similar trends in the future. To date, very few projects have focused on an international comparison of the adaptive capacity of viticultural systems to climate change with a holistic approach. Here, the analysis of climate data was complemented by twenty in-depth semi-structured interviews with key actors in the two regional wine industries, in order to analyse the adaptation strategies put in place in response to recent climate evolution.
This mixed-methods approach allows for a comprehensive assessment of adaptation capacity of the two viticultural systems to future climate change. The strategies of grape growers and wine producers focus on maintaining optimal yields and a constant wine style adapted to markets in a variable and uncertain climate. Their implementation and efficiency depend strongly on non-climatic factors. Thus, adaptation capacity to recent and future climate change depends strongly on adaptation to other non-climatic changes.
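The temperature-extreme indicator used in the station analysis above (number of days with Tx > 35 °C) reduces to a simple count per year plus a trend fit. The synthetic data below merely illustrate the calculation under an assumed warming rate; they are not the Perpignan or Adelaide records:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily summer maxima for 1955-2010 with an imposed warming trend
# (0.03 deg C per year on the mean), 90 summer days per year.
years = np.arange(1955, 2011)
hot_days = []
for k in range(years.size):
    tx = rng.normal(29.0 + 0.03 * k, 4.0, size=90)
    hot_days.append(int((tx > 35.0).sum()))
hot_days = np.array(hot_days)

# Linear trend in the count of extreme days, expressed per decade.
slope_per_year = np.polyfit(years, hot_days, 1)[0]
trend_per_decade = 10.0 * slope_per_year
```

With a warming mean, the count of threshold exceedances rises even though the threshold itself is fixed, which is why such indices respond strongly to modest mean shifts.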
González, Camila; Paz, Andrea; Ferro, Cristina
2014-01-01
Visceral leishmaniasis (VL) is caused by the trypanosomatid parasite Leishmania infantum (=Leishmania chagasi), and is epidemiologically relevant due to its wide geographic distribution, the number of annual cases reported and the increase in its co-infection with HIV. Two vector species have been incriminated in the Americas: Lutzomyia longipalpis and Lutzomyia evansi. In Colombia, L. longipalpis is distributed along the Magdalena River Valley while L. evansi is only found in the northern part of the country. Regarding the epidemiology of the disease, in Colombia the incidence of VL has decreased over the last few years without any intervention being implemented. Additionally, changes in transmission cycles have been reported, with urban transmission occurring on the Caribbean Coast. In Europe and North America climate change seems to be driving a latitudinal shift in leishmaniasis transmission. Here, we explored the spatial distribution of the two known vector species of L. infantum in Colombia and projected their future distributions under climate change scenarios to establish the expansion potential of the disease. An updated database including L. longipalpis and L. evansi collection records from Colombia was compiled. Ecological niche models were built for each species using the Maxent software and 13 Worldclim bioclimatic coverages. Projections were made for the pessimistic CSIRO A2 scenario, which predicts the largest increase in temperature under no emission reduction, and the optimistic Hadley B2 scenario, which predicts the minimum increase in temperature. The database contained 23 records for L. evansi and 39 records for L. longipalpis, distributed along the Magdalena River Valley and the Caribbean Coast, where the potential distribution areas of both species were also predicted by Maxent. Climate change projections showed an overall reduction in the spatial distribution of the two vector species, promoting a shift in the altitudinal distribution of L.
longipalpis and confining L. evansi to certain regions in the Caribbean Coast. Altitudinal shifts have been reported for cutaneous leishmaniasis vectors in Colombia and Peru. Here, we predict the same outcome for VL vectors in Colombia. Changes in spatial distribution patterns could be affecting local abundances due to climatic pressures on vector populations thus reducing the incidence of human cases. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
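Maxent itself is a dedicated maximum-entropy package, but the underlying idea of relating occurrence records to bioclimatic variables, and of a warming scenario pushing sites out of the suitable range, can be illustrated with a much simpler climatic-envelope (BIOCLIM-style) model. All values below are invented:

```python
import numpy as np

# Toy occurrence records: rows are collection sites, columns are two
# bioclim-style variables (annual mean temperature, deg C; annual
# precipitation, mm).
occurrences = np.array([
    [26.0, 1200.0], [27.5, 1400.0], [25.0, 1100.0],
    [28.0, 1500.0], [26.5, 1300.0],
])

lo = occurrences.min(axis=0)   # per-variable lower bounds of the envelope
hi = occurrences.max(axis=0)   # per-variable upper bounds

def in_envelope(site):
    """True if every variable lies within the range seen at occurrence sites."""
    site = np.asarray(site, dtype=float)
    return bool(np.all((site >= lo) & (site <= hi)))

# Present climate at a site falls inside the envelope; a projected warmer
# climate at the same site falls outside it (temperature above the range).
present = in_envelope([26.0, 1250.0])
future = in_envelope([30.5, 1250.0])
```

Maxent replaces this hard rectangular envelope with a fitted probability surface, but the projection step, re-evaluating suitability under scenario climate layers, works the same way.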
OZFLUX: Water, Energy, and Carbon Cycles in Australian Terrestrial Systems
NASA Astrophysics Data System (ADS)
Leuning, R.; Cleugh, H. A.; Finnigan, J. J.; Wang, Y.; Barrett, D. J.; Zegelin, S.
2001-12-01
The paper introduces the OZFLUX network which is being established to study several Australian ecosystems, discusses the analysis of eddy covariance data from tower-based flux stations, and then examines use of the flux data and a SVAT model within an atmospheric transport model to estimate regional fluxes. Lack of energy closure by eddy covariance measurements is commonly observed for Euroflux and Ameriflux installations. The underestimates of H + λE may result from the way water vapor concentrations are determined using closed-path infrared gas analyzers. A comparison of open- and closed-path analyzers shows that energy closure to better than 95% can be achieved with both systems when water vapor concentrations are expressed as mixing ratios in dry air, along with careful choice of the coordinate framework and the averaging periods used to calculate fluxes. Water, energy and carbon dioxide fluxes for two ecosystems are compared: 1) a 40 m tall, cool temperate Eucalyptus forest in SE Australia, and 2) a seasonally dry, tropical savanna woodland with sparsely arrayed, 10 m tall, Eucalyptus trees growing in a C4 grassland, in northern Queensland. Peak carbon dioxide uptake by the tall forest in the southern winter (T < 5 °C) is -10 μmol m-2 s-1 compared to -2 μmol m-2 s-1 for the savanna (T > 20 °C), while evapotranspiration fluxes are similar (200 W m-2). The differences arise because grasses in the savanna are dormant at this time. Seasonal carbon uptake is greatest in the summer for the temperate forest, and during the summer rainfall period from November to March for the savanna when grasses are actively growing. Fluxes measured at the two sites were used to test and parameterize the CSIRO Biosphere Model (CBM), which forms the lower boundary of a large-scale atmospheric transport model (DARLAM).
We discuss the estimation of key parameters for CBM using ecological data on net primary production, and explain how, using a multiple-constraint approach, we may use DARLAM to estimate net fluxes at regional and continental scales. This involves constraining model predictions of fluxes and 4-D concentration fields, with measurements of fluxes, atmospheric carbon dioxide concentrations from a sparse network of towers, and surface radiances measured remotely.
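The energy-closure check discussed above compares the turbulent fluxes against the available energy; closure is simply the ratio (H + λE)/(Rn - G), with values near 1 indicating that the eddy covariance system captures the full energy balance. A minimal version, with hypothetical half-hourly values:

```python
def energy_closure_ratio(h, le, rn, g):
    """Eddy-covariance energy balance closure: (H + LE) / (Rn - G).

    h, le : sensible and latent heat fluxes (W m-2)
    rn    : net radiation (W m-2)
    g     : ground heat flux (W m-2)
    A ratio near 1.0 indicates good closure; persistent values well
    below 1.0 indicate the underestimation discussed in the text.
    """
    return (h + le) / (rn - g)

# Hypothetical midday half-hour: closure of about 96%.
ratio = energy_closure_ratio(h=150.0, le=280.0, rn=500.0, g=50.0)
```

Computing λE from mixing ratios in dry air, as the paper recommends, enters this ratio through the latent heat term.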
An Innovative Technology to Support Independent Living: The Smarter Safer Homes Platform.
Karunanithi, Mohanraj; Zhang, Qing
2018-01-01
The proportion of the Australian population aged over 65 years is currently 14% (3.3 million) and is expected to increase to 21% (8.3 million) by 2053, with the share aged over 85 years growing from 1.9% to 4.2%. With increasing age there is a higher prevalence of long-term health conditions and more frequent visits to doctors or hospitals, particularly when functional capacity declines. This adds burden to an already stretched health system, for example through overcrowding of hospital emergency departments. This is partly because many ageing patients with high care needs occupy a significant number of hospital beds while waiting for one of the limited placements in residential care. To address the growth of the ageing population and its impact on society, the Australian government has funded aged-care reforms with initiatives to help the older community stay at home longer, most recently implemented through consumer-directed aged-care reform. Advances in information and communication technologies, particularly the advancement of lifestyle technologies and their increased use, show promise for telehealth approaches that support older people to live longer in their homes. In 2011, CSIRO took the initiative to develop a consumer-designed platform to assist and support the older community in their functional ability and health for day-to-day living in their home environment. This platform was called the Smarter Safer Homes technology. The Smarter Safer Homes platform infers Activities of Daily Living information from a passive sensor-enabled environment and correlates this information with home-based health monitoring measurements. The use of sensors enables the information to be captured unobtrusively. The information is then provided to the individual in the household through an iPad application, and can also be shared with formal and informal carers.
The platform has undergone several pilot studies exploring an objective, individualised approach to Activities of Daily Living based on an individual's profile, and its applicability in multi-resident home settings in regional Queensland. Furthermore, the platform is being validated in a clinical study of its application in aged-care services across geographical settings ranging from urban to remote communities. This paper describes the platform, the outcomes of the pilot studies, and its future application.
A Test of the Optimality Approach to Modelling Canopy gas Exchange by Natural Vegetation
NASA Astrophysics Data System (ADS)
Schymanski, S. J.; Sivapalan, M.; Roderick, M. L.; Beringer, J.; Hutley, L. B.
2005-12-01
Natural vegetation has co-evolved with its environment over a long period of time and natural selection has led to a species composition that is most suited for the given conditions. Part of this adaptation is the vegetation's water use strategy, which determines the amount and timing of water extraction from the soil. Knowing that water extraction by vegetation often accounts for over 90% of the annual water balance in some places, we need to understand its controls if we want to properly model the hydrologic cycle. Water extraction by roots is driven by transpiration from the canopy, which in turn is an inevitable consequence of CO2 uptake for photosynthesis. Photosynthesis provides plants with their main building material, carbohydrates, and with the energy necessary to thrive and prosper in their environment. Therefore we expect that natural vegetation would have evolved an optimal water use strategy to maximise its `net carbon profit' (the difference between carbon acquired by photosynthesis and carbon spent on maintenance of the organs involved in its uptake). Based on this hypothesis and on an ecophysiological gas exchange and photosynthesis model (Cowan and Farquhar 1977; von Caemmerer 2000), we model the optimal vegetation for a site in Howard Springs (N.T., Australia) and compare the modelled fluxes with measurements by Beringer, Hutley et al. (2003). The comparison gives insights into theoretical and real controls on transpiration and photosynthesis and tests the optimality approach to modelling gas exchange of natural vegetation with unknown properties. The main advantage of the optimality approach is that no assumptions about the particular vegetation on a site are needed, which makes it very powerful for predicting vegetation response to long-term climate- or land use change. Literature: Beringer, J., L. B. Hutley, et al. (2003). "Fire impacts on surface heat, moisture and carbon fluxes from a tropical savanna in northern Australia." 
International Journal of Wildland Fire 12(3-4): 333-340. - Cowan, I. R. and G. D. Farquhar (1977). Stomatal Function in Relation to Leaf Metabolism and Environment. Integration of activity in the higher plant. D. H. Jennings. Cambridge, Cambridge University Press: 471-505. - von Caemmerer, S. (2000). Biochemical Models of Leaf Photosynthesis. Collingwood, CSIRO Publishing.
The Impact of Urban Growth and Climate Change on Heat Stress in an Australian City
NASA Astrophysics Data System (ADS)
Chapman, S.; Mcalpine, C. A.; Thatcher, M. J.; Salazar, A.; Watson, J. R.
2017-12-01
Over half of the world's population lives in urban areas. Most people will therefore be exposed to climate change in an urban environment. One of the climate risks facing urban residents is heat stress, which can lead to illness and death. Urban residents are at increased risk of heat stress due to the urban heat island effect. The urban heat island is a modification of the urban environment and increases temperatures on average by 2°C, though the increase can be much higher, up to 8°C when wind speeds and cloud cover are low. The urban heat island is also expected to increase in the future due to urban growth and intensification, further exacerbating urban heat stress. Climate change alters the urban heat island due to changes in weather (wind speed and cloudiness) and evapotranspiration. Future urban heat stress will therefore be affected by urban growth and climate change. The aim of this study was to examine the impact of urban growth and climate change on the urban heat island and heat stress in Brisbane, Australia. We used CCAM, the conformal cubic atmospheric model developed by the CSIRO, to examine temperatures in Brisbane using scenarios of urban growth and climate change. We downscaled the urban climate using CCAM, based on bias corrected Sea Surface Temperatures from the ACCESS1.0 projection of future climate. We used Representative Concentration Pathway (RCP) 8.5 for the periods 1990 - 2000, 2049 - 2060 and 2089 - 2090 with current land use and an urban growth scenario. The present day climatology was verified using weather station data from the Australian Bureau of Meteorology. We compared the urban heat island of the present day with the urban heat island with climate change to determine if climate change altered the heat island. We also calculated heat stress using wet-bulb globe temperature and apparent temperature for the climate change and base case scenarios. 
We found that the urban growth scenario increased present-day temperatures by 0.5°C in the inner city, and by 6°C during a period of hot days. The future-climate scenario analysis is ongoing and will show how heat stress will change in Brisbane when both urban growth and climate change are considered.
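Of the two heat-stress measures named above, apparent temperature is the simpler to sketch. The non-radiative Steadman formulation (the form used operationally by the Australian Bureau of Meteorology) combines air temperature, humidity and wind; the input values below are invented:

```python
import math

def apparent_temperature(ta, rh, wind):
    """Steadman apparent temperature, non-radiative form (deg C).

    ta   : air temperature (deg C)
    rh   : relative humidity (%)
    wind : wind speed at 10 m (m/s)
    """
    # Water vapour pressure (hPa) from temperature and relative humidity.
    e = rh / 100.0 * 6.105 * math.exp(17.27 * ta / (237.7 + ta))
    return ta + 0.33 * e - 0.70 * wind - 4.00

# A hot, humid, near-calm day "feels" several degrees hotter than the
# measured air temperature.
at = apparent_temperature(ta=35.0, rh=60.0, wind=1.0)
```

Wet-bulb globe temperature additionally requires radiation terms, which is why climate-model output (cloud, wind, humidity fields) is needed to evaluate it for future scenarios.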
NASA Astrophysics Data System (ADS)
Schaaf, C.; Paynter, I.; Saenz, E. J.; Li, Z.; Strahler, A. H.; Peri, F.; Erb, A.; Raumonen, P.; Muir, J.; Howe, G.; Hewawasam, K.; Martel, J.; Douglas, E. S.; Chakrabarti, S.; Cook, T.; Schaefer, M.; Newnham, G.; Jupp, D. L. B.; van Aardt, J. A.; Kelbe, D.; Romanczyk, P.; Faulring, J.
2014-12-01
Terrestrial lidars are increasingly being deployed in a variety of ecosystems to calibrate and validate large scale airborne and spaceborne estimates of forest structure and biomass. While these lidars provide a wealth of high resolution information on canopy structure and understory vegetation, they tend to be expensive, slow scanning and somewhat ponderous to deploy. Therefore, frequent deployments and characterization of larger areas of a hectare or more can still be challenging. This suggests a role for low cost, ultra-portable, rapid scanning (but lower resolution) instruments -- particularly in scanning extreme environments and as a way to augment and extend strategically placed scans from the more highly capable lidars. The Canopy Biomass Lidar (CBL) is an inexpensive, highly portable, fast-scanning (33 seconds), time-of-flight, terrestrial laser scanning (TLS) instrument, built by U Mass Boston in collaboration with RIT. The instrument uses a 905 nm SICK time-of-flight laser with 0.25° resolution and 30 m range. The higher resolution, full-waveform Dual Wavelength Echidna® Lidar (DWEL), developed by Boston University, U Mass Lowell and U Mass Boston, builds on the Australian CSIRO single wavelength, full-waveform Echidna® Validation Instrument (EVI), but utilizes two simultaneous laser pulses at 1064 and 1548 nm to separate woody returns from those of foliage at ranges of up to 100 m.
The UMass Boston CBL has been deployed in rangelands (San Joaquin Experimental Range, CA), high altitude conifers (Sierra National Forest, CA), mixed forests (Harvard Forest LTER MA), tropical forests (La Selva and Sirena Biological Stations, Costa Rica), eucalypts (Karawatha, Brisbane TERN, Australia), and woodlands (Alice Holt Forest, UK), frequently along-side the DWEL, as well as in more challenging environments such as mangrove forests (Corcovado National Park, Costa Rica) and Massachusetts salt marshes and eroding bluffs (Plum Island LTER, and UMass Boston Nantucket Field Station). Multiple hemispherical point clouds can be combined to generate detailed reconstructions of ecosystem biomass and structure. By combining these scans and reconstructions, the strengths of the DWEL can be coupled with the speed and portability of the CBL to extrapolate comprehensive structure information to larger areas.
NASA Astrophysics Data System (ADS)
Dougherty, K.; Sarkissian, J.
2002-01-01
The recent Australian film, The Dish, highlighted the role played by the Parkes Radio Telescope in tracking and communicating with the Apollo 11 mission. However the events depicted in this film represent only a single snapshot of the role played by Australian radio astronomy and space tracking facilities in the exploration of the Solar System. In 1960, NASA established its first deep space tracking station outside the United States at Island Lagoon, near Woomera in South Australia. From 1961 until 1972, this station was an integral part of the Deep Space Network, responsible for tracking and communicating with NASA's interplanetary spacecraft. It was joined in 1965 by the Tidbinbilla tracking station, located near Canberra in eastern Australia, a major DSN facility that is still in operation today. Other NASA tracking facilities (for the STADAN and Manned Space Flight networks) were also established in Australia during the 1960s, making this country home to the largest number of NASA tracking facilities outside the United States. At the same time as the Island Lagoon station was being established in South Australia, one of the world's major radio telescope facilities was being established at Parkes, in western New South Wales. This 64-metre diameter dish, designed and operated by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), was also well-suited for deep space tracking work: its design was, in fact, adapted by NASA for the 64-metre dishes of the Deep Space Network. From Mariner II in 1962 until today, the Parkes Radio Telescope has been contracted by NASA on many occasions to support interplanetary spacecraft, as well as the Apollo lunar missions. This paper will outline the role played by both the Parkes Radio Telescope and the NASA facilities based in Australia in the exploration of the Solar System between 1960 and 1976, when the Viking missions landed on Mars. 
It will outline the establishment and operation of the Deep Space Network in Australia and consider the joint US-Australian agreement under which it was managed. It will also discuss the relationship of the NASA stations to the Parkes Radio Telescope and the integration of Parkes into the NASA network to support specific space missions. The particular involvement of Australian facilities in significant space missions will be highlighted and assessed.
NASA Astrophysics Data System (ADS)
Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.
2013-10-01
This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations, the GRIP site at Summit, Central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30 yr time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 BP and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather or climate driven noise in the 10Be deposition flux during different stages of climate. The production signal varies at lower frequencies, dominated by the 11 yr solar cycle within the 30 yr time scale of these experiments. The climatic noise occurs at higher frequencies. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. Then, we apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, which is an effective method for adaptively decomposing a time series into different frequency components. The low frequency components and the long term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high frequency components represent climate driven noise related to the seasonal cycle of e.g. precipitation and are closely connected to high frequencies of precipitation.
These results show, firstly, that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at the two specific locations. Secondly, noise can be effectively reduced in 10Be deposition data simply by applying EOF analysis when a reasonably large number of data sets is available, or by decomposing individual data sets to filter out high-frequency fluctuations.
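EOF analysis of the kind applied here to the global 10Be deposition field is equivalent to a singular value decomposition of the anomaly matrix (time × space). The synthetic field below, with invented dimensions and noise level, shows how the first principal component recovers a spatially coherent, solar-cycle-like production signal from noisy deposition data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic field: 30 annual values at 50 grid points, built from one
# spatially coherent "production" mode plus white climatic noise.
n_years, n_points = 30, 50
solar_signal = np.sin(2 * np.pi * np.arange(n_years) / 11.0)  # ~11 yr cycle
pattern = rng.normal(1.0, 0.2, n_points)                      # deposition pattern
field = np.outer(solar_signal, pattern) + 0.3 * rng.normal(size=(n_years, n_points))

# EOF analysis = SVD of the anomaly (mean-removed) matrix.
anomalies = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per mode

pc1 = u[:, 0] * s[0]   # first principal component (time series)
eof1 = vt[0]           # first EOF (spatial pattern)
```

The sign of an EOF/PC pair is arbitrary, so comparisons with a known forcing are usually made on the absolute correlation.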
Interactive Visualization and Analysis of Geospatial Data Sets - TrikeND-iGlobe
NASA Astrophysics Data System (ADS)
Rosebrock, Uwe; Hogan, Patrick; Chandola, Varun
2013-04-01
The visualization of scientific datasets is becoming an ever-increasing challenge as advances in computing technologies have enabled scientists to build high resolution climate models that have produced petabytes of climate data. To interrogate and analyze these large datasets in real-time is a task that pushes the boundaries of computing hardware and software. But integrating climate datasets with geospatial data requires a considerable amount of effort and close familiarity with various data formats and projection systems, which has prevented widespread utilization outside of the climate community. TrikeND-iGlobe is a sophisticated software tool that bridges this gap, allowing easy integration of climate datasets with geospatial datasets and providing sophisticated visualization and analysis capabilities. The objective for TrikeND-iGlobe is the continued building of an open source 4D virtual globe application using NASA World Wind technology that integrates analysis of climate model outputs with remote sensing observations as well as demographic and environmental data sets. This will facilitate a better understanding of global and regional phenomena, and the impact analysis of climate extreme events. The critical aim is real-time interactive interrogation. At the data-centric level the primary aim is to enable the user to interact with the data in real-time for the purpose of analysis - locally or remotely. TrikeND-iGlobe provides the basis for the incorporation of modular tools that provide extended interactions with the data, including sub-setting, aggregation, re-shaping, time series analysis methods and animation to produce publication-quality imagery. TrikeND-iGlobe may be run locally or can be accessed via a web interface supported by high-performance visualization compute nodes placed close to the data.
It supports visualizing heterogeneous data formats: traditional geospatial datasets along with scientific datasets with geographic coordinates (NetCDF, HDF, etc.). It also supports multiple data access mechanisms, including HTTP, FTP, WMS, WCS, and the THREDDS Data Server (for NetCDF data). For scientific data, TrikeND-iGlobe supports various visualization capabilities, including animations, vector field visualization, etc. TrikeND-iGlobe is a collaborative open-source project; contributors include NASA (ARC-PX), ORNL (Oak Ridge National Laboratory), Unidata, Kansas University, CSIRO CMAR Australia and Geoscience Australia.
Hydrothermal mineralising systems as critical systems
NASA Astrophysics Data System (ADS)
Hobbs, Bruce
2015-04-01
Hydrothermal mineralising systems as critical systems. Bruce E Hobbs1,2, Alison Ord1 and Mark A. Munro1. 1. Centre for Exploration Targeting, The University of Western Australia, M006, 35 Stirling Highway, Crawley, WA 6009, Australia. 2. CSIRO Earth and Resource Engineering, Bentley, WA, Australia Hydrothermal mineralising systems are presented as large, open chemical reactors held far from equilibrium during their life-time by the influx of heat, fluid and dissolved chemical species. As such they are nonlinear dynamical systems and need to be analysed using the tools that have been developed for such systems. Hydrothermal systems undergo a number of transitions during their evolution and this paper focuses on methods for characterising these transitions in a quantitative manner and establishing whether they resemble first or second (critical) phase transitions or whether they have some other kind of nature. Critical phase transitions are characterised by long range correlations for some parameter characteristic of the system, power-law probability distributions so that there is no characteristic length scale and a high sensitivity to perturbations; as one approaches criticality, characteristic parameters for the system scale in a power law manner with distance from the critical point. The transitions undergone in mineralised hydrothermal systems are: (i) widespread, non-localised mineral alteration involving exothermic mineral reactions that produce hydrous silicate phases, carbonates and iron-oxides, (ii) strongly localised veining, brecciation and/or stock-work formation, (iii) a series of endothermic mineral reactions involving the formation of non-hydrous silicates, sulphides and metals such as gold, (iv) multiple repetitions of transitions (ii) and (iii). We have quantified aspects of these transitions in gold deposits from the Yilgarn craton of Western Australia using wavelet transforms. This technique is convenient and fast. 
It enables one to establish if the transition is multifractal (and if so, quantify the multifractal spectrum) and determine the scale dependence of long-range correlations or anti-correlations. The availability of long drill holes with detailed chemical analyses and mineral abundances derived from hyperspectral data enables individual ore bodies to be characterised in a quantitative manner and constraints placed on whether the various transitions are possibly critical or of some other form. We also present some simple nonlinear models that produce the multifractal character and correlation scaling relations observed in these data sets.
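The abstract does not give the specific wavelet or estimator used; as a schematic stand-in only, the kind of power-law scaling analysis it describes can be sketched with a simple Haar-wavelet variance method (the study's multifractal analysis is more elaborate than this):

```python
import numpy as np

def haar_detail_variance(signal, max_level=6):
    """Variance of Haar wavelet detail coefficients at dyadic scales.

    A power-law relation variance ~ scale**alpha (a straight line in
    log-log space) is the signature of scale-free, long-range
    correlated structure of the kind discussed above.
    """
    x = np.asarray(signal, dtype=float)
    scales, variances = [], []
    for level in range(1, max_level + 1):
        step = 2 ** level
        n = (len(x) // step) * step
        blocks = x[:n].reshape(-1, step)
        half = step // 2
        # Haar detail: difference of means of the two half-blocks
        detail = blocks[:, :half].mean(axis=1) - blocks[:, half:].mean(axis=1)
        scales.append(step)
        variances.append(detail.var())
    return np.array(scales), np.array(variances)

def scaling_exponent(scales, variances):
    """Least-squares slope in log-log space (the scaling exponent)."""
    slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
    return slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))  # long-range correlated test signal
s, v = haar_detail_variance(walk)
alpha = scaling_exponent(s, v)  # positive slope: variance grows with scale
```

A long-range correlated signal (the random walk) yields a positive exponent, whereas uncorrelated noise yields a negative one, illustrating how the method separates the two regimes.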
An investigation of the Carina Nebula
NASA Astrophysics Data System (ADS)
Brooks, Kate J.
2000-10-01
It is well known that the radiation fields and stellar winds of massive stars can drastically affect the physical conditions, structure and chemistry of the giant molecular cloud (GMC) from which they formed. It is also thought that massive stars are at least partly responsible for triggering further star formation within a GMC. The details of this interaction, however, are not well understood and additional detailed study of massive star-forming regions is needed. This study has focused on a multi-wavelength investigation of the Carina Nebula. This is a spectacular massive star-forming region that contains two of the most massive star clusters in our galaxy, Trumpler 14 and Trumpler 16, and one of the most massive stars known -- η Car. The goal of this study has been to obtain information on the molecular gas, ionized gas and photodissociation regions (PDRs) from a collection of instruments which have the highest angular resolution and sensitivity available to date. The Mopra Telescope and the Swedish-ESO Submillimeter Telescope (SEST) were used to obtain a series of molecular line observations of the GMC between 150 and 230 GHz. Observations of H110α recombination-line emission at 4.874 GHz and the related continuum emission were obtained with the Australia Telescope Compact Array and used to study the ionized gas associated with the two HII regions, Car I and Car II. H2 1--0 S(1) (2.12 microns) and Brγ (2.16 microns) observations using the University of New South Wales Infrared Fabry-Perot (UNSWIRF) and 3.29 micron narrow-band observations obtained with the SPIREX/Abu thermal infrared camera were used to study the PDRs on the surface of molecular clumps in the Keyhole region, a dark optical feature in the vicinity of η Car. The results of these observations provide detailed information on the excitation conditions, kinematics and morphology of regions within the HII region/molecular cloud complex of the Carina Nebula. 
In addition, the results confirm that the Carina Nebula is one of the most extreme and complex cases of massive stars interacting with their environment and show that there is still a wealth of information to be gained from future studies of this region. Copies currently available at: http://www.atnf.csiro.au/people/kbrooks/html/publications.html
Further Studies of Forest Structure Parameter Retrievals Using the Echidna® Ground-Based Lidar
NASA Astrophysics Data System (ADS)
Strahler, A. H.; Yao, T.; Zhao, F.; Yang, X.; Schaaf, C.; Wang, Z.; Li, Z.; Woodcock, C. E.; Culvenor, D.; Jupp, D.; Newnham, G.; Lovell, J.
2012-12-01
Ongoing work with the Echidna® Validation Instrument (EVI), a full-waveform, ground-based scanning lidar (1064 nm) developed by Australia's CSIRO and deployed by Boston University in California conifers (2008) and New England hardwood and softwood (conifer) stands (2007, 2009, 2010), confirms the importance of slope correction in forest structural parameter retrieval; detects growth and disturbance over periods of 2-3 years; provides a new way to measure the between-crown clumping factor in leaf area index retrieval using lidar range; and retrieves foliage profiles with more lower-canopy detail than a large-footprint aircraft scanner (LVIS), while simulating LVIS foliage profiles accurately from a nadir viewpoint using a 3-D point cloud. Slope correction is important for accurate retrieval of forest canopy structural parameters, such as mean diameter at breast height (DBH), stem count density, basal area, and above-ground biomass. Topographic slope can induce errors in parameter retrievals because the horizontal plane of the instrument scan, which is used to identify, measure, and count tree trunks, will intersect trunks below breast height in the uphill direction and above breast height in the downhill direction. A test of three methods at southern Sierra Nevada conifer sites improved the range of correlations of these EVI-retrieved parameters with field measurements from 0.53-0.68 to 0.85-0.93 for the best method. EVI scans can detect change, including both growth and disturbance, in periods of two to three years. We revisited three New England forest sites scanned in 2007-2009 or 2007-2010. A shelterwood stand at the Howland Experimental Forest, Howland, Maine, showed increased mean DBH, above-ground biomass and leaf area index between 2007 and 2009. Two stands at the Harvard Forest, Petersham, Massachusetts, suffered reduced leaf area index and reduced stem count density as the result of an ice storm that damaged the stands. 
At one stand, broken tops were visible in the 2010 point cloud canopy reconstruction. A new method for retrieval of the forest canopy between-crown clumping index from angular gaps in hemispherically-projected EVI data traces gaps as they narrow with range from the instrument, thus providing the approximate physical size, rather than angular size, of the gaps. In applying this method to a range of sites in the southern Sierra Nevada, element clumping index values are lower (more between-crown clumping effect) in more open stands, providing improved results as compared to conventional hemispherical photography. In dense stands with fewer gaps, the clumping index values were closer. Foliage profiles retrieved from EVI scans at five Sierra Nevada sites are closely correlated with those of the airborne Lidar Vegetation Imaging Sensor (LVIS) when averaged over a diameter of 100 m. At smaller diameters, the EVI scans have more detail in lower canopy layers and the LVIS and EVI foliage profiles are more distinct. Foliage profiles derived from processing 3-D site point clouds with a nadir view match the LVIS foliage profiles more closely than profiles derived from EVI in scan mode. Removal of terrain effects significantly enhances the match with LVIS profiles. This research was supported by the US National Science Foundation under grant MRI DBI-0923389.
NASA Astrophysics Data System (ADS)
Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas
2016-08-01
Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. 
Facilitated by the small sample requirement, replicate measurements are feasible, potentially allowing the method precision to be improved further. In addition, new analytical approaches are introduced for the accurate correction of the procedural blank and for consistent detection of measurement outliers, the latter based on δ18O-CO2 and the exchange of oxygen between CO2 and the surrounding ice (H2O).
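The pooled standard deviation quoted for the replicate analyses combines within-sample variances across samples, weighted by degrees of freedom; a minimal sketch (with hypothetical replicate values, not data from the study):

```python
import math

def pooled_std(groups):
    """Pooled standard deviation across replicate groups.

    Each group holds replicate measurements of one ice sample; pooling
    combines the within-sample variances weighted by their degrees of
    freedom, as in the 2.0 ppm (CO2) and 0.11 permil (d13C) replicate
    statistics quoted above.
    """
    num = 0.0   # sum of (n_i - 1) * s_i**2
    dof = 0     # total degrees of freedom
    for g in groups:
        n = len(g)
        if n < 2:
            continue  # a single measurement carries no spread information
        mean = sum(g) / n
        var = sum((x - mean) ** 2 for x in g) / (n - 1)
        num += (n - 1) * var
        dof += n - 1
    return math.sqrt(num / dof)

# Hypothetical replicate CO2 mixing ratios (ppm) for three ice samples
samples = [[280.1, 282.0], [275.3, 277.9, 276.1], [284.0, 283.2]]
sp = pooled_std(samples)
```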
Passive microseismic monitoring at an Australian CO2 geological storage site
NASA Astrophysics Data System (ADS)
Siggins, Anthony
2010-05-01
Passive microseismic monitoring at an Australian CO2 geological storage site A.F. Siggins1 and T. Daley2 1. CO2CRC at CSIRO Earth Science and Resource Engineering, Clayton, Victoria, Australia 2. Lawrence Berkeley National Labs, Berkeley, CA, USA Prior to the injection of CO2, background micro-seismic (MS) monitoring commenced at the CO2CRC Otway project site in Victoria, south-eastern Australia on the 4th of October 2007. The seismometer installation consisted of a solar-powered ISS MS™ seismometer connected to two triaxial geophones placed in a gravel pack in a shallow borehole at 10 m and 40 m depth, respectively. The seismometer unit was interfaced to a digital radio which communicated with a remote computer containing the seismic database. This system was designed to give a qualitative indication of any natural micro-seismicity at the site and to provide backup to a more extensive geophone array installed at the reservoir depth of approximately 2000 m. During the period October to December 2007, in excess of 150 two-station events were recorded. These events could all be associated with surface engineering activities during the down-hole installation of instruments at the nearby Naylor 1 monitoring well and surface seismic weight-drop investigations on site. Source location showed the great majority of events to be clustered on the surface. MS activity then quietened down with the completion of these tasks. Injection of a CO2-rich gas commenced in mid-March 2008, continuing until late August 2009, with approximately 65,000 tonnes being injected at 2050 m depth into a depleted natural gas formation. Only a small number of subsurface MS events were recorded during 2008, although the monitoring system suffered from long periods of down-time due to power supply failures and frequent mains power outages in the region. In March 2009 the surface installation was upgraded with new hardware and software. 
The seismometer was replaced with a more sensitive ISS 32-bit GS™ unit. Internet access to the monitoring system and database was then established with a Telstra Next G connection. Due to the higher sensitivity of the seismometer, many more low-amplitude subsurface events are now being recorded, possibly associated with deep truncated faults in the south-west corner of the injection site, although any causal link with the CO2 injection remains to be determined.
Climate Change and Hydrology of a Snow-fed Watershed in Western Nepal
NASA Astrophysics Data System (ADS)
Pandey, V. P.; Bharati, L.; Dhaubanjar, S.
2017-12-01
Many river basins across the globe are experiencing varying degrees of impacts from climate change. Snow-fed watersheds are expected to be affected even more. Chamelia, a tributary of the Mahakali river basin, is a snow-fed river in western Nepal with a catchment area of 1,603 km2 above the confluence with the Mahakali River. Forest cover (40%) and rainfed agriculture (28%) together cover more than two-thirds of the watershed. Topography varies from 505 to 7,090 m. According to data from the Department of Electricity Development (DoED), this watershed contains 14 licensed hydropower projects of varying capacities. Climate change may affect various aspects of a hydropower project, all of which hinge on hydrology. This study simulated the hydrological response of the Chamelia watershed using the Soil and Water Assessment Tool (SWAT) as an input to a hydro-economic model for analysing the water-energy-food nexus. The model was calibrated for the period 2001-2007, validated for 2008-2013, and then used to examine the streamflow response to climate change. Future climates for the near future (2020-2045), mid future (2046-2070) and far future (2071-2095) were considered based on the CSIRO-CCAM Regional Climate Model (RCM), derived from ACCESS1, downloaded from the South Asia CORDEX archive for the RCP4.5 and RCP8.5 scenarios, and then bias-corrected using the linear scaling method. Results based on climate data at Station-103 showed that maximum temperature under the RCP4.5 (RCP8.5) scenario for the near, mid and far futures is projected to increase by 1.2°C (1.4°C), 1.5°C (2.8°C), and 2.3°C (2.6°C), respectively, from the baseline. Minimum temperature for the same scenarios and future periods, in the same order, is projected to increase by 1.1°C (1.5°C), 2.1°C (3.6°C), and 2.5°C (4.7°C), respectively, from the baseline. 
Precipitation, on the other hand, under the RCP4.5 (RCP8.5) scenario for the near, mid and far futures is projected to increase by 10.2% (10.4%), 7.6% (13.6%), and 3.1% (12.2%), respectively, from the baseline. As a result of the projected changes, streamflow is expected to alter at varying rates across the three future periods and two scenarios. The results of this nexus study are useful for water infrastructure planning to ensure long-term sustainability in a changing climate.
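The linear scaling bias-correction step mentioned above can be sketched as follows; in practice the correction factors are usually computed per calendar month, and the numbers here are illustrative, not Chamelia data:

```python
import numpy as np

def linear_scaling(obs, hist, fut, kind="multiplicative"):
    """Linear-scaling bias correction of RCM output.

    Multiplicative scaling (commonly used for precipitation): future
    values are multiplied by mean(obs)/mean(hist). Additive scaling
    (commonly used for temperature): the mean bias mean(obs) -
    mean(hist) is added. A single factor is used here to keep the
    sketch short; monthly factors are typical in practice.
    """
    obs, hist, fut = map(np.asarray, (obs, hist, fut))
    if kind == "multiplicative":
        return fut * (obs.mean() / hist.mean())
    return fut + (obs.mean() - hist.mean())

# Illustrative (not real Chamelia) temperature series, deg C
obs_t = np.array([14.2, 15.0, 13.8])   # observed station temperature
rcm_t = np.array([12.1, 13.0, 11.9])   # raw RCM historical run (cold bias)
fut_t = np.array([13.5, 14.4, 13.1])   # raw RCM future run
corr_t = linear_scaling(obs_t, rcm_t, fut_t, kind="additive")
```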
NASA Astrophysics Data System (ADS)
Levin, I.; Naegler, T.
2009-04-01
Sulphur hexafluoride (SF6) is one of the strongest greenhouse gases per molecule in the atmosphere. SF6 is also one of the six greenhouse gases targeted for reduction under the Kyoto Protocol. Here we present a long-term data set of globally distributed high-precision atmospheric SF6 observations which show an increase in mixing ratios from near zero in the 1970s to a global mean value of 6.3 ppt by the end of 2007. Because of its long atmospheric lifetime of around 3000 years, the accumulation of SF6 in the atmosphere is a direct measure of its global emissions: analysis of our long-term data records implies a decrease of global SF6 sources after 1995, most likely due to emission reductions in industrialised countries. However, after 1998 the global SF6 source increases again, which is probably due to enhanced emissions from transition economies such as China and India. Moreover, observed north-south concentration differences in SF6 suggest that emissions calculated from statistical (bottom-up) information and reported by Annex II parties to the United Nations Framework Convention on Climate Change (UNFCCC) may be too low by up to 50%. This clearly shows the importance of, and need for, atmospheric (top-down) validation of Kyoto reporting, which is only feasible with a dense world-wide observational network for greenhouse and other trace gases. Other members of the Global SF6 Trends Team: R. Heinz (1), D. Osusko (1), E. Cuevas (2), A. Engel (3), J. Ilmberger (1), R.L. Langenfelds (4), B. Neininger (5), C.v. Rohden (1), L.P. Steele (4), A. Varlagin (6), R. Weller (7), D.E. Worthy (8), S.A. Zimov (9) (1) Institut für Umweltphysik, University of Heidelberg, 69120 Heidelberg, Germany, (2) Centro de Investigación Atmosférica de Izaña, Instituto Nacional de Meteorología (INM), 38071 Santa Cruz de Tenerife, Spain, (3) Institut für Atmosphäre und Umwelt, J.W. 
Goethe Universität Frankfurt, 60438 Frankfurt/Main, Germany, (4) Centre for Australian Weather and Climate Research / CSIRO Marine and Atmospheric Research (CMAR), Aspendale, Victoria 3195, Australia, (5) MetAir AG, 6313 Menzingen, Switzerland, (6) Svertsov Institute for Evolutionary and Ecological Problems (IPEE), 117071 Moscow, Russia, (7) Alfred Wegener Institute for Polar and Marine Research, 27568 Bremerhaven, Germany, (8) Environment Canada, Climate Research Division / CCMR, Toronto, ON M3H 5T4, Canada, (9) Cherskii, Republic of Sakha (Yakutia), Russia
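The statement above that atmospheric accumulation directly measures global emissions (given the ~3000-year lifetime, loss is negligible) amounts to a one-line conversion from growth rate to mass flux; a sketch, where the moles-of-air figure is a standard approximation rather than a number from the abstract:

```python
def sf6_emissions_gg_per_yr(delta_ppt_per_yr, moles_air=1.77e20, m_sf6=146.06):
    """Global SF6 emissions implied by an atmospheric growth rate.

    With negligible loss, emissions ~ atmospheric accumulation:
    growth rate (ppt/yr) x total moles of air x molar mass of SF6.
    moles_air ~ 1.77e20 mol is a standard approximation (assumption,
    not from the abstract); m_sf6 is in g/mol.
    """
    mol_per_yr = delta_ppt_per_yr * 1e-12 * moles_air  # ppt -> mole fraction
    return mol_per_yr * m_sf6 / 1e9                    # g/yr -> Gg/yr

e = sf6_emissions_gg_per_yr(0.2)  # ~0.2 ppt/yr growth, illustrative
```

A growth rate of a few tenths of a ppt per year thus corresponds to global emissions of a few Gg per year, the order of magnitude against which bottom-up UNFCCC reports can be checked.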
The Australian Integrated Marine Observing System
NASA Astrophysics Data System (ADS)
Proctor, R.; Meyers, G.; Roughan, M.; Operators, I.
2008-12-01
The Integrated Marine Observing System (IMOS) is a 92M project established with 50M from the National Collaborative Research Infrastructure Strategy (NCRIS) and co-investments from 10 operators, including universities and government agencies (listed below). It is a nationally distributed set of equipment established and maintained at sea, together with oceanographic data and information services, that collectively will contribute to meeting the needs of marine research in both open oceans and over the continental shelf around Australia. In particular, if sustained in the long term, it will permit identification and management of climate change in the marine environment (an area of research that is as yet almost a blank page), studies relevant to conservation of marine biodiversity, and research on the role of the oceans in the climate system. While as an NCRIS project IMOS is intended to support research, the data streams are also useful for many societal, environmental and economic applications, such as management of offshore industries, safety at sea, management of marine ecosystems, fisheries and tourism. The infrastructure also contributes to Australia's commitments to international programs of ocean observing and international conventions, such as the 1982 Law of the Sea Convention that established the Australian Exclusive Economic Zone, the United Nations Framework Convention on Climate Change, the Global Ocean Observing System and the intergovernmental coordinating activity Global Earth Observation System of Systems. IMOS is made up of nine national facilities that collect data, using different components of infrastructure and instruments, and two facilities that manage and provide access to data and enhanced data products, one for in situ data and a second for remotely sensed satellite data. 
The observing facilities include three for the open (bluewater) ocean (Argo Australia, Enhanced Ships of Opportunity and Southern Ocean Time Series), three facilities for coastal currents and water properties (Moorings, Ocean Gliders and HF Radar) and three for coastal ecosystems (Acoustic Tagging and Tracking, Autonomous Underwater Vehicle and a biophysical sensor network on the Great Barrier Reef). The value from this infrastructure investment lies in the coordinated deployment of a wide range of equipment aimed at deriving critical data sets that serve multiple applications. Additional information on IMOS is available at the website (http://www.imos.org.au). The IMOS Operators are Australian Institute of Marine Science, James Cook University, Sydney Institute of Marine Science, Geoscience Australia, Bureau of Meteorology, South Australia Research and Development Institute, University of Western Australia, Curtin University of Technology, CSIRO Marine and Atmospheric Research, University of Tasmania.
NASA Astrophysics Data System (ADS)
Fraser, A.; Palmer, P. I.; Feng, L.; Boesch, H.; Cogan, A.; Parker, R.; Dlugokencky, E. J.; Fraser, P. J.; Krummel, P. B.; Langenfelds, R. L.; O'Doherty, S.; Prinn, R. G.; Steele, L. P.; van der Schoot, M.; Weiss, R. F.
2013-06-01
We use an ensemble Kalman filter (EnKF), together with the GEOS-Chem chemistry transport model, to estimate regional monthly methane (CH4) fluxes for the period June 2009-December 2010 using proxy dry-air column-averaged mole fractions of methane (XCH4) from GOSAT (Greenhouse gases Observing SATellite) and/or NOAA ESRL (Earth System Research Laboratory) and CSIRO GASLAB (Global Atmospheric Sampling Laboratory) CH4 surface mole fraction measurements. Global posterior estimates using GOSAT and/or surface measurements are between 510-516 Tg yr-1, which is less than, though within the uncertainty of, the prior global flux of 529 ± 25 Tg yr-1. We find larger differences between regional prior and posterior fluxes, with the largest changes in monthly emissions (75 Tg yr-1) occurring in Temperate Eurasia. In non-boreal regions the error reductions for inversions using the GOSAT data are at least three times larger (up to 45%) than if only surface data are assimilated, a reflection of the greater spatial coverage of GOSAT, with the two exceptions of latitudes >60° associated with a data filter and over Europe where the surface network adequately describes fluxes on our model spatial and temporal grid. We use CarbonTracker and GEOS-Chem XCO2 model output to investigate model error on quantifying proxy GOSAT XCH4 (involving model XCO2) and inferring methane flux estimates from surface mole fraction data and show similar resulting fluxes, with differences reflecting initial differences in the proxy value. Using a series of observing system simulation experiments (OSSEs) we characterize the posterior flux error introduced by non-uniform atmospheric sampling by GOSAT. We show that clear-sky measurements can theoretically reproduce fluxes within 10% of true values, with the exception of tropical regions where, due to a large seasonal cycle in the number of measurements because of clouds and aerosols, fluxes are within 15% of true fluxes. 
We evaluate our posterior methane fluxes by incorporating them into GEOS-Chem and sampling the model at the location and time of surface CH4 measurements from the AGAGE (Advanced Global Atmospheric Gases Experiment) network and column XCH4 measurements from TCCON (Total Carbon Column Observing Network). The posterior fluxes modestly improve the model agreement with AGAGE and TCCON data relative to prior fluxes, with the correlation coefficients (r2) increasing by a mean of 0.04 (range: -0.17 to 0.23) and the biases decreasing by a mean of 0.4 ppb (range: -8.9 to 8.4 ppb).
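The ensemble Kalman filter at the core of the inversion can be illustrated with a minimal stochastic-EnKF analysis step; this toy example (two flux elements, one observation) is a schematic sketch, not the paper's GEOS-Chem configuration:

```python
import numpy as np

def enkf_update(ensemble, y_obs, obs_operator, obs_err_std, rng):
    """One stochastic EnKF analysis step (schematic only).

    ensemble: (n_members, n_state) prior flux ensemble
    y_obs: (n_obs,) observations (e.g. XCH4 columns)
    obs_operator: maps a state vector to observation space
    """
    n, _ = ensemble.shape
    hx = np.array([obs_operator(m) for m in ensemble])   # (n, n_obs)
    x_mean, hx_mean = ensemble.mean(0), hx.mean(0)
    Xp, Yp = ensemble - x_mean, hx - hx_mean             # ensemble anomalies
    pxy = Xp.T @ Yp / (n - 1)                            # state-obs covariance
    pyy = Yp.T @ Yp / (n - 1) + np.diag(np.full(len(y_obs), obs_err_std ** 2))
    K = pxy @ np.linalg.inv(pyy)                         # Kalman gain
    # perturbed observations, one set per member (stochastic EnKF)
    y_pert = y_obs + rng.normal(0.0, obs_err_std, size=(n, len(y_obs)))
    return ensemble + (y_pert - hx) @ K.T

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.5]])                     # toy 2-flux, 1-obs operator
prior = rng.normal([5.0, 2.0], 1.0, size=(200, 2))
post = enkf_update(prior, np.array([8.0]), lambda x: H @ x, 0.1, rng)
```

The posterior ensemble mean, mapped through the observation operator, is pulled toward the observation, and the posterior spread shrinks relative to the prior, which is the mechanism by which the satellite and surface data constrain the regional fluxes.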
NASA Astrophysics Data System (ADS)
Fraser, A.; Palmer, P. I.; Feng, L.; Boesch, H.; Cogan, A.; Parker, R.; Dlugokencky, E. J.; Fraser, P. J.; Krummel, P. B.; Langenfelds, R. L.; O'Doherty, S.; Prinn, R. G.; Steele, L. P.; van der Schoot, M.; Weiss, R. F.
2012-12-01
We use an ensemble Kalman filter (EnKF), together with the GEOS-Chem chemistry transport model, to estimate regional monthly methane (CH4) fluxes for the period June 2009-December 2010 using proxy dry-air column-averaged mole fractions of methane (XCH4) from GOSAT (Greenhouse gases Observing SATellite) and/or NOAA ESRL (Earth System Research Laboratory) and CSIRO GASLAB (Global Atmospheric Sampling Laboratory) CH4 surface mole fraction measurements. Global posterior estimates using GOSAT and/or surface measurements are between 510-516 Tg yr-1, which is less than, though within the uncertainty of, the prior global flux of 529 ± 25 Tg yr-1. We find larger differences between regional prior and posterior fluxes, with the largest changes (75 Tg yr-1) occurring in Temperate Eurasia. In non-boreal regions the error reductions for inversions using the GOSAT data are at least three times larger (up to 45%) than if only surface data are assimilated, a reflection of the greater spatial coverage of GOSAT, with the two exceptions of latitudes > 60° associated with a data filter and over Europe where the surface network adequately describes fluxes on our model spatial and temporal grid. We use CarbonTracker and GEOS-Chem XCO2 model output to investigate model error on quantifying proxy GOSAT XCH4 (involving model XCO2) and inferring methane flux estimates from surface mole fraction data and show similar resulting fluxes, with differences reflecting initial differences in the proxy value. Using a series of observing system simulation experiments (OSSEs) we characterize the posterior flux error introduced by non-uniform atmospheric sampling by GOSAT. We show that clear-sky measurements can theoretically reproduce fluxes within 5% of true values, with the exception of South Africa and Tropical South America where, due to a large seasonal cycle in the number of measurements because of clouds and aerosols, fluxes are within 17% and 19% of true fluxes, respectively. 
We evaluate our posterior methane fluxes by incorporating them into GEOS-Chem and sampling the model at the location and time of independent surface CH4 measurements from the AGAGE (Advanced Global Atmospheric Gases Experiment) network and column XCH4 measurements from TCCON (Total Carbon Column Observing Network). The posterior fluxes modestly improve the model agreement with AGAGE and TCCON data relative to prior fluxes, with the correlation coefficients (r2) increasing by a mean of 0.04 (range: -0.17, 0.23) and the biases decreasing by a mean of 0.4 ppb (range: -8.9, 8.4 ppb).
Faecal bulking efficacy of Australasian breakfast cereals.
Monro, John A
2002-01-01
Faecal bulk may play an important role in preventing a range of disorders of the large bowel, but as yet there is little information available on the relative faecal bulking capacities of various foods. Breakfast cereals are often promoted as a good source of potential bulk for 'inner health' because they provide dietary fibre, but their relative abilities to provide faecal bulk per se have not been described. The faecal bulking efficacy of 28 representative Australasian breakfast cereals was therefore measured. A rat model developed for the purpose, and shown to respond to cereal fibres similarly to humans, was used to measure faecal bulking efficacy as the increase in fully hydrated faecal weight/100 g diet, based on precise measurements of food intake, faecal dry matter output and faecal water-holding capacity (g water held without stress/g faecal dry matter). Compared to a baseline diet containing 50% sucrose, increments in hydrated faecal weight due to 50% breakfast cereal ranged from slightly negative (Cornflakes, -2 g/100 g diet) to about 80 g/100 g diet (San Bran). Most breakfast cereals increased hydrated faecal weight by between 10 and 20 g/100 g diet from a baseline of 21 +/- 1.5 g/100 g diet, but four products containing high levels of wheat bran had an exceptionally large impact on hydrated faecal weight (increment > 20 g/100 g diet); these changes resulted more from relative changes in dry matter output than in faecal water retention per gram. However, as faecal water retention was about 2.5 g water/g faecal dry matter on average, increases in dry matter represented large increases in faecal water load. Faecal bulking indices (FBI) for most of the breakfast cereals were less than 20 (wheat bran = 100). The content of wheat bran equivalents for faecal bulk (WBE(fb)) in the breakfast cereals was calculated from FBI. 
Most breakfast cereals contributed, per serve, less than 10% of a theoretical daily reference value for faecal bulk (DRV(fb) = 63 WBE(fb)/day), which was based on data from human clinical trials and dietary fibre recommendations. Based on the WBE(fb) contribution/serving that would be required to meet the DRV(fb) from the number of servings of dietary fibre sources in the CSIRO 12345+ food and nutrition plan, the results suggest that although some high bran breakfast cereals may contribute substantially to, and many are reasonable sources of, faecal bulk, for most of them, one or two servings at breakfast cannot be relied on to effectively redress shortfalls in faecal bulk elsewhere in the diet.
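Given an FBI expressed relative to wheat bran (= 100) and the daily reference value DRV(fb) of 63 g WBE per day defined above, the per-serving contribution reduces to a one-line calculation; the example FBI and serving size below are hypothetical, not measured values from the study:

```python
def percent_of_drv(fbi, serving_g, wheat_bran_fbi=100.0, drv_wbe=63.0):
    """Percent of the daily faecal-bulk reference value from one serving.

    FBI is relative to wheat bran (= 100), so a food's wheat bran
    equivalents per serving are serving_g * fbi / wheat_bran_fbi; the
    DRV(fb) is 63 g WBE/day as defined in the abstract. The fbi and
    serving_g arguments in the example are hypothetical.
    """
    wbe = serving_g * fbi / wheat_bran_fbi  # grams of wheat bran equivalents
    return 100.0 * wbe / drv_wbe

share = percent_of_drv(fbi=18.0, serving_g=30.0)  # a typical FBI < 20 cereal
```

Consistent with the conclusion above, a 30 g serving of a cereal with FBI below 20 supplies well under 10% of the daily reference value.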
NASA Astrophysics Data System (ADS)
Heikkilä, U.; Shi, X.; Phipps, S. J.; Smith, A. M.
2014-04-01
This study investigates the effect of deglacial climate on the deposition of the solar proxy 10Be globally, and at two specific locations: the GRIP site at Summit, central Greenland, and the Law Dome site in coastal Antarctica. The deglacial climate is represented by three 30-year time slice simulations of 10 000 BP (years before present = 1950 CE), 11 000 and 12 000 BP, compared with a preindustrial control simulation. The model used is the ECHAM5-HAM atmospheric aerosol-climate model, driven with sea-surface temperatures and sea ice cover simulated using the CSIRO Mk3L coupled climate system model. The focus is on isolating the 10Be production signal, driven by solar variability, from the weather- or climate-driven noise in the 10Be deposition flux during different stages of climate. The production signal varies at lower frequencies, dominated by the 11-year solar cycle within the 30-year timescale of these experiments, while the climatic noise occurs at frequencies higher than the 11-year cycle over the 30-year period studied. We first apply empirical orthogonal function (EOF) analysis to global 10Be deposition on the annual scale and find that the first principal component, consisting of the spatial pattern of mean 10Be deposition and the temporally varying solar signal, explains 64% of the variability. The following principal components are closely related to those of precipitation. We then apply ensemble empirical mode decomposition (EEMD) analysis to the time series of 10Be deposition at GRIP and at Law Dome, an effective method for adaptively decomposing a time series into different frequency components. The low-frequency components and the long-term trend represent production and have reduced noise compared to the entire frequency spectrum of the deposition. The high-frequency components represent climate-driven noise related to the seasonal cycle of, e.g., precipitation and are closely connected to the high frequencies of precipitation. 
These results show, firstly, that the 10Be atmospheric production signal is preserved in the deposition flux to the surface even during climates very different from today's, both in global data and at two specific locations. Secondly, noise can be effectively reduced from 10Be deposition data simply by applying EOF analysis when a reasonably large number of data sets is available, or by decomposing individual data sets to filter out high-frequency fluctuations.
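Computationally, the EOF step described above is an SVD of the (time, space) anomaly field; a minimal sketch with a synthetic 11-year-cycle field (illustrative only, not the ECHAM5-HAM output):

```python
import numpy as np

def eof_analysis(field):
    """EOF (principal component) analysis of a (time, space) field via SVD.

    Returns spatial patterns (EOFs), principal component time series
    (PCs) and the fraction of variance explained by each mode, as used
    above to isolate the production-driven mode of 10Be deposition.
    """
    anomalies = field - field.mean(axis=0)      # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    pcs = u * s                                 # PC time series (columns)
    eofs = vt                                   # spatial patterns (rows)
    return eofs, pcs, variance_fraction

# Toy field: one spatially coherent "production" mode plus weather noise
rng = np.random.default_rng(2)
t = np.linspace(0.0, 30.0, 360)                 # 30 "years", monthly steps
solar = np.sin(2 * np.pi * t / 11.0)            # 11-year cycle
pattern = rng.uniform(0.5, 1.5, size=40)        # 40 grid points
field = np.outer(solar, pattern) + 0.2 * rng.standard_normal((360, 40))
eofs, pcs, frac = eof_analysis(field)
```

The first mode recovers the imposed solar cycle and dominates the variance, mirroring how the first principal component of the simulated 10Be deposition carries the production signal.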
NASA Astrophysics Data System (ADS)
Lemaire, Vincent E. P.; Colette, Augustin; Menut, Laurent
2016-03-01
Because of its sensitivity to unfavorable weather patterns, air pollution is sensitive to climate change, so that in the future a climate penalty could jeopardize the expected efficiency of air pollution mitigation measures. A common method to assess the impact of climate on air quality consists in implementing chemistry-transport models forced by climate projections. However, the computing cost of such methods requires optimizing ensemble exploration techniques. Using a training data set from a deterministic projection of climate and air quality over Europe, we identified the main meteorological drivers of air quality for eight regions in Europe and developed statistical models that could be used to predict air pollutant concentrations. The evolution of the key climate variables driving either particulate or gaseous pollution allows selecting the members of the EuroCordex ensemble of regional climate projections that should be used in priority for future air quality projections (CanESM2/RCA4, CNRM-CM5-LR/RCA4, CSIRO-Mk3-6-0/RCA4 and MPI-ESM-LR/CCLM, following the EuroCordex terminology). After having tested the validity of the statistical model in predictive mode, we provide ranges of uncertainty attributed to the spread of the regional climate projection ensemble by the end of the century (2071-2100) under the RCP8.5 scenario. In the three regions where the statistical model of the impact of climate change on PM2.5 offers satisfactory performance, we find a climate benefit (a decrease of PM2.5 concentrations under future climate) of -1.08 (±0.21), -1.03 (±0.32), and -0.83 (±0.14) µg m-3 for Eastern Europe, Mid-Europe and Northern Italy, respectively. In the British-Irish Isles, Scandinavia, France, the Iberian Peninsula and the Mediterranean, the statistical model is not considered skillful enough to draw any conclusion for PM2.5. 
In Eastern Europe, France, the Iberian Peninsula, Mid-Europe and Northern Italy, the statistical model of the impact of climate change on ozone was considered satisfactory, and it confirms a climate penalty on ozone of 10.51 (±3.06), 11.70 (±3.63), 11.53 (±1.55), 9.86 (±4.41) and 4.82 (±1.79) µg m^-3, respectively. In the British-Irish Isles, Scandinavia and the Mediterranean, the skill of the statistical model was not considered robust enough to draw any conclusion for ozone pollution.
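The "statistical model" approach, reduced to its simplest form, is a regression of pollutant concentration on meteorological drivers. The sketch below fits ordinary least squares to synthetic data; the drivers, coefficients and noise level are invented for illustration and are not the regressors or values used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# hypothetical daily meteorological drivers
temperature = rng.normal(15.0, 8.0, n)   # degrees C
wind_speed = rng.normal(4.0, 1.5, n)     # m/s
precipitation = rng.exponential(2.0, n)  # mm

# hypothetical linear response: PM2.5 rises with temperature, falls with wind and rain
pm25 = (12.0 + 0.3 * temperature - 1.1 * wind_speed - 0.4 * precipitation
        + rng.normal(0.0, 1.0, n))

# ordinary least-squares fit of the statistical model
X = np.column_stack([np.ones(n), temperature, wind_speed, precipitation])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
predicted = X @ coef
```

Once fitted on a chemistry-transport training run, such a model can be evaluated cheaply on every member of a climate ensemble, which is what makes the ensemble exploration tractable.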
The Parkes H I Survey of the Magellanic System
NASA Astrophysics Data System (ADS)
Brüns, C.; Kerp, J.; Staveley-Smith, L.; Mebold, U.; Putman, M. E.; Haynes, R. F.; Kalberla, P. M. W.; Muller, E.; Filipovic, M. D.
2005-03-01
We present the first fully and uniformly sampled, spatially complete HI survey of the entire Magellanic System with high velocity resolution (Δv = 1.0 km s^-1), performed with the Parkes Telescope. Approximately 24 percent of the southern sky was covered by this survey on a ≈5´ grid with an angular resolution of HPBW = 14.1 arcmin. A fully automated data-reduction scheme was developed for this survey to handle the large number of HI spectra (1.5×10^6). The individual Hanning-smoothed and polarization-averaged spectra have an rms brightness-temperature noise of σ = 0.12 K. The final data cubes have an rms noise of σrms ≈ 0.05 K and an effective angular resolution of ≈16´. In this paper we describe the survey parameters, the data reduction and the general distribution of the HI gas. The Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are associated with huge gaseous features - the Magellanic Bridge, the Interface Region, the Magellanic Stream, and the Leading Arm - with a total HI mass of M(HI) = 4.87×10^8 M⊙ [d/55 kpc]^2, if all HI gas is at the same distance of 55 kpc. Approximately two thirds of this HI gas is located close to the Magellanic Clouds (Magellanic Bridge and Interface Region), and 25% of the HI gas is associated with the Magellanic Stream. The Leading Arm has a four times lower HI mass than the Magellanic Stream, corresponding to 6% of the total HI mass of the gaseous features. We have analyzed the velocity field of the Magellanic Clouds and their neighborhood by introducing an LMC-standard-of-rest frame. The HI in the Magellanic Bridge shows low velocities relative to the Magellanic Clouds, suggesting an almost parallel motion, while the gas in the Interface Region has significantly higher relative velocities, indicating that this gas is leaving the Magellanic Bridge and building up a new section of the Magellanic Stream. The Leading Arm is connected to the Magellanic Bridge close to an extended arm of the LMC.
The clouds in the Magellanic Stream and the Leading Arm show significant differences, both in the column density distribution and in the shapes of the line profiles. The HI gas in the Magellanic Stream is more smoothly distributed than the gas in the Leading Arm. These morphological differences can be explained if the Leading Arm is at considerably lower z-heights and embedded in a higher pressure ambient medium. The Parkes Telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
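The two per-spectrum noise-reduction steps mentioned above, polarization averaging and Hanning smoothing, are easy to sketch on synthetic noise. The channel count and per-polarization noise level below are assumptions chosen only to be of roughly the survey's order of magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan = 1024

# two synthetic polarization spectra: pure Gaussian noise in brightness temperature (K)
pol_a = rng.normal(0.0, 0.28, n_chan)
pol_b = rng.normal(0.0, 0.28, n_chan)

# polarization averaging cuts the noise by sqrt(2)
spec = 0.5 * (pol_a + pol_b)

# Hanning smoothing: convolution with the (0.25, 0.5, 0.25) kernel
kernel = np.array([0.25, 0.5, 0.25])
smoothed = np.convolve(spec, kernel, mode="same")

rms_before = spec.std()
rms_after = smoothed.std()  # reduced by a further factor sqrt(0.25**2 + 0.5**2 + 0.25**2)
```

For white noise, the two steps together reduce the rms by a factor of about 2.3, at the cost of correlating adjacent channels and slightly degrading the velocity resolution.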
Atmospheric CO2 Over the Last 1000 Years: WAIS Divide Ice Core Record
NASA Astrophysics Data System (ADS)
Ahn, J.; Brook, E. J.
2009-04-01
How atmospheric CO2 varied over the last few thousand years is of great interest because we may see not only natural but also anthropogenic variations (Ruddiman, Climatic Change, 2003). The Law Dome ice cores reveal decadal to centennial variations in CO2 over the last 2000 years (MacFarling Meure et al., Geophys. Res. Lett., 2006). However, these variations have not yet been well confirmed in other ice core records. Here we use a newly drilled WAIS Divide ice core, which is ideal for this purpose because WAIS Divide has a relatively high snow accumulation rate and a narrow gas-age distribution that allow us to observe decadal CO2 variations with minimal damping. We have started an extensive study of CO2 in the WAIS Divide core. So far we have obtained data for 960-1940 A.D. from the WDC05-A core drilled in 2005-2006. 344 ice samples from 103 depths were analyzed, and the standard error of the mean is ~0.8 ppm on average. Ancient air in 8~12 g of bubbly ice is liberated by crushing with steel pins at -35 °C and trapped in stainless steel tubes at -262 °C. The CO2 mixing ratio in the extracted air is precisely determined using a gas chromatographic method. Details of the high-precision methods are described in Ahn et al. (J. of Glaciology, in press). Our new results show preindustrial atmospheric CO2 variability of ~10 ppm. The most striking feature of the record is a rapid atmospheric CO2 decrease of 7~8 ppm within ~20 years at ~1600 A.D. Considering the larger smoothing of gas records in the WAIS Divide core relative to Law Dome, our results confirm the atmospheric CO2 decrease of ~10 ppm observed in the Law Dome records at this time. However, this event is not significant in the Dronning Maud Land ice core (Siegenthaler et al., Tellus, 2005), probably due to more extensive smoothing of gas records in that core. Similar rapid changes of CO2 at other times in the WAIS Divide record need to be confirmed with higher-resolution studies.
We also found that our WAIS Divide CO2 data are slightly higher than those of Law Dome, by 3~5 ppm over most of the record. It is not clear whether the offset is due to real variability between ice cores or an analytical offset. We are participating in an international laboratory intercalibration to determine the origin of the offset. Several WDC05-A and Law Dome ice samples are being shared and will be analyzed for data comparison with CSIRO (the Commonwealth Scientific and Industrial Research Organisation, Australia).
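The quoted ~0.8 ppm precision is a standard error of the mean over replicate samples from the same depth. As a minimal sketch (the four replicate values below are invented, not measurements from the core):

```python
import numpy as np

# hypothetical replicate CO2 measurements (ppm) from a single depth
replicates = np.array([281.9, 282.6, 281.4, 282.1])

mean_co2 = replicates.mean()
# standard error of the mean: sample standard deviation / sqrt(number of replicates)
sem = replicates.std(ddof=1) / np.sqrt(len(replicates))
```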
New insights for the hydrology of the Rhine based on the new generation climate models
NASA Astrophysics Data System (ADS)
Bouaziz, Laurène; Sperna Weiland, Frederiek; Beersma, Jules; Buiteveld, Hendrik
2014-05-01
Decision makers base their choices of adaptation strategies on climate change projections and their associated hydrological consequences. New insights into climate change gained from the new generation of climate models belonging to the IPCC 5th assessment report may influence (the planning of) adaptation measures and/or future expectations. In this study, hydrological impacts of climate change as projected under the new generation of climate models were assessed for the Rhine. To this end we downscaled 31 General Circulation Models (GCMs), developed as part of the Coupled Model Intercomparison Project Phase 5 (CMIP5), using an advanced Delta Change Method for the Rhine basin. Changes in mean monthly, maximum and minimum flows at Lobith were derived with the semi-distributed hydrological model HBV of the Rhine. The projected changes were compared to changes previously obtained in the trans-boundary project Rheinblick using eight CMIP3 GCMs and Regional Climate Models (RCMs) for emission scenario A1B. All eight selected CMIP3 models (scenario A1B) predicted for 2071-2100 a decrease in mean monthly flows between June and October. Similar decreases were found for some of the 31 CMIP5 models for Representative Concentration Pathways (RCPs) 4.5, 6.0 and 8.5. However, under each RCP there were also models that projected an increase in mean flows between June and October, and on average the decrease was smaller than for the eight CMIP3 models. For 2071-2100, the mean annual minimum 7-day discharge also decreased less in the CMIP5 model simulations than was projected in CMIP3. When assessing the response of mean monthly flows in the CMIP5 simulations with the CSIRO-Mk3-6-0 and HadGEM2-ES models with respect to initial conditions and RCPs, it was found that natural variability plays a dominant role in the near future (2021-2050), while changes in mean monthly flows are dominated by the radiative forcing in the far future (2071-2100).
According to the RCP 8.5 model simulations, the change in mean monthly flow from May to November may be half the change projected under RCP 4.5. From January to March, RCP 8.5 simulations projected larger changes in mean monthly flows than RCP 4.5 simulations. These new insights based on the CMIP5 simulations imply that, for the Rhine, mean flows and low-flow extremes might not decrease as much in summer as was expected under CMIP3. Stresses on water availability during summer are therefore also smaller than expected from CMIP3.
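A Delta Change Method in its most basic multiplicative form rescales an observed series by the model's relative change between the reference and future periods (the "advanced" variant used in the study additionally treats different parts of the distribution differently). The numbers below are invented for illustration:

```python
import numpy as np

# hypothetical observed mean flows (m3/s) at a gauge for one calendar month
observed = np.array([850.0, 920.0, 780.0])

model_reference = 700.0  # GCM mean for the same month, reference period
model_future = 630.0     # GCM mean for the same month, future period

# multiplicative delta change: apply the model's relative change to the observations
change_factor = model_future / model_reference
future_flows = observed * change_factor
```

The observed record thus keeps its own variability, while the climate signal comes entirely from the model's change factor.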
NASA Astrophysics Data System (ADS)
Furukawa, J.; Takada, T.; Monma, D.; Lam, L. T.
The UltraBattery was invented by CSIRO Energy Technology in Australia and has been developed and produced by the Furukawa Battery Co., Ltd., Japan. This battery is a hybrid energy-storage device which combines a supercapacitor and a lead-acid battery in a single unit cell, taking the best from both technologies without the need for extra, expensive electronic controls. The capacitor enhances the power and lifespan of the lead-acid battery because it acts as a buffer during high-rate discharging and charging, thus enabling the battery to provide and absorb charge rapidly during vehicle acceleration and braking. Laboratory results for the prototype valve-regulated UltraBatteries show that the capacity, power, available energy, cold cranking and self-discharge of these batteries have met, or exceeded, all the respective performance targets set for both minimum and maximum power-assist HEVs. The cycling performance of the UltraBatteries under micro-, mild- and full-HEV duties is at least four times longer than that of state-of-the-art lead-acid batteries. Importantly, the cycling performance of UltraBatteries has proven to be comparable to, or even better than, that of Ni-MH cells. In the field trial of UltraBatteries in the Honda Insight HEV, the vehicle has surpassed 170,000 km and the batteries are still in a healthy condition. Furthermore, the UltraBatteries demonstrate very good acceptance of the charge from regenerative braking even at high state-of-charge (e.g., 70%) during driving. Therefore, no equalization charge was required for the UltraBatteries during the field trial. The HEV powered by UltraBatteries gives slightly higher fuel consumption (4.16 vs. 4.05 L/100 km) and CO2 emissions (98.8 vs. 96 g km^-1) compared with that powered by Ni-MH cells. There are no differences in driving experience between the Honda Insight powered by UltraBatteries and by Ni-MH cells.
Given such comparable performance, the UltraBattery pack costs considerably less - only 20-40% of that of the Ni-MH pack by one estimate. In parallel with the field trial, a similar 144-V valve-regulated UltraBattery pack was also evaluated under simulated medium-HEV duty in our laboratories. In this study, the laboratory performance of the 144-V valve-regulated UltraBattery pack under simulated medium-HEV duty and that of the recently developed flooded-type UltraBattery under micro-HEV duty will be discussed. The flooded-type UltraBattery is expected to be favorable to the micro-HEVs because of reduced cost compared with the equivalent valve-regulated counterpart.
Coupling photogrammetric data with DFN-DEM model for rock slope hazard assessment
NASA Astrophysics Data System (ADS)
Donze, Frederic; Scholtes, Luc; Bonilla-Sierra, Viviana; Elmouttie, Marc
2013-04-01
Structural and mechanical analyses of the rock mass are key components of rock slope stability assessment. The complementary use of photogrammetric techniques [Poropat, 2001] and coupled DFN-DEM models [Harthong et al., 2012] provides a methodology that can be applied to complex 3D configurations. The DFN-DEM formulation [Scholtès & Donzé, 2012a,b] was chosen for modeling since it can explicitly take the fracture sets into account. Analyses conducted in 3D can produce very complex and unintuitive failure mechanisms. Therefore, a modeling strategy must be established in order to identify the key features which control the stability. For this purpose, a realistic case is presented to show the overall methodology from photogrammetry acquisition to mechanical modeling. By combining Sirovision and YADE Open DEM [Kozicki & Donzé, 2008, 2009], it can be shown that even for large camera-to-rock-slope ranges (tested at about one kilometer), the accuracy of the data is sufficient to assess the role of the structures in the stability of a jointed rock slope. In this case, on-site stereo pairs of 2D images were taken to create 3D surface models. Then, digital identification of structural features in the unstable block zone was processed with the Sirojoint software [Sirovision, 2010]. After acquiring the numerical topography, the 3D digitized and meshed surface was imported into the YADE Open DEM platform, where the studied rock mass was defined as a closed (manifold) volume bounding the numerical model. The discontinuities were then imported into the model as meshed planar elliptic surfaces. The model was then submitted to gravity loading. During this step, high values of cohesion were assigned to the discontinuities in order to avoid failure or block displacements triggered by inertial effects.
To assess the respective roles of the pre-existing discontinuities in the block stability, different configurations were tested, as well as different degrees of fracture persistence, in order to assess the possible contribution of rock bridges to the development of the failure surface. It is believed that the proposed methodology can bring valuable complementary information for rock slope stability analysis in the presence of complex fracture systems for which a classical "Factor of Safety" is difficult to express. References: • Harthong B., Scholtès L. & Donzé F.V., Strength characterization of rock masses, using a coupled DEM-DFN model, Geophysical Journal International, doi: 10.1111/j.1365-246X.2012.05642.x, 2012. • Kozicki J. & Donzé F.V., YADE-OPEN DEM: an open-source software using a discrete element method to simulate granular material, Engineering Computations, 26(7):786-805, 2009. • Kozicki J. & Donzé F.V., A new open-source software developed for numerical simulations using discrete modeling methods, Comp. Meth. in Appl. Mech. and Eng., 197:4429-4443, 2008. • Poropat G.V., New methods for mapping the structure of rock masses. In Proceedings, Explo 2001, Hunter Valley, New South Wales, 28-31 October 2001, pp. 253-260, 2001. • Scholtès L. & Donzé F.V., Modelling progressive failure in fractured rock masses using a 3D discrete element method, International Journal of Rock Mechanics and Mining Sciences, 52:18-30, 2012a. • Scholtès L. & Donzé F.V., A DEM model for soft and hard rocks: role of grain interlocking on strength, J. Mech. Phys. Solids, doi: 10.1016/j.jmps.2012.10.005, 2012b. • Sirovision, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Siro3D Sirovision 3D Imaging Mapping System Manual, Version 4.1, 2010.
Nearby Hot Stars May Change Our View of Distant Sources
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-07-01
As if it weren't enough that quasars, the distant and bright nuclei of galaxies, twinkle of their own accord due to internal processes, nature also provides another complication: these distant radio sources can also appear to twinkle because of intervening material between them and us. A new study has identified a possible source for the material getting in the way. Unexplained Variability. A Spitzer infrared view of the Helix Nebula shows ionized streamers of gas extending radially outward from the central star [NASA/JPL-Caltech/Univ. of Ariz.]. Distant quasars occasionally display extreme scintillation, twinkling with variability timescales shorter than a day. This intra-day variability is much greater than we can account for with standard models of the interstellar medium lying between the quasar and us. So what could cause this extreme scattering instead? The first clue to this mystery came from the discovery of strong variability in the radio source PKS 1322-110. In setting up follow-up observations of this object, Mark Walker (Manly Astrophysics, Australia) and collaborators noticed that, in the plane of the sky, PKS 1322-110 lies very near the bright star Spica. Could this be coincidence, or might this bright foreground star have something to do with the extreme scattering observed? A diagram by the authors explains the intra-day radio source variability as the result of intervening filaments surrounding a hot star [M. Walker/CSIRO/Manly Astrophysics]. Swarms of Clumps. Walker and collaborators put forward a hypothesis: perhaps the ultraviolet photons of nearby hot stars ionize plasma around them, which in turn causes the extreme scattering of the distant background sources. As a model, the authors consider the Helix Nebula, in which a hot, evolved star is surrounded by cool globules of molecular hydrogen gas.
The radiation from the star hits these molecular clumps, dragging them into long radial streamers and ionizing their outer skins. Though the molecular clumps in the Helix Nebula were thought to have formed only as the star evolved late in its lifetime, Walker and collaborators are now suggesting that all stars, regardless of spectral type or evolutionary stage, may be surrounded by swarms of tiny molecular clumps. Around stars that are hot enough, these clumps become the ionized plasma streamers that can cause interference with the light traveling to us from distant sources. Significant Mass. To test this theory, Walker and collaborators explore observations of two distant radio quasars that have both exhibited intra-day variability over many years of observations. The team identified a hot A-type star near each of these two sources: J1819+3845 has Vega nearby, and PKS 1257-326 has Alhakim. Plots of the locations of stars along the line of sight to the two quasars show that both have a nearby hot star radially within 2 pc: Vega (z = 7.7 pc) and Alhakim (z = 18 pc), respectively [Walker et al. 2017]. By modeling the systems of the sources and stars, the authors show that the size, location, orientation, and numbers of plasma concentrations necessary to explain the observations are all consistent with an environment similar to that of the Helix Nebula. Walker and collaborators find that the total mass in the molecular clumps surrounding the two stars would need to be comparable to the mass of the stars themselves. If this picture is correct, and if all stars are indeed surrounded by molecular clumps like these, then a substantial fraction of the mass of our galaxy could be contained in these clumps. Besides explaining distant quasar scintillation, this idea would therefore have a significant impact on our overall understanding of how mass in galaxies is distributed.
More observations of twinkling quasars are the next step toward confirming this picture. Citation: Mark A. Walker et al. 2017 ApJ 843 15. doi:10.3847/1538-4357/aa705c
NASA Astrophysics Data System (ADS)
Price, D. T.; Joyce, L. A.; McKenney, D. W.
2009-12-01
Projections of future climate simulated by four state-of-the-art general circulation models (GCMs), namely the U.S. NCAR CCSM 3.0, Canadian CGCM 3.1, Australian CSIRO Mk. 3.5 and Japanese MIROC 3.2, forced by each of the IPCC AR4 SRA2, SRB1 and SRA1B greenhouse gas (GHG) emissions scenarios, were downscaled for Canada and the continental USA. For each GCM projection, monthly climate values for a rectangle covering North America were interpolated using ANUSPLIN (e.g., Hutchinson 1995) to a common 0.0833° geographic grid. The resulting 12 high-resolution scenarios provide projected change factors for monthly solar radiation, windspeed, vapor pressure, air temperature and precipitation for the 21st century, referenced to the averages of simulated monthly means for 1961-1990. The 12 interpolated scenario data sets were subjected to a meta-analysis. Data for each projected variable of each climate scenario were averaged over three consecutive 30-year periods (starting in 2011) to create scenario maps of changes in annual and seasonal means. The contiguous 48 U.S. states were grouped into seven regions based on the classification of Bailey (1994), with Alaska forming an eighth region, while Canada was divided into twelve regions based on the Canadian Terrestrial Ecozones (Wiken, 1986). In each region, data were spatially averaged (with area weighting) and used to create graphs and summary tables of annual and seasonal trends, including long-term changes in interannual variability. Overall, the meta-analysis showed remarkable agreement among the four GCMs, both in terms of their sensitivity to increasing GHG forcing (SRB1→SRA1B→SRA2) and in the relative magnitudes of the climate changes projected for each scenario in each region. Temperatures were projected to increase by 2-4 °C in the southern USA (summer) to as much as 4-8 °C in northern Canada and Alaska (winter minima) by the mid-2080s, relative to 2000.
Precipitation was projected to increase by 5-10% over the same period, but with distinct seasonal trends that differed among regions; one GCM projected significant decreases in precipitation in the southern USA. Solar radiation inputs were generally projected to decline slightly, showing consistent inverse relationships to projected precipitation changes, while vapor pressure generally increased, particularly in summer and particularly in coastal regions. Projected changes in interannual variability (based on ratios of predicted to observed standard deviations of annual and seasonal means for 2071-2100 and 1961-1990) were generally less consistent but often tended to decrease with increasing GHG forcing. The data sets will support national and regional climate change impacts studies, including the USDA Forest Service National Renewable Resource Assessment for 2010 and Canadian forest vulnerability assessment for the Canadian Council of Forest Ministers in 2011.
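The area-weighted spatial averaging used for the regional summaries is just a weighted mean over grid cells. A sketch with invented cell values (four cells standing in for a region):

```python
import numpy as np

# hypothetical projected temperature change (degrees C) and area (km^2) of four grid cells
delta_t = np.array([2.5, 3.0, 4.2, 3.8])
cell_area = np.array([1200.0, 800.0, 1500.0, 500.0])

# area-weighted regional mean change
regional_mean = np.average(delta_t, weights=cell_area)

# for comparison: the unweighted mean overweights small cells
plain_mean = delta_t.mean()
```

On a geographic grid, the weight for each cell would typically be proportional to the cosine of its latitude, or to the exact cell area.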
NASA Astrophysics Data System (ADS)
Whan, K. R.; Lindesay, J. A.; Timbal, B.; Raupach, M. R.; Williams, E.
2010-12-01
Australia’s natural environment is adapted to low rainfall availability and high variability, but human systems are less able to adapt to variability in the hydrological cycle. Understanding the mechanisms underlying drought persistence and severity is vital to contextualising future climate change. Multiple external forcings make the mechanisms of drought occurrence in south-eastern Australia (SEA) complex. The key influences on SEA climate are the El Niño-Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD), the Southern Annular Mode (SAM) and the sub-tropical ridge (STR); each of these large-scale climate modes (LSCM) has been studied widely. The need for research into the interactions among the modes has been noted [1], although to date this has received limited attention. Relationships between LSCM and hydrometeorological variability are nonlinear, making the linearity assumptions underlying common statistical techniques (e.g. correlation, principal components analysis) questionable. In the current research, a statistical technique that can deal with nonlinear interactions is applied to a new dataset enabling a full examination of the Australian water balance. The Australian Water Availability Project (AWAP) dataset models the Australian water balance on a fine grid [2]. Hydrological parameters (e.g. soil moisture, evaporation, runoff) are modelled from meteorological data, allowing the complete Australian water balance (climate and hydrology) to be examined and the mechanisms of drought to be studied holistically. Classification and regression trees (CART) are a powerful regression-based technique capable of accounting for nonlinear effects. Although it has had limited previous application in climate research [3], this methodology is particularly informative in cases with multiple predictors and nonlinear relationships, such as climate variability.
Statistical relationships between variables are the basis for the decision rules in CART that are used to split the data into increasingly homogeneous groups. CART is applied to the AWAP dataset to identify the hydroclimatic regimes associated with various combinations of LSCM and the importance of each mode in producing the regime. Analysis of the LSCM is conducted on a range of hydroclimatic variables to assess the relative and combined influences of these LSCM on the Australian water balance. This gives information about the interactions between LSCM that are vital for specific hydroclimatic states (e.g. drought), and about which combinations of LSCM result in specific regimes. The dominant LSCM in different seasons and the relationships among the climate drivers have been identified. 1. Ummenhofer, C., et al., What causes southeast Australia's worst droughts? Geophysical Research Letters, 2009. 36: p. L04706. 2. Raupach, M., et al., Australian Water Availability Project (AWAP). CSIRO Marine and Atmospheric Research Component: Final Report for Phase 3. 2008. 3. Burrows, W., et al., CART Decision-Tree Statistical Analysis and Prediction of Summer Season Maximum Surface Ozone for the Vancouver, Montreal and Atlantic Regions of Canada. Journal of Applied Meteorology, 1995. 34: p. 1848-1862.
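The core CART operation is a binary split of the data at the predictor threshold that minimizes within-group variance. The sketch below implements that single-split step and applies it to synthetic data in which rainfall drops under a negative climate-mode index; the index, the step size and the noise level are all invented for illustration, not AWAP values:

```python
import numpy as np

def best_split(x, y):
    """Return the threshold on x that minimizes the total squared error
    of predicting y by the mean within each of the two resulting groups."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_threshold, best_sse = None, np.inf
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_threshold, best_sse = (xs[i - 1] + xs[i]) / 2.0, sse
    return best_threshold

rng = np.random.default_rng(3)
index = rng.uniform(-20.0, 20.0, 200)  # hypothetical climate-mode index
# hypothetical nonlinear response: a distinctly drier regime when the index is negative
rainfall = np.where(index < 0.0, 40.0, 70.0) + rng.normal(0.0, 5.0, 200)

threshold = best_split(index, rainfall)
```

A full CART recursively repeats this split inside each group; the sequence of chosen predictors and thresholds is what identifies which modes, and which combinations of modes, define a hydroclimatic regime.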
NASA Astrophysics Data System (ADS)
Danyushevsky, L.; Ryan, C.; Kamenetsky, V.; Crawford, A.
2001-12-01
Sulphide inclusions have been identified in olivine phenocrysts (and in one case in a spinel phenocryst) in primitive volcanic rocks from mid-ocean ridges, subduction-related island arcs and backarc basins. These inclusions represent droplets of an immiscible sulphide melt and are trapped by olivine crystals growing from silicate melts. Sulphide melt is usually trapped as separate inclusions; however, combined inclusions of sulphide and silicate melts have also been observed. Sulphide inclusions have rounded shapes and vary in size from several microns up to 100 microns in diameter. At room temperature, sulphide inclusions consist of several phases. These phases are formed as a result of crystallisation of the sulphide melt after it was trapped. Crystallisation occurs due to decreasing temperature in the magma chamber after trapping and/or when magma ascends from the magma chamber during eruptions. In all studied sulphides, three different phases can be identified: a high-Fe, low-Ni, low-Cu phase; a high-Fe, high-Ni, low-Cu phase; and a high-Fe, low-Ni, high-Cu phase. The low-Cu phases appear to be monomineralic, whereas the high-Cu phase is usually composed of a fine intergrowth of high- and low-Cu phases, resembling a quench 'spinifex' texture. Fe, Ni and Cu are the major elements in all sulphides studied. The amount of Ni decreases with decreasing forsterite content of the host olivine phenocryst, which is an index of the degree of silicate magma fractionation. Since the Ni content of the silicate magma decreases during fractionation, this indicates either that the immiscible sulfide melt remains in equilibrium with the silicate melt, continuously changing its composition during fractionation, or that the sulfide melt is continuously separated from the silicate melt during fractionation, with later-formed droplets having lower Ni contents due to the lower Ni content of the more strongly fractionated silicate melt.
Trace element contents of the sulfide inclusions were analysed on the proton microprobe at CSIRO in Sydney. The main trace elements in the sulfide inclusions are Zn, Pb, Ag and Se. Other trace elements are below detection limits, which are normally at the level of several ppm. Zn concentrations (120 ± 40 ppm) in the sulphides are similar to those in silicate melts. This indicates that separation of the sulfide melt does not affect the Zn contents of silicate melts. On the contrary, Ag (30 ± 10 ppm) and Pb (40 ± 10 ppm) contents in the sulphides are at least an order of magnitude higher than in the silicate melt, and thus separation of the immiscible sulfide melt can significantly decrease the Pb and Ag contents of the silicate magma. The widespread occurrence of sulfide inclusions, which have also been described in olivine phenocrysts from ocean island basalts, indicates that mantle-derived magmas are commonly saturated with reduced sulfur at low pressure.
Attribution and Characterisation of Sclerophyll Forested Landscapes Over Large Areas
NASA Astrophysics Data System (ADS)
Jones, Simon; Soto-Berelov, Mariela; Suarez, Lola; Wilkes, Phil; Woodgate, Will; Haywood, Andrew
2016-06-01
This paper presents a methodology for the attribution and characterisation of sclerophyll forested landscapes over large areas. First we define a set of woody vegetation data primitives (e.g. canopy cover, leaf area index (LAI), bole density, canopy height), which are then scaled up using multiple remote sensing data sources to characterise and extract landscape woody vegetation features. The advantage of this approach is that vegetation landscape features can be described from composites of these data primitives. The proposed data primitives act as building blocks for the re-creation of past woody characterisation schemes, as well as allowing re-compilation to support present and future policy, management and decision-making needs. Three main research sites were attributed, representative of different sclerophyll woody vegetated systems (Box Ironbark forest; Mountain Ash forest; Mixed Species foothills forest). High-resolution hyperspectral and full-waveform LiDAR data were acquired over the three research sites. At the same time, land management agencies (Victorian Department of Environment, Land, Water and Planning) and researchers (RMIT, CRC for Spatial Information and CSIRO) conducted fieldwork to collect structural and functional measurements of vegetation, using traditional forest mensuration transects and plots, terrestrial lidar scanning and high temporal resolution in-situ autonomous laser (VegNet) scanners. Results are presented of: 1) inter-comparisons of LAI estimations made using ground-based hemispherical photography, LAI-2200 PCA, CI-110 and terrestrial and airborne laser scanners; 2) canopy height and vertical canopy complexity derived from airborne LiDAR, validated using ground observations; and 3) time-series characterisation of land cover features. 1. Accuracy targets for remotely sensed LAI products to match ground-based estimates are ±0.5 LAI or 20% maximum (CEOS/GCOS), with a new aspirational target of 5%.
In this research we conducted a total of 67 ground-based method-to-method pairwise comparisons across 11 plots in five sites, incorporating the previously mentioned LAI methods. Out of the 67 comparisons, 29 had an RMSE ≥ 0.5 LAIe. This has important implications for the validation of remotely sensed products, since ground-based techniques themselves exhibit LAI variations greater than internationally recommended guidelines for satellite product accuracies. 2. Two methods of canopy height derivation are proposed and tested over a large area (4 million ha). The 99th-percentile maximum height achieved an RMSE of 6.6%, whilst the 95th-percentile dominant height achieved an RMSE of 10.3%. Vertical canopy complexity (i.e. the number of forest strata) was calculated as the local maxima of vegetation density within the LiDAR canopy profile, determined using a cubic spline smoothing of Pgap; this was then validated against in-situ and LiDAR observations of canopy strata, with an RMSE of 0.39 canopy layers. 3. Preliminary results are presented of land cover characterisation using LandTrendr analysis of Landsat LEDAPS data. kNN is then used to link these features to a dense network of 800 field plot sites.
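The percentile height metrics and the percentage RMSE quoted above are simple to compute from a column of LiDAR return heights. The return heights below are synthetic and the helper is a hypothetical name, shown only to make the definitions concrete:

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic first-return heights (m) for one plot
heights = rng.gamma(shape=4.0, scale=6.0, size=5000)

h99 = np.percentile(heights, 99)  # proxy for maximum canopy height
h95 = np.percentile(heights, 95)  # proxy for dominant canopy height

def rmse_percent(predicted, observed):
    """RMSE expressed as a percentage of the mean observed value."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.sqrt(np.mean((predicted - observed) ** 2)) / observed.mean()

# toy check against two 'field-measured' heights
error = rmse_percent([20.0, 30.0], [22.0, 28.0])
```

Higher percentiles are more robust than the literal maximum return, which is easily contaminated by birds, noise returns or isolated emergent trees.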
NASA Astrophysics Data System (ADS)
Fenta Mekonnen, Dagnenet; Disse, Markus
2018-04-01
Climate change is becoming one of the most threatening issues for the world today in terms of its global context and its response to environmental and socioeconomic drivers. However, large uncertainties between different general circulation models (GCMs) and their coarse spatial resolutions make it difficult to use the outputs of GCMs directly, especially for sustainable water management at the regional scale, which introduces the need for downscaling techniques using a multimodel approach. This study aims (i) to evaluate the comparative performance of two widely used statistical downscaling techniques, namely the Long Ashton Research Station Weather Generator (LARS-WG) and the Statistical Downscaling Model (SDSM), and (ii) to downscale future climate scenarios of precipitation, maximum temperature (Tmax) and minimum temperature (Tmin) of the Upper Blue Nile River basin at finer spatial and temporal scales to suit further hydrological impact studies. The calibration and validation results illustrate that both downscaling techniques (LARS-WG and SDSM) have comparable and good ability to simulate the current local climate variables. Further quantitative and qualitative comparative performance evaluation was done with equally weighted and varying weights of statistical indexes, for precipitation only. The evaluation showed that SDSM using the CanESM2 CMIP5 GCM was able to reproduce more accurate long-term mean monthly precipitation, but LARS-WG performed best in capturing the extreme events and the distribution of daily precipitation over the whole data range. Six selected multimodel CMIP3 GCMs, namely HadCM3, GFDL-CM2.1, ECHAM5-OM, CCSM3, MRI-CGCM2.3.2 and CSIRO-MK3, were used for downscaling climate scenarios with the LARS-WG model. The ensemble mean of the six GCMs showed an increasing trend for precipitation, Tmax and Tmin.
The relative change in precipitation ranged from 1.0 to 14.4 %, while the mean annual Tmax may increase by 0.4 to 4.3 °C and the mean annual Tmin by 0.3 to 4.1 °C. The individual result of the HadCM3 GCM agrees well with the ensemble mean result. HadCM3 from CMIP3 using the A2a and B2a scenarios and canESM2 from the CMIP5 GCMs under the RCP2.6, RCP4.5 and RCP8.5 scenarios were downscaled by SDSM. The results from the two GCMs under five different scenarios agree on the increasing direction of the three climate variables (precipitation, Tmax and Tmin). The relative change of the downscaled mean annual precipitation ranges from 2.1 to 43.8 %, while the mean annual Tmax and Tmin may increase in the range from 0.4 to 2.9 °C and from 0.3 to 1.6 °C respectively.
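The relative changes reported above are percentage changes of the downscaled scenario means with respect to the baseline, averaged across the GCM ensemble. A minimal sketch with purely illustrative numbers (the values below are invented placeholders, not the study's data):

```python
# Hypothetical downscaled mean annual precipitation (mm): baseline vs.
# future-scenario means for three of the GCMs named above. The numbers
# are invented placeholders, not results from the study.
baseline = {"HadCM3": 1420.0, "GFDL-CM2.1": 1398.0, "ECHAM5-OM": 1405.0}
future = {"HadCM3": 1520.0, "GFDL-CM2.1": 1450.0, "ECHAM5-OM": 1490.0}

def relative_change_pct(fut, base):
    """Relative change (%) of a future-scenario mean vs. the baseline."""
    return 100.0 * (fut - base) / base

changes = {g: relative_change_pct(future[g], baseline[g]) for g in baseline}

# Ensemble mean change across the (here, three) GCMs.
ensemble_mean_change = sum(changes.values()) / len(changes)
```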
Novel technique to ensure battery reliability in 42-V PowerNets for new-generation automobiles
NASA Astrophysics Data System (ADS)
Lam, L. T.; Haigh, N. P.; Phyland, C. G.; Huynh, T. D.
The proposed 42-V PowerNet in automobiles requires the battery to provide a large number of shallow discharge-charge cycles at a high rate. High-rate discharge is necessary for engine cranking, while high-rate charge is associated with regenerative braking. The battery will therefore operate at these high rates in a partial-state-of-charge condition: 'HRPSoC duty'. Under simulated HRPSoC duty, it is found that the valve-regulated lead-acid (VRLA) battery fails prematurely due to the progressive accumulation of lead sulfate, mainly on the surfaces of the negative plates. This is because the lead sulfate layer cannot be converted efficiently back to sponge lead during charging either from the engine or from the regenerative braking. Eventually, this layer of lead sulfate develops to such an extent that the effective surface area of the plate is reduced markedly and the plate can no longer deliver the high cranking current demanded by the automobile. The objective of this study is to develop and optimize a pulse-generation technique to minimize the development of lead sulfate layers on the negative plates of VRLA batteries subjected to HRPSoC duty. The technique involves the application of sets of charging pulses of different frequency. It is found that the cycle-life performance of VRLA batteries is enhanced markedly when d.c. pulses of high frequency are used. For example, battery durability is raised from ~10 600 cycles (no pulses) to 32 000 cycles with pulses of high frequency. Two key factors contribute to this improvement. The first factor is localization of the charging current on the surfaces of the plates: the higher the frequency, the greater the amount of current concentrated on the plate surface. This phenomenon is known as the 'skin effect', as only the outer 'skin' of the plate is effectively carrying the current. 
The second factor is delivery of sufficient charge to the Faradaic resistance of the plate to compensate for the energy lost to inductance and double-layer capacitance effects. The Faradaic resistance represents the electrochemical reaction, i.e., the conversion of lead sulfate to lead. The inductance simply results from the connections between the cables and the terminals of the battery, or between the terminals, bus-bars and the lugs of the plates. The capacitance arises from the double layer which exists at the interface between the plate and the electrolyte solution. These findings provide a demonstration and a scientific explanation of the benefit of superimposed pulsed-current charging in suppressing the sulfation of negative plates in VRLA batteries operated under 42-V PowerNet and hybrid electric vehicle duties. A Novel Pulse™ device has been developed by CSIRO. The device can be programmed to suit various applications and can be miniaturized for encapsulation in the battery cover.
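The 'skin effect' argument can be illustrated with the classical skin-depth formula for a homogeneous conductor, δ = √(2ρ/(ωμ)). This is a deliberate simplification (a porous battery plate is not a homogeneous conductor, and the resistivity below is only an approximate figure for lead metal), but it shows why higher pulse frequencies concentrate current in a thinner outer layer:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (H/m)
RHO_LEAD = 2.2e-7      # approximate resistivity of lead metal (ohm·m)

def skin_depth(frequency_hz, resistivity=RHO_LEAD, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2*rho / (omega * mu)).

    Higher frequency -> smaller delta, i.e. the current is carried
    by a thinner outer 'skin' of the conductor.
    """
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2 * resistivity / (omega * MU0 * mu_r))

# Skin depth shrinks as pulse frequency rises:
for f in (50, 1e3, 100e3):
    print(f"{f:>8.0f} Hz -> {skin_depth(f) * 1e3:.2f} mm")
```

The inverse-square-root dependence on frequency is the point: moving from mains-scale to kilohertz-scale pulses reduces the conduction depth by more than a factor of four.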
Using new technologies to promote weight management: a randomised controlled trial study protocol.
Jane, Monica; Foster, Jonathan; Hagger, Martin; Pal, Sebely
2015-05-27
Over the last three decades, overweight and obesity and the associated health consequences have become global public health priorities. Methods that have been tried to address this problem have not had the desired impact, suggesting that other approaches need to be considered. One of the lessons learned throughout these attempts is that permanent weight loss requires sustained dietary and lifestyle changes, yet adherence to weight management programs has often been noted as one of the biggest challenges. This trial aims to address this issue by examining whether social media, as a potential health promotion tool, will improve adherence to a weight management program. To test the effectiveness of this measure, the designated program will be delivered via the popular social networking site Facebook, and compared to a standard delivery method that provides exactly the same content but which is communicated through a pamphlet. The trial will be conducted over a period of twelve weeks, with a twelve week follow-up. Although weight loss is expected, this study will specifically investigate the effectiveness of social media as a program delivery method. The program utilised will be one that has already been proven to achieve weight loss, namely The CSIRO Total Wellbeing Diet. This project will be conducted as a 3-arm randomised controlled trial. One hundred and twenty participants will be recruited from the Perth community, and will be randomly assigned to one of the following three groups: the Facebook group, the pamphlet group, or a control group. The Facebook Group will receive the weight management program delivered via a closed group in Facebook, the Pamphlet Group will be given the same weight management program presented in a booklet, and the Control Group will follow the Australian Dietary Guidelines and the National Physical Activity Guidelines for Adults as usual care. 
Change in weight, body composition and waist circumference will be the initial indicators of adherence to the program. Secondary outcome measures will be blood glucose, insulin, blood pressure, arterial stiffness, physical activity, eating behaviour, mental well-being (stress, anxiety, and depression), social support, self-control, self-efficacy, Facebook activity, and program evaluation. It is expected that this trial will support the use of social media, a source of social support and information sharing, as a delivery method for weight management programs, enhancing the reduction in weight expected from dietary and physical activity changes. Facebook is a popular, easy-to-access and cost-effective online platform that can be used to assist the formation of social groups, and could be translated into health promotion practice relatively easily. If the predicted findings are confirmed, social media will provide an invaluable resource for health professionals and patients alike. Australian New Zealand Clinical Trials Register (ANZCTR): ACTRN12614000536662. Date registered: 21 May 2014.
Installation and operation of a large scale RAPS system in Peru
NASA Astrophysics Data System (ADS)
Cole, J. F.
In 1997, International Lead Zinc Research Organization Inc. (ILZRO), Solar Energy Industries Association (SEIA), and the Ministry of Energy and Mines (MEM) of Peru signed a Memorandum of Understanding to facilitate the installation of hybrid remote area power supply (RAPS) systems in the Amazon region of Peru. Many remote villages in this vast region have either no or limited electricity supplied by diesel generators running a few hours per day. Subsequently, ILZRO sponsored the engineering design of the hybrid RAPS system and SEIA supported a socio-economic study to determine the sustainability of such systems and the locations for pilot installations. In mid-1998, the Peruvian government approved the design of the system. ILZRO then began efforts to obtain governmental and inter-governmental funding to supplement its own funds to underwrite the cost of manufacture and installation of the systems in two villages in the Amazon region. Additional major funding has been received from the Global Environmental Facility (GEF) administered by the United Nations Development Program (UNDP) and from the Common Fund for Commodities (CFC). Funds have also been received from the US Department of Energy, the International Greenhouse Partnership (Australia) and the Peruvian government. The RAPS system consists of modules designed to provide 150 kW h per day of utility grade ac electricity over a 24 h period. Each module contains a diesel generator, battery bank using heavy-duty 2 V VRLA GEL batteries, a battery charger, a photovoltaic array and an ac/dc inverter. The batteries and electrical components are housed in modified shipping containers. The modules can be installed with a new generator or retrofitted to an existing generator. The charging and discharging regime of the batteries has been recommended by a study carried out by CSIRO, which has simulated the RAPS operation. 
The system will employ a partial-state-of-charge (PSOC) regime in order to optimize the life of the batteries, which have a projected life of 8-10 years. A remote monitoring system will consist of a satellite link between each of the remote area power systems and one or more central hosts. The system operator will be able to obtain actual operational status of the system and will be able to change set points and to force operation of certain functions in order to test the system. Preliminary cost analyses indicate that such RAPS systems are more economically attractive to provide electricity to remote villages than other alternatives, including 24 h diesel generation and grid extension. The past 5 years have provided a number of lessons learned, particularly related to dealing with government agencies in a developing country, overcoming logistical problems such as shipping long distances and dealing with difficult climate and terrain. Despite difficulties encountered, the promise of RAPS systems as a rapidly growing market for lead-acid batteries appears to be bright given the demand for sustainable remote electrification.
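As a rough illustration of how the 150 kW h per day module figure and the PSOC operating regime interact when sizing a battery bank, the sketch below uses entirely hypothetical parameters: the battery fraction, PSOC band and inverter efficiency are assumptions for illustration, not values from the project.

```python
def battery_bank_kwh(daily_load_kwh=150.0, battery_fraction=0.5,
                     psoc_low=0.40, psoc_high=0.80, inverter_eff=0.92):
    """Rough battery-bank sizing for one RAPS module.

    All parameters besides the 150 kWh/day load are hypothetical:
    `battery_fraction` is the share of the daily load cycled through
    the battery (the rest served directly by PV/diesel), and the
    PSOC band [psoc_low, psoc_high] is the usable charge window the
    batteries are held in to extend their life.
    """
    cycled_kwh = daily_load_kwh * battery_fraction / inverter_eff
    usable_window = psoc_high - psoc_low
    return cycled_kwh / usable_window

# Nominal bank capacity under these illustrative assumptions:
print(f"{battery_bank_kwh():.0f} kWh nominal battery bank")
```

The design trade-off the sketch makes visible: narrowing the PSOC window to prolong battery life directly inflates the nominal capacity (and cost) of the bank.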
FOREWORD: International Conference on Planetary Boundary Layer and Climate Change
NASA Astrophysics Data System (ADS)
Djolov, G.; Esau, I.
2010-05-01
One of the greatest achievements of climate science has been the establishment of the concept of climate change on a multitude of time scales. The Earth's complex climate system does not allow a straightforward interpretation of the dependencies between external parameter perturbations, internal stochastic system dynamics and the long-term system response. The latter is usually referred to as climate change in a narrow sense (IPCC, 2007). The focused international conference "Planetary Boundary Layers and Climate Change" addressed only the time scales and dynamical aspects of climate change with possible links to turbulent processes in the Planetary Boundary Layer (PBL). Although limited, the conference topic is by no means narrow. One should clearly understand that the PBL is the layer where 99% of the biosphere and of human activity is concentrated. The PBL is the layer where the energy fluxes, which are followed by changes in the cryosphere and other known feedbacks, are maximized. At the same time, the PBL processes are of a naturally small scale. What is the averaged long-term effect of the small-scale processes on the long-term climate dynamics? Can this effect be recognized in existing long-term paleo-climate data records? Can it be modeled? What is the current status of our theoretical understanding of this effect? What is the sensitivity of the climate model projections to the representation of small-scale processes? Are there significant indirect effects, e.g. through transport of chemical components, of the PBL processes on climate? These and other linked questions were addressed during the conference. The Earth's climate has changed many times during the planet's history, with events ranging from ice ages to long periods of warmth. Historically, natural factors such as the amount of energy released from the Sun, volcanic eruptions and changes in the Earth's orbit have affected the Earth's climate. 
Beginning late in the 18th century, human activities associated with the Industrial Revolution, such as the addition of greenhouse gases and aerosols, have changed the composition of the atmosphere. These changes are likely to have influenced temperature, precipitation, storms and sea level (IPCC, 2007). However, these features of the climate also vary naturally, so determining what fraction of climate changes is due to natural variability versus human activities is challenging and not yet a solved problem. Africa is vulnerable to climate change because its ability to adapt and mitigate is considerably constrained (IPCC, 2007). Climate change may impede a nation's ability to achieve sustainable development and the Millennium Development Goals; as a consequence, Africa (particularly sub-tropical Africa) will experience increased levels of water stress and reduced agricultural yields of up to 50% by 2020. An example of the scale of the region's vulnerability was demonstrated during the last very dry year (1991/92), when 30% of the southern African population was put on food aid and more than one million people were displaced. Climate change in Africa is essentially dependent on our understanding of the PBL processes, both due to the indispensable role of atmospheric convection in the African climate and due to its tele-connections to other regions, e.g. the tropical Pacific and Indian monsoon regions. Although numerous publications attribute the observed changes to one or another modification of the convective patterns, further progress is impeded by imperfections of the small-scale process parameterizations in the models. 
The uncertainties include parameter uncertainties of known physical processes, which could be reduced through better observations and modelling, as well as uncertainties in our knowledge of the physical processes themselves (structural uncertainties), which can be reduced only through theoretical development and the design of new, original observations and experiments (Oppenheimer et al., Science, 2007). Arguably, the structural uncertainties are hard to reduce, and this could be one of the reasons for the slow progress in narrowing the climate model uncertainty range over the last 30 years (Knutti and Hegerl, Nature Geoscience, 2008). One of the most prominent structural uncertainties in the ongoing transient climate change is related to poor understanding, and hence incorrect modelling, of the turbulent physics and dynamics in the planetary boundary layer. The climate models continue to rely on physically incorrect boundary layer parameterizations (Cuxart et al., BLM, 2006), whose erroneous dynamical response may lead to significant abnormalities in simulated climate. At present, international efforts in the theoretical understanding of turbulent mixing have resulted in significant progress in turbulence simulation, measurements and parameterizations. However, this understanding has not yet found its way to the climate research community. Vice versa, climate research is not usually addressed by the boundary layer research community. The gap needs to be closed in order to complete the scientific basis of climate change studies. The focus of the forum could be formulated as follows: the planetary boundary layer determines several key parameters controlling the Earth's climate system, but being a dynamic sub-system, just a layer of turbulent mixing in the atmosphere and ocean, it is also controlled by the climate system and its changes. 
Such a dynamic relationship causes a planetary boundary layer feedback (PBL-feedback), which could be defined as the response of the surface air temperature to changes in the vertical turbulent mixing. The forum participants discussed both climatological and fluid-dynamic aspects of this response, in order to quantify their role in the Earth's transient heat uptake and its representation in climate models. The choice of the forum location and dates was motivated by the role of tropical oceans and convection in the climate system and by the prominent demonstration of the climate sensitivity to the ocean heat uptake observed off Cape Town. The international conference responded to the urgent need to advance our understanding of the complex climate system and to develop adequate measures for saving the planet from environmental disaster. It also fits well with the Republic of South Africa government's major political decision to place the responses to global change/climate change at the very top of its science and technology policy. The conference participants are grateful to the Norway Research Council and the National Research Foundation (NRF) RSA, who supported the conference through the project "Analysis and Possibility for Control of Atmospheric Boundary Layer Processes to Facilitate Adaptation to Environmental Changes", realized in the framework of the Programme for Research and Co-operation Phase II between the two countries. The contribution of the Kirstenbosch Biodiversity Institute and Botanical Gardens, Cape Town, in securing one of the most beautiful conference venues in the world and providing technical support is also highly appreciated. G. Djolov and I. 
Esau, Editors

Conference Organising Committee:
Djolov, G. (South Africa) - University of Pretoria
Esau, I. (Norway) - Nansen Environmental and Remote Sensing Center
Hewitson, B. (South Africa) - University of Cape Town
McGregor, J. (Australia) - CSIRO Marine and Atmospheric Research
Midgley, G. (South Africa) - South African National Botanical Institute
Mphepya, J. (South Africa) - South African Weather Service
Piketh, S. (South Africa) - University of the Witwatersrand
Pielke, R. (USA) - University of Colorado, Boulder
Pienaar, K. (South Africa) - University of the North West
Rautenbach, H. (South Africa) - University of Pretoria
Zilitinkevich, S. (Finland) - University of Helsinki

The conference was organized by the University of Pretoria and the Nansen Environmental and Remote Sensing Center, with support and sponsorship from the Norwegian Research Council (grant N 197649) and the Kirstenbosch Biodiversity Institute and Botanical Gardens.
NASA Astrophysics Data System (ADS)
Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario; Fubelli, Giandomenico; Giardino, Marco
2016-04-01
Encoding of geologic knowledge in formal languages is an ambitious task, aiming at the interoperability and organic representation of geological data, and the semantic characterization of geologic maps. Initiatives such as GeoScience Markup Language (last version is GeoSciML 4, 2015[1]) and INSPIRE "Data Specification on Geology" (an operative simplification of GeoSciML, last version is 3.0 rc3, 2013[2]), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG[3]), have been promoting information exchange of geologic knowledge. There have also been limited attempts to encode the knowledge in a machine-readable format, especially in the lithology domain (see e.g. the CGI_Lithology ontology[4]), but a comprehensive ontological model that connects the several knowledge sources is still lacking. This presentation concerns the "OntoGeonous" initiative, which aims at encoding geologic knowledge, as expressed through the standard vocabularies, schemas and data models mentioned above, through a number of interlinked computational ontologies, based on the languages of the Semantic Web and the paradigm of Linked Open Data. The initiative proceeds in parallel with a concrete case study, concerning the setting up of a synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap" (developed by the CNR Institute of Geosciences and Earth Resources, CNR IGG, Torino), where the description and classification of GeologicUnits has been supported by the modeling and implementation of the ontologies. 
We have devised a tripartite ontological model called OntoGeonous that consists of: 1) an ontology of the geologic features (in particular, GeologicUnit, GeomorphologicFeature, and GeologicStructure[5], modeled from the definitions and UML schemata of CGI vocabularies[6], GeoScienceML and INSPIRE, and aligned with the Planetary realm of NASA SWEET ontology[7]), 2) an ontology of the Earth materials (as defined by the SimpleLithology CGI vocabulary and aligned as a subclass of the Substance class in NASA SWEET ontology), and 3) an ontology of the MappedFeatures (as defined in the Representation sub-taxonomy of the NASA SWEET ontology). The latter correspond to the concrete elements of the map, with their geometry (polygons, lines) and geographical coordinates. The ontology model has been developed by taking into account applications primarily concerning the needs of geological mapping; nevertheless, the model is general enough to be applied to other contexts. In particular, we show how the automatic reasoning capabilities of the ontology system can be employed in tasks of unit definition and input filling of the map database and for supporting geologists in thematic re-classification of the map instances (e.g. for coloring tasks). ---------------------------------------- [1] http://www.geosciml.org [2] http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf [3] http://www.cgi-iugs.org/tech_collaboration/geoscience_terminology_working_group.html [4] https://www.seegrid.csiro.au/subversion/CGI_CDTGVocabulary/trunk/OwlWork/CGI_Lithology.owl [5] We are currently neglecting the encoding of the geologic events, left as a future work. [6] http://resource.geosciml.org/vocabulary/cgi/201211/ [7] Web site: https://sweet.jpl.nasa.gov, Di Giuseppe et al., 2013, SWEET ontology coverage for earth system sciences, http://www.ics.uci.edu/~ndigiuse/Nicholas_DiGiuseppe/Research_files/digiuseppe14.pdf; S. Barahmand et al. 
2009, A Survey on SWEET Ontologies and their Applications, http://www-scf.usc.edu/~taheriya/reports/csci586-report.pdf
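The core modeling pattern above, linking a concrete map element (a MappedFeature with its geometry) to the abstract geologic feature it denotes, can be sketched as plain RDF triples. The namespaces and identifiers below are illustrative stand-ins, not the project's actual IRIs; only the MappedFeature-to-GeologicUnit 'specification' link follows the GeoSciML pattern described in the text.

```python
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
GSML = "http://example.org/geosciml#"      # placeholder namespace
MAP = "http://example.org/geopiemonte#"    # placeholder namespace

# A mapped polygon on the map denotes ("specifies") a geologic unit;
# both identifiers are invented for illustration.
triples = [
    (MAP + "unit_017", RDF_TYPE, GSML + "GeologicUnit"),
    (MAP + "poly_0421", RDF_TYPE, GSML + "MappedFeature"),
    (MAP + "poly_0421", GSML + "specification", MAP + "unit_017"),
]

def to_ntriples(ts):
    """Serialize (subject, predicate, object) IRI triples as N-Triples."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in ts)

print(to_ntriples(triples))
```

In a real deployment these triples would be expressed with an RDF library and the published vocabularies, so that a reasoner can classify map instances (e.g. for the re-coloring tasks mentioned above); the hand-rolled serializer here only shows the shape of the data.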
Data Scientists ARE coming of age: but WHERE are they coming from?
NASA Astrophysics Data System (ADS)
Evans, N.; Bastrakova, I.; Connor, N.; Raymond, O.; Wyborn, L. A.
2013-12-01
The fourth paradigm of data intensive science is upon us: a new fundamental scientific methodology has emerged which is underpinned by the capability to analyse large volumes of data using advanced computational capacities. This combination is enabling earth and space scientists to respond to decadal challenges on issues such as the sustainable development of our natural resources, the impacts of climate change and protection from natural hazards. Fundamental to the data intensive paradigm is data that are readily accessible and capable of being integrated and amalgamated with other data, often from multiple sources. For many years Earth and Space science practitioners have been drowning in a data deluge. In many cases, either lacking confidence in their capability and/or not having the time or capacity to manage these data assets, they have called in the data professionals. However, such people rarely had domain knowledge of the data they were dealing with, and before long it emerged that although the 'containers' of data were now much better managed and documented, in reality the content was locked up and difficult to access, particularly for HPC environments where national to global scale problems were being addressed. Geoscience Australia (GA) is the custodian of over 4 PB of geoscientific data and is a key provider of evidence-based, scientific advice to government on national issues. Since 2011, in collaboration with the CSIRO Minerals Down Under Program and the National Computational Infrastructure, GA has begun a series of data intensive scientific research pilots that focussed on applying advanced ICT tools and technologies to enhance scientific outcomes for the agency, in particular, national scale analysis of data sets that can be up to 500 TB in size. 
As in any change program, a small group of innovators and early adopters took up the challenge of data intensive science and quickly showed that GA was able to use new ICT technologies to exploit an information-rich world, to undertake applied research and to deliver new business outcomes in ways that conventional technologies do not allow. The innovators clearly had the necessary skills to rapidly adapt to data intensive techniques. However, if we were to scale out to the rest of the organisation, we needed to quantify these skills. The Strategic People Development Section of GA agreed to: (i) conduct a capability analysis of the scientific staff that participated in the pilot projects, including a review of university and postgraduate training; and (ii) conduct a capability analysis of the technical groups involved in the pilot projects. The analysis identified the need for multi-disciplinary teams across the spectrum from pure scientists to pure ICT staff, along with a key hybrid role: the Data Scientist, who has greater capacity in mathematical, numerical modelling, statistics, computational skills, software engineering and spatial skills, and the ability to integrate data across multiple domains. To fill the emerging gap, GA is asking: how do we find or develop this capability? Can we successfully transform the Scientist or the ICT Professional? Are our educational facilities modifying their training? These questions are certainly leading GA to acknowledge, formalise and promote a continuum of skills and roles, changing our recruitment, re-assignment and Learning and Development strategic decisions.
Burnham, Samantha C; Bourgeat, Pierrick; Doré, Vincent; Savage, Greg; Brown, Belinda; Laws, Simon; Maruff, Paul; Salvado, Olivier; Ames, David; Martins, Ralph N; Masters, Colin L; Rowe, Christopher C; Villemagne, Victor L
2016-09-01
Brain amyloid β (Aβ) deposition and neurodegeneration have been documented in about 50-60% of cognitively healthy elderly individuals (aged 60 years or older). The long-term cognitive consequences of the presence of Alzheimer's disease pathology and neurodegeneration, and whether they have an independent or synergistic effect on cognition, are unclear. We aimed to characterise the long-term clinical and cognitive trajectories of healthy elderly individuals using a two-marker (Alzheimer's disease pathology and neurodegeneration) imaging construct. Between Nov 3, 2006, and Nov 25, 2014, 573 cognitively healthy individuals in Melbourne and Perth, Australia, (mean age 73·1 years [SD 6·2]; 58% women) were enrolled in the Australian Imaging, Biomarker and Lifestyle (AIBL) study. Alzheimer's disease pathology (A) was determined by measuring Aβ deposition by PET, and neurodegeneration (N) was established by measuring hippocampal volume using MRI. Individuals were categorised as A(-)N(-), A(+)N(-), A(+)N(+), or suspected non-Alzheimer's disease pathophysiology (A(-)N(+), SNAP). Clinical progression, hippocampal volume, standard neuropsychological tests, and domain-specific and global cognitive composite scores were assessed over 6 years of follow-up. Linear mixed effect models and a Cox proportional hazards model of survival were used to evaluate, compare, and contrast the clinical, cognitive, and volumetric trajectories of patients in the four AN categories. 50 (9%) healthy individuals were classified as A(+)N(+), 87 (15%) as A(+)N(-), 310 (54%) as A(-)N(-), and 126 (22%) as SNAP. APOE ε4 was more frequent in participants in the A(+)N(+) (27; 54%) and A(+)N(-) (42; 48%) groups than in the A(-)N(-) (66; 21%) and SNAP groups (23; 18%). The A(+)N(-) and A(+)N(+) groups had significantly faster cognitive decline than the A(-)N(-) group (0·08 SD per year for AIBL-Preclinical AD Cognitive Composite [PACC]; p<0·0001; and 0·25; p<0·0001; respectively). 
The A(+)N(+) group also had faster hippocampal atrophy than the A(-)N(-) group (0·04 cm(3) per year; p=0·02). The SNAP group generally did not show significant decline over time compared with the A(-)N(-) group (0·03 SD per year [p=0·19] for AIBL-PACC and a 0·02 cm(3) per year increase [p=0·16] for hippocampal volume), although SNAP was sometimes associated with lower baseline cognitive scores (0·20 SD less than A(-)N(-) for AIBL-PACC). Within the follow-up, 24% (n=12) of individuals in the A(+)N(+) group and 16% (n=14) in the A(+)N(-) group progressed to amnestic mild cognitive impairment or Alzheimer's disease, compared with 9% (n=11) in the SNAP group. Brain amyloidosis, a surrogate marker of Alzheimer's disease pathology, is a risk factor for cognitive decline and for progression from preclinical stages to symptomatic stages of the disease, with neurodegeneration acting as a compounding factor. However, neurodegeneration alone does not confer a significantly different risk of cognitive decline from that in the group with neither brain amyloidosis nor neurodegeneration. CSIRO Flagship Collaboration Fund and the Science and Industry Endowment Fund (SIEF), National Health and Medical Research Council, the Dementia Collaborative Research Centres programme, McCusker Alzheimer's Research Foundation, and Operational Infrastructure Support from the Government of Victoria. Copyright © 2016 Elsevier Ltd. All rights reserved.
Climate change in the Pacific - is it real or not?
NASA Astrophysics Data System (ADS)
Kuleshov, Yuriy
2013-04-01
In this presentation, novel approaches and new ideas for students and young researchers to appreciate the importance of climate science are discussed. These approaches have been applied through conducting a number of training workshops in the Pacific Island Countries and teaching a course on climate change international law and climate change science at the University of the South Pacific (USP), the first course of its type in the Pacific. The particular focus of this presentation is on broadening students' experience with the application of web-based information tools for analysis of climatic extremes and natural hazards such as tropical cyclones. Over the past few years, significant efforts of Australian climate scientists have been dedicated to improving understanding of climate in the Pacific through the International Climate Change Adaptation Initiative (the Australian Government initiative to assist with high-priority climate adaptation needs in vulnerable countries in the Asia-Pacific region). The first comprehensive scientific report on the Pacific climate was published in 2011 as an outcome of the Pacific Climate Change Science Program (PCCSP). A range of web-based information tools, such as the Pacific Tropical Cyclone Data Portal, the Pacific Climate Change Data Portal and the Pacific Seasonal Climate Prediction Portal, has also been developed through the PCCSP and the Pacific Adaptation Strategy Assistance Program. Currently, further advancement in seasonal climate prediction science and the development of enhanced software tools for the Pacific is undertaken through Theme 1 of the Pacific Australia Climate Change Science and Adaptation Planning (PACCSAP) Program. This new scientific knowledge needs to be transferred to students to provide them with accurate information about climate change and its impact on the Pacific Island Countries. 
Teachers and educators need their knowledge base regularly updated, along with tools that will help their students critically evaluate information transmitted via the mass media. This is particularly important when educators present to students cutting-edge science knowledge on climate change. Climate change skeptics use the mass media to attack climate scientists and dismiss their findings about the magnitude of climate change. A novel approach implemented in our training workshops and teaching courses gives students practical hands-on experience in examining climate data using the developed web-based information tools. Using the tools, students can examine the climate of the Pacific Island Countries, derive trends in climate variables such as temperature and rainfall, and draw their own conclusions. An open forum, "Is climate change real or not?", has also been included as an integral part of these workshops and teaching, giving students an opportunity to present their findings. They have also been asked to provide examples of observed change in the environment in their countries which may be related to climate change. Tropical cyclones are the most destructive severe weather events in the Pacific and regularly affect countries in the region. Given the importance of updating knowledge about cyclones, extensive training in using the Pacific Tropical Cyclone Data Portal (http://www.bom.gov.au/cyclone/history/tracks/) has also been provided. Using this sophisticated web-based tool, students can learn about occurrences of cyclones in waters around their countries and over the whole Pacific. Positive feedback from university students and workshop participants has been obtained, and this approach may be recommended for educators to include in their courses. 
Acknowledgement: The research discussed in this paper was conducted through the PASAP, PCCSP and PACCSAP programs, supported by AusAID and the Department of Climate Change and Energy Efficiency and delivered by the Bureau of Meteorology and CSIRO.
Regional Climate Change Hotspots over Africa
NASA Astrophysics Data System (ADS)
Anber, U.
2009-04-01
A Regional Climate Change Index (RCCI) is developed based on regional mean precipitation change, mean surface air temperature change, and change in precipitation and temperature interannual variability. The RCCI is a comparative index designed to identify the regions most responsive to climate change, or Hot-Spots. The RCCI is calculated for seven land regions over North Africa and the Arabian region from the latest set of climate change projections by 14 global climate models for the A1B, A2 and B1 IPCC emission scenarios. The concept of a climate change Hot-Spot can be approached from the viewpoint of vulnerability or from that of climate response. In the former case, a Hot-Spot can be defined as a region for which potential climate change impacts on the environment or different activity sectors can be particularly pronounced. In the latter case, a Hot-Spot can be defined as a region whose climate is especially responsive to global change. In particular, the characterization of response-based Hot-Spots can provide key information for impact assessment. Here, climate change Hot-Spots are identified and investigated based on results from a multi-model ensemble of climate change simulations performed by modeling groups from around the world as contributions to the Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). The RCCI is defined based on four variables: change in regional mean surface air temperature relative to the global average temperature change (the Regional Warming Amplification Factor, RWAF); change in mean regional precipitation (% of present-day value); change in regional surface air temperature interannual variability (% of present-day value); and change in regional precipitation interannual variability (% of present-day value). In the definition of the RCCI it is important to include quantities other than mean change, because mean changes are often not the only important factors for specific impacts.
We thus also include interannual variability, which is critical for many activity sectors, such as agriculture and water management. The RCCI is calculated for the above-mentioned set of global climate change simulations and is intercompared across regions to identify climate change Hot-Spots, that is, regions with the largest values of RCCI. It is important to stress that, as will be seen, the RCCI is a comparative index: a small RCCI value does not imply a small absolute change, only a small climate response compared to other regions. The models used are: CCMA-3-T47, CNRM-CM3, CSIRO-MK3, GFDL-CM2-0, GISS-ER, INMCM3, IPSL-CM4, MIROC3-2M, MIUB-ECHO-G, MPI-ECHAM5, MRI-CGCM2, NCAR-CCSM3, NCAR-PCM1 and UKMO-HADCM3. Note that the three IPCC emission scenarios, A1B, B1 and A2, almost encompass the entire IPCC scenario range, the A2 being close to the high end of the range, the B1 close to the low end and the A1B lying toward the middle. The model data are obtained from the IPCC site and are interpolated onto a common 1-degree grid to facilitate intercomparison. The RCCI is here defined as in Giorgi (2006), except that the entire year is divided into two six-month periods, DJFMAM and JJASON:

RCCI = [n(ΔP) + n(Δσ_P) + n(RWAF) + n(Δσ_T)]_DJFMAM + [n(ΔP) + n(Δσ_P) + n(RWAF) + n(Δσ_T)]_JJASON    (1)
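As a sketch only, the aggregation in Eq. (1) can be written out in code. The thresholds inside n(·) below are illustrative placeholders, not the actual binning of Giorgi (2006), and the field names (dP, dsP, RWAF, dsT) are assumptions for this sketch.

```python
# Illustrative sketch of the RCCI aggregation in Eq. (1).
# NOTE: the thresholds in n() are placeholders, not the binning of
# Giorgi (2006); the dictionary keys are assumed names.

def n(value, thresholds=(5.0, 10.0, 15.0)):
    """Map the magnitude of a change onto an integer index contribution."""
    v = abs(value)
    t1, t2, t3 = thresholds
    if v < t1:
        return 0
    if v < t2:
        return 1
    if v < t3:
        return 2
    return 4

def rcci(factors_by_season):
    """Sum the four n() contributions over the two six-month seasons."""
    total = 0
    for season in ("DJFMAM", "JJASON"):
        f = factors_by_season[season]
        total += n(f["dP"]) + n(f["dsP"]) + n(f["RWAF"]) + n(f["dsT"])
    return total
```

Because the index is comparative, a single region's RCCI value is only meaningful when ranked against the values computed for the other regions.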
EDITORIAL: Plasmas and plasmons: links in nanosilver Plasmas and plasmons: links in nanosilver
NASA Astrophysics Data System (ADS)
Demming, Anna
2013-03-01
Silver has long been valued not just for its rarity but also for its broad-ranging attractive properties as a conductor, catalyst and antimicrobial agent, among others. In nanoscale structures, silver takes on a number of additional attributes, as properties such as antimicrobial activity show size dependence. In addition, plasmonic properties are exhibited, which enhance local electromagnetic fields and can be hugely beneficial in sensing and imaging applications. As a result, silver nanoparticles are increasingly in demand. In this issue researchers describe a microplasma-assisted electrochemical synthesis that allows excellent control over the size and spacing of the resulting particles, which are important parameters for optimizing their performance in device applications [1]. Wet chemistry [2] and lithography [3] are common processes for silver nanoparticle synthesis. However, other methods are constantly in development. Biosynthesis approaches have been attracting increasing interest as more environmentally friendly alternatives. Huang and colleagues at Xiamen University in China used the sundried biomass of Cinnamomum camphora leaf to reduce silver nitrate [4], demonstrating a cost-efficient alternative to conventional methods which might also be suitable for large-scale production. At Zhejiang Normal University researchers noted that the abasic site (AP site) in the DNA duplex can act as a capping scaffold in the generation of fluorescent silver nanoclusters [5]. In addition, the resulting fluorescence of the nanocrystals can be used for detecting DNA single-nucleotide polymorphism. Researchers in Malaysia have also noted the potential of nanoparticles of another noble metal, gold, for sensing swine DNA [6]. They observed that single-stranded DNA adsorbed onto gold nanoparticles led to a colour shift from pinkish-red to grey-purple.
The shift was the result of a reduction in the surface plasmon resonance peak at 530 nm and new features appearing in the 620-800 nm region of the absorption spectra. A number of research groups have investigated the possibility of exploiting the plasmonic properties of silver and gold nanostructures for optoelectronic devices [7-9]. The advantages can be quite substantial: researchers in Korea obtained a 38% increase in the performance of blue LEDs by embedding silver nanoparticles in p-GaN [10]. The researchers attribute the improvement to an increase in the spontaneous emission rate through resonance coupling between the excitons in multiple quantum wells and localized surface plasmons in the silver nanoparticles. In their work reported in this issue, Kostya Ostrikov and his co-authors forge the link between microplasma-assisted electrochemical process parameters and the plasmonic response. As they point out, 'This is an important experimental step towards bringing together plasma chemistry and plasmonics' [1]. All-gas-phase plasma approaches have already been demonstrated for the synthesis of nanoparticles of other materials. X D Pi and colleagues from the University of Minnesota demonstrated how one simple gas-phase process could produce stable silicon nanocrystal emitters with tailored size and surface functionalization [11]. Previously, silicon nanocrystals had been prone to emission instabilities in air. Now Ostrikov and colleagues at the University of Sydney, CSIRO Materials Science and Engineering in Australia and the Key Laboratory for Laser Plasmas in China have studied microplasma-assisted electrochemical synthesis of Ag nanoparticles for plasmonic applications [1]. The synthesis uses moderate temperatures and atmospheric pressures and does not involve any toxic reducing agents.
In addition they demonstrate how it allows control over nanoparticle size and interparticle spacing to optimize performance in device applications. Despite the overlap in plasma physics and the origins of plasmonic phenomena, studies of the relationship between plasma electrochemical synthesis and the plasmonic properties of nanoparticles have been limited until now. Yet Kostya Ostrikov and colleagues place particular emphasis on the potential of research at 'the intersection of reactive plasma chemistry and plasmonics'. While navigating the maze of intertwining disciplines that feed into nanotechnology research can be daunting, as this research highlights, great insights and advances may be gained where the different strands of research connect. References [1] Huang X Z, Zhong X X, Lu Y, Li Y S, Rider A E, Furman S A and Ostrikov K 2013 Plasmonic Ag nanoparticles via environment-benign atmospheric microplasma electrochemistry Nanotechnology 24 095604 [2] Sun Y and Xia Y 2002 Shape-controlled synthesis of gold and silver nanoparticles Science 298 2176-9 [3] Hulteen J C, Treichel D A, Smith M T, Duval M L, Jensen T R and Van Duyne R P 1999 Nanosphere lithography: size-tunable silver nanoparticle and surface cluster arrays J. Phys. Chem. 
B 103 3854-63 [4] Huang J et al 2007 Biosynthesis of silver and gold nanoparticles by novel sundried Cinnamomum camphora leaf Nanotechnology 18 105104 [5] Ma K, Cui Q, Liu G, Wu F, Xu S and Shao Y 2011 DNA abasic site-directed formation of fluorescent silver nanoclusters for selective nucleobase recognition Nanotechnology 22 305502 [6] Ali M E, Hashim U, Mustafa S, Che Man Y B, Yusop M H M, Bari M F, Islam Kh N and Hasan M F 2011 Nanoparticle sensor for label free detection of swine DNA in mixed biological samples Nanotechnology 22 195503 [7] Berini P, Olivieri A and Chen C 2012 Thin Au surface plasmon waveguide Schottky detectors on p-Si Nanotechnology 23 444011 [8] Reilly T H III, Van De Lagemaat J, Tenent R C, Morfa A J and Rowlen K L 2008 Surface-plasmon enhanced transparent electrodes in organic photovoltaics Appl. Phys. Lett. 92 243304 [9] Bialiayeu A, Bottomley A, Prezgot D, Ianoul A and Albert J 2012 Plasmon-enhanced refractometry using silver nanowire coatings on tilted fibre Bragg gratings Nanotechnology 23 444012 [10] Cho C-Y, Kwon M-K, Lee S-J, Han S-H, Kang J-W, Kang S-E, Lee D-Y and Park S-J 2010 Surface plasmon-enhanced light-emitting diodes using silver nanoparticles embedded in p-GaN Nanotechnology 21 205201 [11] Pi X D, Liptak R W, Deneen N J, Wells N P, Carter C B, Campbell S A and Kortshagen U 2008 Air-stable full-visible-spectrum emission from silicon nanocrystals synthesized by an all-gas-phase plasma approach Nanotechnology 19 245603
NASA Astrophysics Data System (ADS)
Gaynor, Suzie; Corney, Stuart; Ling, Fiona; Bindoff, Nathan
2010-05-01
Climate Futures for Tasmania is an interdisciplinary and inter-institutional collaboration of twelve core participating partners (both national and state organisations) who are contributing more than $7.5 million in cash and in-kind over the three-year life of the project. The project is led by the Antarctic Climate and Ecosystems Cooperative Research Centre at the University of Tasmania, with significant contributions from CSIRO (Australia's national research organisation), Hydro Tasmania (Tasmania's major power generation company) and the Tasmanian State government, through the Department of Primary Industries, Parks, Water and the Environment. The coordination, community interaction and management of the project are unique within the university environment. The project has required multiple levels of engagement to achieve end-user driven research that delivers highly practical and usable outcomes to stakeholders who are relatively new to climate change concepts. The project is generating new information on climate change in the 21st century for local communities in Tasmania by dynamically downscaling global climate models. It focuses on the information interests of Tasmanian communities, businesses, industries and governments through analysis of general climate, agriculture, water and catchments, and extreme events. We are engaging with more than 50 end-user organisations and to date have been involved in more than 700 engagement activities. The governance structure provides purpose to our stakeholders and has given us the opportunity to communicate and educate. From this opportunity have come invitations and introductions to take our science further into the stakeholders' organisations and to new organisations. From these invitations and introductions have come new partnerships and more opportunities to educate and influence organisational behaviour. Our approach to engagement and communication fosters a learning environment that encompasses adult education principles.
We have structured our formal and informal engagement activities to encourage active involvement, and thus learning. We are providing early preliminary results and tailored products for our end-users to 'trial' and learn along the way. As we have struggled with difficult climate change science decisions, we have asked our stakeholders to be involved and develop the solution with us. We have actively avoided the traditional one-size-fits-all information transfer approach with regard to activities and products. We sit with our stakeholders, listen to what their information needs are, understand how their organisation works and functions, and involve them. Our stakeholders inform, guide and drive our research and the engagement activities. We go to their board rooms, their offices, their paddocks. Our engagement activities are as much about us learning from them as about us 'teaching' them about climate science. It has been highly effective to start an educational journey with them from the beginning of our research, rather than deliver challenging concepts and conclusions at the end. The key element of our communication and education strategy has been continual engagement with relevant state and national government departments, and other major stakeholders. This ongoing engagement introduces critical and often hard-to-understand climate science concepts to stakeholders early in the project, thus allowing such concepts to become familiar over the length of the project. This strategy ensures that in the final reporting stage our stakeholders are well versed in the language and concepts necessary to engage with the conclusions, and consequently change behaviours. Our stakeholders have become advocates for our research and climate change science. Early engagement has encouraged a sense of ownership and familiarity with the climate science. This is crucial in the climate change space, where results can be controversial, difficult to appreciate and often ignored.
What has clearly been important and successful is providing the right information, at the right time, to the right people.
Detection of amino acetonitrile in Sgr B2(N)
NASA Astrophysics Data System (ADS)
Belloche, A.; Menten, K. M.; Comito, C.; Müller, H. S. P.; Schilke, P.; Ott, J.; Thorwirth, S.; Hieret, C.
2008-04-01
Context: Amino acids are building blocks of proteins and therefore key ingredients for the origin of life. The simplest amino acid, glycine (NH2CH2COOH), has long been searched for in the interstellar medium but has not been unambiguously detected so far. At the same time, more and more complex molecules have been newly found toward the prolific Galactic center source Sagittarius B2. Aims: Since the search for glycine has turned out to be extremely difficult, we aimed at detecting a chemically related species (possibly a direct precursor), amino acetonitrile (NH2CH2CN). Methods: With the IRAM 30 m telescope we carried out a complete line survey of the hot core regions Sgr B2(N) and (M) in the 3 mm range, plus partial surveys at 2 and 1.3 mm. We analyzed our 30 m line survey in the LTE approximation and modeled the emission of all known molecules simultaneously. We identified spectral features at the frequencies predicted for amino acetonitrile lines having intensities compatible with a unique rotation temperature. We also used the Very Large Array to look for cold, extended emission from amino acetonitrile. Results: We detected amino acetonitrile in Sgr B2(N) in our 30 m telescope line survey and conducted confirmatory observations of selected lines with the IRAM Plateau de Bure and the Australia Telescope Compact Array interferometers. The emission arises from a known hot core, the Large Molecule Heimat, and is compact with a source diameter of 2 arcsec (0.08 pc). We derived a column density of 2.8 × 10^16 cm^-2, a temperature of 100 K, and a linewidth of 7 km s^-1. Based on the simultaneously observed continuum emission, we calculated a density of 1.7 × 10^8 cm^-3, a mass of 2340 M_⊙, and an amino acetonitrile fractional abundance of 2.2 × 10^-9. The high abundance and temperature may indicate that amino acetonitrile is formed by grain surface chemistry.
We did not detect any hot, compact amino acetonitrile emission toward Sgr B2(M) or any cold, extended emission toward Sgr B2, with column-density upper limits of 6 × 10^15 and 3 × 10^12-14 cm^-2, respectively. Conclusions: Based on our amino acetonitrile detection toward Sgr B2(N) and a comparison to the pair methyl cyanide/acetic acid, both detected in this source, we suggest that the column density of both glycine conformers in Sgr B2(N) is well below the best upper limits published recently by other authors, and probably below the confusion limit in the 1-3 mm range. Based on observations carried out with the IRAM Plateau de Bure Interferometer, the IRAM 30 m telescope, the Australia Telescope Compact Array, and the NRAO Very Large Array. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). The Australia Telescope Compact Array is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The full table and figure are only available in electronic form at http://www.aanda.org The calibrated and deconvolved data cubes and images (line and continuum) obtained with the PdBI, the ATCA, and the VLA are available in FITS format at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/482/179
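As a rough consistency check of the quantities quoted in this record, the implied H2 column density can be recovered from the amino acetonitrile column density and fractional abundance. This assumes the abundance is defined relative to N(H2), the usual convention for hot-core studies; the variable names are illustrative.

```python
# Hedged back-of-envelope check: the fractional abundance is assumed to be
# N(amino acetonitrile) / N(H2), the usual convention for hot cores.
N_aan = 2.8e16      # amino acetonitrile column density, cm^-2 (from the text)
abundance = 2.2e-9  # quoted fractional abundance (from the text)

N_h2 = N_aan / abundance
print(f"Implied H2 column density: {N_h2:.2e} cm^-2")  # ~1.3e25 cm^-2
```

An H2 column density of order 10^25 cm^-2 is consistent with the very high extinction expected for a hot core as dense as the Large Molecule Heimat.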
NASA Astrophysics Data System (ADS)
Kuleshov, Yuriy; de Wit, Roald; Atalifo, Terry; Prakash, Bipendra; Waqaicelua, Alipate; Kunitsugu, Masashi; Caroff, Philippe; Chane-Ming, Fabrice
2013-04-01
Tropical cyclones are the most extreme weather phenomena severely impacting coastal communities and island nations. There is ongoing research (i) into accurate analysis of observed trends in tropical cyclone occurrences and (ii) into how tropical cyclone frequency and intensity may change in the future as a result of climate change. Reliable historical records of cyclone activity are vital for this research. The Pacific Australia Climate Change Science and Adaptation Planning (PACCSAP) program is dedicated to helping Pacific Island countries and Timor Leste gain a better understanding of how climate change will impact their regions. One of the key PACCSAP projects is focused on developing a tropical cyclone archive, climatology and seasonal prediction for the region. As part of the project, historical tropical cyclone best track data have been examined and prepared to be subsequently displayed through the enhanced tropical cyclone data portal for the Southern Hemisphere and the Western Pacific Ocean. Data from the Regional Specialised Meteorological Centre (RSMC) Nadi, Fiji and the Tropical Cyclone Warning Centres (TCWCs) in Brisbane, Darwin and Wellington for the 1969-1970 to 2010-2011 tropical cyclone seasons have been carefully examined. Errors and inconsistencies found during the quality control procedure have been corrected. To produce a consolidated data set for the South Pacific Ocean, best track data from these four centres have been used. Specifically, for the 1969-1970 to 1994-1995 tropical cyclone seasons, data from the TCWCs in Brisbane, Darwin and Wellington have been used. In 1995, RSMC Nadi, Fiji was established with responsibilities for issuing tropical cyclone warnings and preparing best track data for the area south of the equator to 25°S, 160°E to 120°W. Consequently, data from RSMC Nadi have been used as the primary source for this area, starting from the 1995-1996 tropical cyclone season.
These data have been combined with data from TCWC Wellington for the area 25°S to 40°S, 160°E to 120°W and with data from the TCWCs in Brisbane and Darwin for the area south of the equator to 37°S, 135°E to 160°E. In addition, tropical cyclone best track data for the North-West Pacific for the 1977-2011 seasons prepared at RSMC Tokyo, and for the South Indian Ocean for 1969-2011 prepared at RSMC La Réunion, have been added to the dataset. As a result, the new design of the Southern Hemisphere/Pacific Tropical Cyclone Data Portal (http://www.bom.gov.au/cyclone/history/tracks/) incorporates best track data for the Western Pacific both south and north of the equator and for the South Indian Ocean. The portal has been developed using the OpenLayers web mapping library. Main features of the portal include dynamic map navigation, presentation of detailed cyclone information for a selected region in the Southern Hemisphere and North-West Pacific, and display of changes in tropical cyclone intensity over the lifetime of a cyclone. One of the unique features of the portal is its enhanced functionality for spatial and temporal selection of cyclones in chosen areas (e.g. the exclusive economic zones of the countries). Acknowledgement: The research discussed in this paper was conducted through the PACCSAP program, supported by AusAID and the Department of Climate Change and Energy Efficiency and delivered by the Bureau of Meteorology and CSIRO. We acknowledge C. Shamsu, D. Duong, P. Lopatecki, W. Banerjee, P. He, P. Wickramasinghe and A. Bauers from the School of Computer Sciences and IT at the Royal Melbourne Institute of Technology (RMIT) University, Melbourne, Australia for their contribution to the development of the portal's functionality on spatial selection.
PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)
NASA Astrophysics Data System (ADS)
Troparevsky, Claudia; Stocks, George Malcolm
2012-12-01
Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc. 
We are grateful to the committees that helped put the conference together, especially the local organizing committee. Particular thanks are also due to a number of ORNL staff who spent long hours with the administrative details. We are pleased to express our thanks to the conference administrator Ann Strange (ORNL/CDP) for her responsive and efficient day-to-day handling of this event, Sherry Samples, Assistant Conference Administrator (ORNL), Angie Beach and the ORNL Conference Office, and Shirley Shugart (ORNL) and Fern Stooksbury (ORNL) who created and maintained the conference website. Editors: G Malcolm Stocks (ORNL) and M Claudia Troparevsky (UT) http://ccp2011.ornl.gov Chair: Dr Malcolm Stocks (ORNL) Vice Chairs: Adriana Moreo (ORNL/UT) James Gubernatis (LANL) Local Program Committee: Don Batchelor (ORNL) Jack Dongarra (UTK/ORNL) James Hack (ORNL) Robert Harrison (ORNL) Paul Kent (ORNL) Anthony Mezzacappa (ORNL) Adriana Moreo (ORNL) Witold Nazarewicz (UT) Loukas Petridis (ORNL) David Schultz (ORNL) Bill Shelton (ORNL) Claudia Troparevsky (ORNL) Mina Yoon (ORNL) International Advisory Board Members: Joan Adler (Israel Institute of Technology, Israel) Constantia Alexandrou (University of Cyprus, Cyprus) Claudia Ambrosch-Draxl (University of Leoben, Austria) Amanda Barnard (CSIRO, Australia) Peter Borcherds (University of Birmingham, UK) Klaus Cappelle (UFABC, Brazil) Giovanni Ciccotti (Università degli Studi di Roma 'La Sapienza', Italy) Nithaya Chetty (University of Pretoria, South Africa) Charlotte Froese-Fischer (NIST, US) Giulia A. Galli (University of California, Davis, US) Gillian Gehring (University of Sheffield, UK) Guang-Yu Guo (National Taiwan University, Taiwan) Sharon Hammes-Schiffer (Penn State, US) Alex Hansen (Norwegian University of Science and Technology, Norway) Duane D. Johnson (University of Illinois at Urbana-Champaign, US) David Landau (University of Georgia, US) Joaquin Marro (University of Granada, Spain) Richard Martin (UIUC, US) Todd Martinez (Stanford University, US) Bill McCurdy (Lawrence Berkeley National Laboratory, US) Ingrid Mertig (Martin Luther University, Germany) Alejandro Muramatsu (Universität Stuttgart, Germany) Richard Needs (Cavendish Laboratory, UK) Giuseppina Orlandini (University of Trento, Italy) Martin Savage (University of Washington, US) Thomas Schulthess (ETH, Switzerland) Dzidka Szotek (Daresbury Laboratory, UK) Hideaki Takabe (Osaka University, Japan) William M. Tang (Princeton University, US) James Vary (Iowa State, US) Enge Wang (Chinese Academy of Science, China) Jian-Guo Wang (Institute of Applied Physics and Computational Mathematics, China) Jian-Sheng Wang (National University of Singapore, Singapore) Dan Wei (Tsinghua University, China) Tony Williams (University of Adelaide, Australia) Rudy Zeller (Jülich, Germany) Conference Administrator: Ann Strange (ORNL)
NASA Astrophysics Data System (ADS)
Scott, Susan M.; McClelland, David E.
2008-06-01
At GRG17 in Dublin in 2004, it was decided to hold GRG18 in Sydney in 2007. Every six years, the GRG conference (held every three years) and Amaldi meeting (held every two years) occur in the same year around July. This was to be the case in 2007. By mutual agreement of the International Society on General Relativity and Gravitation (ISGRG), which oversees the GR conferences and The Gravitational Wave International Committee (GWIC), which oversees the Amaldi meetings, it was decided to hold these two important conferences concurrently, for the first time, at the same venue, namely Sydney. At a time when the gravitational wave community was beginning to explore the possibility of searches to probe various aspects of the theory, the vision was to bring that community together with the community of gravitational theorists in order to better appreciate the work being done by both parties and to explore possibilities for future research using the mutual expertise. The logistics of running two such large meetings concurrently were considerable. The format agreed upon by the ISGRG and GWIC was the following: common plenary sessions in the mornings from Monday to Friday; six parallel GR workshop sessions and an Amaldi session each afternoon from Monday to Friday (except Wednesday); a combined poster session on Wednesday; a full day of Amaldi sessions on the final day (Saturday). The scientific programme for GRG18 was overseen by a Scientific Organising Committee established by the ISGRG and chaired by Professor Sathyaprakash. The scientific programme for Amaldi7 was overseen by GWIC chaired by Professor Cerdonio. One of the highlights of the conferences was the breadth and quality of the plenary programme put together by the scientific committees. Not only did these talks give an excellent snapshot of the entire field at this time, but they also explored the interfaces with other related fields, which proved of special interest to participants. 
We were given superb overviews of the state of the art of: observational handles on dark energy; collider physics experiments designed to probe cosmology; gravitational dynamics of large stellar systems; and the use of analogue condensed-matter systems in the laboratory to investigate black hole event horizons. In the more mainstream areas we were given timely reviews of: the Gravity Probe B and STEP missions; quasi-local black hole horizons and their applications; cosmic censorship; the spin foam model approach to quantum gravity; the causal dynamical triangulations approach to quantum gravity; superstring theory applied to questions in particle physics; the current status and prospects for gravitational wave astronomy; ground-based gravitational wave detection; and technological developments for the future LISA mission. This issue is published as the proceedings of GRG18 and Amaldi7. It contains the overview articles by the plenary speakers, the summaries of each GRG18 workshop parallel session as provided by the workshop chairs, and the highlights of the Amaldi7 meeting as selected by the Amaldi7 chairs. Other Amaldi7 talks and posters will appear as articles in a refereed issue of the electronic Journal of Physics Conference Series. This CQG special issue and the related issue of JPCS will be electronically linked. The conference organisers would like to acknowledge the financial support of: The Australian National University; IUPAP; The Australian Institute of Physics; BHP Billiton; The University of Western Australia; The University of New South Wales; The Institute of Physics; The Gravity Research Foundation; SGI; CosNet; The Australian Mathematical Sciences Institute; Springer; Duraduct; the New South Wales Government; The Australasian Society for General Relativity and Gravitation; the Mexican GR bid; the Centre for Precision Optics; The Anglo-Australian Observatory; Newspec; CSIRO; and The University of Melbourne. 
We would like to thank the GRG18 Scientific Organising Committee, GWIC and the Local Organising Committee for all their hard work in putting together these very successful combined conferences, which attracted 520 participants. Many of the practical aspects of the organisation were handled by the event management company Conexion, and their professionalism, expertise and dedication were greatly appreciated. We would also like to thank the editorial staff at CQG, especially Eirini Messaritaki and Joseph Tennant, for their support and efficiency in preparing this issue. Finally, we would like to thank all the participants for their lively and colourful contributions to making these conferences a success.
Celler, Branko; Argha, Ahmadreza; Varnfield, Marlien; Jayasena, Rajiv
2018-04-09
In a home telemonitoring trial, patient adherence with scheduled vital signs measurements is an important aspect that has not been thoroughly studied and for which data in the literature are limited. Levels of adherence have been reported as varying from approximately 40% to 90%, and in most cases, the adherence rate usually dropped off steadily over time. This drop is more evident in the first few weeks or months after the start. Higher adherence rates have been reported for simple types of monitoring and for shorter periods of intervention. If patients do not follow the intended procedure, poorer results than expected may be achieved. Hence, analyzing factors that can influence patient adherence is of great importance. The goal of the research was to present findings on patient adherence with scheduled vital signs measurements in the recently completed Commonwealth Scientific and Industrial Research Organisation (CSIRO) national trial of home telemonitoring of patients (mean age 70.5 years, SD 9.3 years) with chronic conditions (chronic obstructive pulmonary disease, coronary artery disease, hypertensive diseases, congestive heart failure, diabetes, or asthma) carried out at 5 locations along the east coast of Australia. We investigated the ability of chronically ill patients to carry out a daily schedule of vital signs measurements as part of a chronic disease management care plan over periods exceeding 6 months (302 days, SD 135 days) and explored different levels of adherence for different measurements as a function of age, gender, and supervisory models. In this study, 113 patients forming the test arm of a Before and After Control Intervention (BACI) home telemonitoring trial were analyzed. Patients were required to monitor on a daily basis a range of vital signs determined by their chronic condition and comorbidities. 
Vital signs included noninvasive blood pressure, pulse oximetry, spirometry, electrocardiogram (ECG), blood glucose level, body temperature, and body weight. Adherence was calculated as the number of days during which at least 1 measurement was taken over all days where measurements were scheduled. Different levels of adherence for different measurements, as a function of age, gender, and supervisory models, were analyzed using linear regression and analysis of covariance for a period of 1 year after the intervention. Patients were monitored on average for 302 (SD 135) days, although some continued beyond 12 months. The overall adherence rate for all measurements was 64.1% (range 59.4% to 68.8%). The adherence rates of patients monitored in hospital settings relative to those monitored in community settings were significantly higher for spirometry (69.3%, range 60.4% to 78.2%, versus 41.0%, range 33.1% to 49.0%, P<.001), body weight (64.5%, range 55.7% to 73.2%, versus 40.5%, range 32.3% to 48.7%, P<.001), and body temperature (66.8%, range 59.7% to 73.9%, versus 55.2%, range 48.4% to 61.9%, P=.03). Adherence with blood glucose measurements (58.1%, range 46.7% to 69.5%, versus 50.2%, range 42.8% to 57.6%, P=.24) was not significantly different overall. Adherence rates for blood pressure (68.5%, range 62.7% to 74.2%, versus 59.7%, range 52.1% to 67.3%, P=.04), ECG (65.6%, range 59.7% to 71.5%, versus 56.5%, range 48.7% to 64.4%, P=.047), and pulse oximetry (67.0%, range 61.4% to 72.7%, versus 56.4%, range 48.6% to 64.1%, P=.02) were significantly higher in males relative to female subjects. No statistical differences were observed between rates of adherence for the younger patient group (70 years and younger) and older patient group (older than 70 years). Patients with chronic conditions enrolled in the home telemonitoring trial were able to record their vital signs at home at least once every 2 days over prolonged periods of time. 
Male patients maintained a higher adherence than female patients over time, and patients supervised by hospital-based care coordinators reported higher levels of adherence with their measurement schedule relative to patients supervised in community settings. This was most noticeable for spirometry. Australian New Zealand Clinical Trials Registry ACTRN12613000635763; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=364030&isReview=true (Archived by WebCite at http://www.webcitation.org/6xPOU3DpR). ©Branko Celler, Ahmadreza Argha, Marlien Varnfield, Rajiv Jayasena. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 09.04.2018.
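The adherence definition used in the trial (the number of days with at least one measurement divided by the number of days on which measurements were scheduled) can be sketched as follows; the dates and schedule below are hypothetical, not trial data:

```python
from datetime import date, timedelta

def adherence_rate(measurement_days, scheduled_days):
    """Adherence as defined in the trial: days with at least one
    measurement taken, divided by days on which measurements were
    scheduled."""
    taken = set(measurement_days) & set(scheduled_days)
    return len(taken) / len(scheduled_days)

# Hypothetical example: 10 scheduled days, measurements on 7 of them
scheduled = [date(2014, 1, 1) + timedelta(days=i) for i in range(10)]
measured = scheduled[:5] + scheduled[7:9]
print(round(adherence_rate(measured, scheduled), 2))  # 0.7
```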
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
NASA Astrophysics Data System (ADS)
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
Extensible packet processing architecture
Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.
2013-08-20
A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signify to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines and the processed packets are output on a time-shared, arbitrated data bus coupled to the plurality of processing engines.
Cornwell, Brittany; Villamor, Eduardo; Mora-Plazas, Mercedes; Marin, Constanza; Monteiro, Carlos A; Baylin, Ana
2018-01-01
To determine if processed and ultra-processed foods consumed by children in Colombia are associated with lower-quality nutrition profiles than less processed foods, we obtained information on sociodemographic and anthropometric variables and dietary information through dietary records and 24 h recalls from a convenience sample of the Bogotá School Children Cohort. Foods were classified into three categories: (i) unprocessed and minimally processed foods, (ii) processed culinary ingredients and (iii) processed and ultra-processed foods. We also examined the combination of unprocessed foods and processed culinary ingredients. The setting was a representative sample of children from low- to middle-income families in Bogotá, Colombia; the subjects were children aged 5-12 years in the 2011 Bogotá School Children Cohort. We found that processed and ultra-processed foods are of lower dietary quality in general. Nutrients that were lower in processed and ultra-processed foods following adjustment for total energy intake included n-3 PUFA, vitamins A, B12, C and E, Ca and Zn. Nutrients that were higher in energy-adjusted processed and ultra-processed foods compared with unprocessed foods included Na, sugar and trans-fatty acids, although some healthy nutrients, including folate and Fe, were also higher in processed and ultra-processed foods than in unprocessed and minimally processed foods. Processed and ultra-processed foods generally have unhealthy nutrition profiles. Our findings suggest the categorization of foods based on processing characteristics is promising for understanding the influence of food processing on children's dietary quality. More studies accounting for the type and degree of food processing are needed.
Dynamic control of remelting processes
Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.
2000-01-01
An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.
On Intelligent Design and Planning Method of Process Route Based on Gun Breech Machining Process
NASA Astrophysics Data System (ADS)
Hongzhi, Zhao; Jian, Zhang
2018-03-01
The paper presents an approach to the intelligent design and planning of process routes, based on the gun breech machining process, addressing several problems of the traditional, hard-to-manage process route: the complexity of gun breech machining, tedious route design, and a long design cycle. Based on the gun breech machining process, an intelligent process-route design and planning system is developed using DEST and VC++. The system comprises two functional modules--intelligent process-route design and process-route planning. The intelligent design module analyses the gun breech machining process and summarises breech process knowledge to build the knowledge base and inference engine, from which the gun breech process route is generated automatically. Building on the intelligent design module, the final process route is created, edited and managed in the planning module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-01-01
Volume 1 describes the proposed plant: the KBW gasification process, the ICI low-pressure methanol process and the Mobil M-gasoline process, together with ancillary processes such as the oxygen plant, the shift process, the RECTISOL purification process, sulfur recovery equipment and pollution control equipment. Numerous engineering diagrams are included. (LTN)
Performing a local reduction operation on a parallel computer
Blocksome, Michael A; Faraj, Daniel A
2013-06-04
A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
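As an illustration of the steps claimed above, here is a minimal serial sketch of interleaved copying and element-wise local reduction; the patent performs these steps in parallel across processing cores using shared memory, and the function names and sum operation here are invented for illustration:

```python
def interleave(buffers, chunk):
    """Copy equal-length input buffers into one interleaved buffer in
    chunk-sized pieces, as in the patent's shared-memory copy step."""
    out = []
    for start in range(0, len(buffers[0]), chunk):
        for buf in buffers:
            out.extend(buf[start:start + chunk])
    return out

def local_reduce(buffers):
    """Element-wise sum reduction across the input buffers."""
    return [sum(vals) for vals in zip(*buffers)]

a, b = [1, 2, 3, 4], [10, 20, 30, 40]
print(interleave([a, b], chunk=2))  # [1, 2, 10, 20, 3, 4, 30, 40]
print(local_reduce([a, b]))         # [11, 22, 33, 44]
```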
Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd
2016-07-01
We introduce Process Overview, a situation awareness characterisation of the knowledge derived from monitoring process plants. Process Overview is based on observational studies of process control work in the literature. The characterisation is applied to develop a query-based measure called the Process Overview Measure. The goal of the measure is to improve coupling between situation and awareness according to process plant properties and operator cognitive work. A companion article presents the empirical evaluation of the Process Overview Measure in a realistic process control setting. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA based on data collected by process experts. Practitioner Summary: The Process Overview Measure is a query-based measure for assessing operator situation awareness from monitoring process plants in representative settings.
43 CFR 2804.19 - How will BLM process my Processing Category 6 application?
Code of Federal Regulations, 2012 CFR
2012-10-01
For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...
43 CFR 2804.19 - How will BLM process my Processing Category 6 application?
Code of Federal Regulations, 2013 CFR
2013-10-01
For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...
Process Correlation Analysis Model for Process Improvement Identification
Park, Sooyong
2014-01-01
Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used as the base throughout the process of software process improvement. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170
Cleanliness of Ti-bearing Al-killed ultra-low-carbon steel during different heating processes
NASA Astrophysics Data System (ADS)
Guo, Jian-long; Bao, Yan-ping; Wang, Min
2017-12-01
During the production of Ti-bearing Al-killed ultra-low-carbon (ULC) steel, two different heating processes were used when the converter tapping temperature or the molten steel temperature in the Ruhrstahl-Heraeus (RH) process was low: heating by Al addition during the RH decarburization process with final deoxidation at the end of the RH decarburization process (process-I), and increasing the oxygen content at the end of RH decarburization with heating and final deoxidation by a one-time Al addition (process-II). A temperature increase of 10°C was studied for each process; the results showed that the two heating processes achieve the same heating effect. The T.[O] content in the slab and during the refining process was better controlled by process-I than by process-II. Statistical analysis of inclusions showed that the number of inclusions in the slab obtained by process-I was substantially lower than in the slab obtained by process-II. For process-I, the Al2O3 inclusions produced by the Al added to induce heating were substantially removed by the end of decarburization. The amounts of inclusions were substantially greater for process-II than for process-I at the different refining stages because of the higher dissolved oxygen concentration in process-II. Industrial test results showed that process-I was more beneficial for improving the cleanliness of molten steel.
Application of agent-based system for bioprocess description and process improvement.
Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J
2010-01-01
Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include the description of the whole process behavior, evaluating process operating conditions, monitoring of the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. 
The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers
Electricity from sunlight. [low cost silicon for solar cells
NASA Technical Reports Server (NTRS)
Yaws, C. L.; Miller, J. W.; Lutwack, R.; Hsu, G.
1978-01-01
The paper discusses a number of new unconventional processes proposed for the low-cost production of silicon for solar cells. Consideration is given to: (1) the Battelle process (Zn/SiCl4), (2) the Battelle process (SiI4), (3) the Silane process, (4) the Motorola process (SiF4/SiF2), (5) the Westinghouse process (Na/SiCl4), (6) the Dow Corning process (C/SiO2), (7) the AeroChem process (SiCl4/H atom), and (8) the Stanford process (Na/SiF4). Preliminary results indicate that the conventional process and the SiI4 process cannot meet the project goal of $10/kg by 1986. Preliminary cost evaluation results for the Zn/SiCl4 process are favorable.
Composing Models of Geographic Physical Processes
NASA Astrophysics Data System (ADS)
Hofer, Barbara; Frank, Andrew U.
Processes are central for geographic information science; yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is demonstrated by means of an example.
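The paper's process description language is not reproduced here; as a minimal sketch of the underlying idea, expressing a physical process as a partial difference equation that can be assembled and stepped on a grid, consider 1-D diffusion (the grid values and update function are illustrative, not taken from the paper):

```python
def diffusion_step(u, alpha):
    """One explicit partial-difference update for 1-D diffusion:
    u[i] <- u[i] + alpha*(u[i-1] - 2*u[i] + u[i+1]), with fixed
    boundary values. Stable for alpha <= 0.5."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0, 0.0, 1.0, 0.0, 0.0]  # initial concentration spike
for _ in range(3):
    u = diffusion_step(u, alpha=0.25)
print(u)  # [0.0, 0.21875, 0.3125, 0.21875, 0.0]
```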
Process-based tolerance assessment of connecting rod machining process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, P. Srinivasa; Surendra Babu, B.
2016-06-01
Process tolerancing based on process capability studies is the optimistic and pragmatic approach to determining manufacturing process tolerances. On adopting the define-measure-analyze-improve-control approach, the process potential capability index (Cp) and the process performance capability index (Cpk) values of the identified process characteristics of the connecting rod machining process are achieved to be greater than the industry benchmark of 1.33, i.e., the four sigma level. The tolerance chain diagram methodology is applied to the connecting rod in order to verify the manufacturing process tolerances at various operations of the connecting rod manufacturing process. This paper bridges the gap between the existing dimensional tolerances obtained via tolerance charting and the process capability studies of the connecting rod component. Finally, a process tolerancing comparison has been made using tolerance capability expert software.
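For reference, the standard definitions behind the Cp and Cpk indices cited above can be sketched as follows; the sample measurements and specification limits are hypothetical, not taken from the paper:

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Standard process capability indices:
    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)
    Cpk >= 1.33 is the four-sigma benchmark cited in the paper."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements (mm) against a 9.4-10.6 mm specification
cp, cpk = capability_indices([9.9, 10.0, 10.1], lsl=9.4, usl=10.6)
print(round(cp, 2), round(cpk, 2))  # 2.0 2.0
```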
Intranode data communications in a parallel computer
Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E
2014-01-07
Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
Canadian Libraries and Mass Deacidification.
ERIC Educational Resources Information Center
Pacey, Antony
1992-01-01
Considers the advantages and disadvantages of six mass deacidification processes that libraries can use to salvage printed materials: the Wei T'o process, the Diethyl Zinc (DEZ) process, the FMC (Lithco) process, the Book Preservation Associates (BPA) process, the "Bookkeeper" process, and the "Lyophilization" process. The…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty of process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is likewise simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
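A minimal Monte Carlo sketch of the underlying idea: pool output variance over alternative process models (each with its own random parameters) and ask what fraction of that variance is attributable to the choice of process model. The two "recharge models" below are invented for illustration; the paper's actual derivation is more general:

```python
import random

def process_sensitivity(process_models, weights, n=20000, seed=1):
    """Fraction of total output variance explained by which process
    model is active, estimated by Monte Carlo (a first-order-style,
    variance-based index over model choice and parameters)."""
    rng = random.Random(seed)
    outputs = []
    by_model = {i: [] for i in range(len(process_models))}
    for _ in range(n):
        i = rng.choices(range(len(process_models)), weights)[0]
        y = process_models[i](rng)  # model with its random parameters
        outputs.append(y)
        by_model[i].append(y)
    mean = sum(outputs) / n
    total_var = sum((y - mean) ** 2 for y in outputs) / n
    # variance of the conditional means over the model choice
    between = sum(
        (len(v) / n) * (sum(v) / len(v) - mean) ** 2
        for v in by_model.values() if v
    )
    return between / total_var

# Two invented recharge models, each with a random parameter
m1 = lambda rng: 1.0 + rng.gauss(0.0, 0.1)
m2 = lambda rng: 2.0 + rng.gauss(0.0, 0.1)
s = process_sensitivity([m1, m2], weights=[0.5, 0.5])
```

Here the model-choice variance dominates the parameter noise, so the index is close to 1.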
Richardson-Klavehn, A; Gardiner, J M
1998-05-01
Depth-of-processing effects on incidental perceptual memory tests could reflect (a) contamination by voluntary retrieval, (b) sensitivity of involuntary retrieval to prior conceptual processing, or (c) a deficit in lexical processing during graphemic study tasks that affects involuntary retrieval. The authors devised an extension of incidental test methodology--making conjunctive predictions about response times as well as response proportions--to discriminate among these alternatives. They used graphemic, phonemic, and semantic study tasks, and a word-stem completion test with incidental, intentional, and inclusion instructions. Semantic study processing was superior to phonemic study processing in the intentional and inclusion tests, but semantic and phonemic study processing produced equal priming in the incidental test, showing that priming was uncontaminated by voluntary retrieval--a conclusion reinforced by the response-time data--and that priming was insensitive to prior conceptual processing. The incidental test nevertheless showed a priming deficit following graphemic study processing, supporting the lexical-processing hypothesis. Adding a lexical decision to the 3 study tasks eliminated the priming deficit following graphemic study processing, but did not influence priming following phonemic and semantic processing. The results provide the first clear evidence that depth-of-processing effects on perceptual priming can reflect lexical processes, rather than voluntary contamination or conceptual processes.
Improving operational anodising process performance using simulation approach
NASA Astrophysics Data System (ADS)
Liong, Choong-Yeun; Ghazali, Syarah Syahidah
2015-10-01
The use of aluminium is very widespread, especially in the transportation, electrical and electronics, architectural, automotive and engineering sectors. Anodizing is therefore an important process for making aluminium durable, attractive and weather resistant. This research focuses on anodizing operations in the manufacture and supply of aluminium extrusions. The data required for developing the model were collected from observations and interviews conducted in the study. To study the current system, the processes involved in anodizing are modeled using Arena 14.5 simulation software. They comprise five main processes, namely degreasing, etching, desmutting, anodizing and sealing, together with 16 other processes. The results were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on comparisons between the improvement methods, productivity could be increased by reallocating the workers and reducing loading time.
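The kind of bottleneck identification Arena performs can be caricatured with a minimal tandem-line model. The station names, service times and arrival times below are invented, and real Arena models are far richer (resources, schedules, failures); this is only a sketch of the idea that per-station busy time exposes the bottleneck.

```python
def simulate_line(service_times, arrivals):
    """Jobs visit stations in order; each station serves one job at a time
    (FIFO, no buffer limits modeled). Returns overall makespan and
    per-station busy time, a crude bottleneck indicator."""
    free_at = [0.0] * len(service_times)  # when each station next frees up
    busy = [0.0] * len(service_times)
    makespan = 0.0
    for arrive in arrivals:
        t = arrive
        for s, st in enumerate(service_times):
            start = max(t, free_at[s])    # wait if the station is occupied
            free_at[s] = start + st
            busy[s] += st
            t = free_at[s]
        makespan = max(makespan, t)
    return makespan, busy

# Invented two-station line: degreasing (3 min) then anodising (8 min).
makespan, busy = simulate_line([3.0, 8.0], arrivals=[0.0, 2.0, 4.0, 6.0])
```

Here the second station accumulates far more busy time than the first, so it is the candidate for adding workers or reducing loading time, mirroring the improvement logic in the study.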
Value-driven process management: using value to improve processes.
Melnyk, S A; Christensen, R T
2000-08-01
Every firm can be viewed as consisting of various processes. These processes affect everything the firm does, from accepting orders and designing products to scheduling production. In many firms, the management of processes often reflects considerations of efficiency (cost) rather than effectiveness (value). In this article, we introduce a well-structured process for managing processes that begins not with the process itself but with the customer, the product, and the concept of value. This process progresses through a number of steps, including defining value, generating the appropriate metrics, identifying the critical processes, mapping and assessing the performance of these processes, and identifying long- and short-term areas for action. What makes the approach presented in this article so powerful is that it explicitly links the customer to the process and evaluates the process in terms of its ability to serve customers effectively.
Method for routing events from key strokes in a multi-processing computer system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhodes, D.A.; Rustici, E.; Carter, K.H.
1990-01-23
The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which is a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to the processes assigned therewith after the supervisory process has completed operation.
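The routing scheme in the claim can be sketched as a small state machine. The routing table entries and the DONE sentinel below are hypothetical; the patent does not specify particular keycode values.

```python
from collections import deque

SUPERVISOR = "supervisor"
DONE = 0  # hypothetical sentinel keycode meaning "supervisor finished"

# Hypothetical routing table assigning keycodes to processes.
routing_table = {10: "editor", 11: "shell", 27: SUPERVISOR}

def route(keycodes):
    """Deliver each keycode to its assigned process; once a supervisor
    keycode arrives, buffer subsequent input until the supervisor is done,
    then drain the buffer to the originally assigned processes."""
    delivered, buffer, supervising = [], deque(), False
    for kc in keycodes:
        if supervising:
            if kc == DONE:
                supervising = False
                while buffer:
                    b = buffer.popleft()
                    delivered.append((routing_table.get(b, "default"), b))
            else:
                buffer.append(kc)  # hold input while the supervisor runs
        elif routing_table.get(kc, "default") == SUPERVISOR:
            delivered.append((SUPERVISOR, kc))
            supervising = True
        else:
            delivered.append((routing_table.get(kc, "default"), kc))
    return delivered

events = route([10, 27, 11, 10, DONE, 11])
```

Under these assumptions, the keystrokes typed while the supervisor is active are not lost: they are held in the buffer and delivered to their assigned processes once the supervisor completes.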
Issues Management Process Course # 38401
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binion, Ula Marie
The purpose of this training is to advise Issues Management Coordinators (IMCs) on the revised Contractor Assurance System (CAS) Issues Management (IM) process. Terminal Objectives: Understand the Laboratory’s IM process; Understand your role in the Laboratory’s IM process. Learning Objectives: Describe the IM process within the context of the CAS; Describe the importance of implementing an institutional IM process at LANL; Describe the process flow for the Laboratory’s IM process; Apply the definition of an issue; Use available resources to determine initial screening risk levels for issues; Describe the required major process steps for each risk level; Describe the personnel responsibilities for IM process implementation; Access available resources to support IM process implementation.
Social network supported process recommender system.
Ye, Yanming; Yin, Jianwei; Xu, Yueshen
2014-01-01
Process recommendation technologies have gained more and more attention in the field of intelligent business process modeling as a way to assist process modeling. However, most existing technologies rely only on process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research techniques to process recommendation and builds a social network of processes based on feature similarities. Three process matching degree measurements are then presented, and the system implementation is discussed. Finally, experimental evaluations and future work are presented.
Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Chiva, Vicente; Gamarra, Manuela
2015-01-01
The implementation of total quality management models in clinical departments can be best adapted to the 2009 ISO 9004 model. An essential part of implementing these models is the establishment of processes and their stabilization. There are four types of processes: key, management, support, and operative (clinical). Management processes have four parts: a process stabilization form, a process procedures form, a medical activities cost estimation form, and a process flow chart. In this paper we detail the creation of an essential process in a surgical department: the management of the surgery waiting list.
T-Check in Technologies for Interoperability: Business Process Management in a Web Services Context
2008-09-01
Figures listed in the report include a UML Sequence Diagram; Figure 3: BPMN Diagram of the Order Processing Business Process; Figure 4: T-Check Process for Technology Evaluation; Figure 5: Notional System Architecture; Figure 6: Flow Chart of the Order Processing Business Process; and Figure 7: Order Processing Activities. Figure 3 (created with Intalio BPMS Designer [Intalio 2008]) shows a BPMN view of the Order Processing business process used in the report.
Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona
2012-01-01
Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory documents, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to areas where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determining the quality of any production process. This tool is new for the pharmaceutical tablet production process, where the quality control parameters act as quality assessment parameters. Application of risk assessment enables selection of critical quality attributes from among the quality control parameters.
Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable processes, which are the candidates for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
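The capability analysis mentioned above can be sketched with the standard Cp/Cpk formulas. The tablet-weight data and specification limits below are invented for illustration, and the shorthand sigma level ≈ 3·Cpk is a common short-term approximation, not the authors' exact procedure.

```python
import statistics

def capability(samples, lsl, usl):
    """Process capability indices from sample data: Cp ignores centering,
    Cpk penalizes an off-center mean; Cpk >= 1.33 is a common threshold
    for calling a process capable."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Invented tablet-weight samples (mg) against a 95-105 mg specification.
weights = [99.2, 100.1, 100.8, 99.7, 100.3, 99.9, 100.5, 100.0]
cp, cpk = capability(weights, lsl=95.0, usl=105.0)
sigma_level = 3 * cpk  # rough short-term sigma level
```

Under these assumptions, comparing Cpk across critical quality attributes plays the role of the Pareto comparison in the abstract: the attribute with the lowest Cpk is the least capable and the first target for improvement.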
Streefland, M; Van Herpen, P F G; Van de Waterbeemd, B; Van der Pol, L A; Beuvery, E C; Tramper, J; Martens, D E; Toft, M
2009-10-15
A licensed pharmaceutical process is required to be executed within the validated ranges throughout the lifetime of product manufacturing. Changes to the process, especially for processes involving biological products, usually require the manufacturer to demonstrate that the safety and efficacy of the product remain unchanged by new or additional clinical testing. Recent changes in the regulations for pharmaceutical processing allow broader ranges of process settings to be submitted for regulatory approval, the so-called process design space, which means that a manufacturer can optimize the process within the submitted ranges after the product has entered the market, allowing flexible processes. In this article, the applicability of this concept of the process design space is investigated for the cultivation process step for a vaccine against whooping cough. An experimental design (DoE) is applied to investigate the ranges of critical process parameters that still result in a product that meets specifications. The on-line process data, including near infrared spectroscopy, are used to build a descriptive model of the processes used in the experimental design. Finally, the data of all processes are integrated in a multivariate batch monitoring model that represents the investigated process design space. This article demonstrates how the general principles of PAT and process design space can be applied to an undefined biological product such as a whole cell vaccine. The approach to model development described here allows on-line monitoring and control of cultivation batches in order to assure in real time that a process is running within the process design space.
Processing approaches to cognition: the impetus from the levels-of-processing framework.
Roediger, Henry L; Gallo, David A; Geraci, Lisa
2002-01-01
Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has been based on deterministic process conceptualization that uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
A model for process representation and synthesis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Thomas, R. H.
1971-01-01
The problem of representing groups of loosely connected processes is investigated, and a model for process representation useful for synthesizing complex patterns of process behavior is developed. There are three parts. The first part isolates the concepts which form the basis for the process representation model by focusing on questions such as: What is a process? What is an event? Should one process be able to restrict the capabilities of another? The second part develops a model for process representation which captures the concepts and intuitions developed in the first part. The model presented is able to describe both the internal structure of individual processes and the interface structure between interacting processes. Much of the model's descriptive power derives from its use of the notion of process state as a vehicle for relating the internal and external aspects of process behavior. The third part demonstrates by example that the model for process representation is a useful one for synthesizing process behavior patterns. In it the model is used to define a variety of interesting process behavior patterns. The dissertation closes by suggesting how the model could be used as a semantic base for a very potent language extension facility.
Radio Telescopes Will Add to Cassini-Huygens Discoveries
NASA Astrophysics Data System (ADS)
2004-12-01
When the European Space Agency's Huygens spacecraft makes its plunge into the atmosphere of Saturn's moon Titan on January 14, radio telescopes of the National Science Foundation's National Radio Astronomy Observatory (NRAO) will help international teams of scientists extract the maximum possible amount of irreplaceable information from an experiment unique in human history. Huygens is the 700-pound probe that has accompanied the larger Cassini spacecraft on a mission to thoroughly explore Saturn, its rings and its numerous moons. The Green Bank Telescope The Robert C. Byrd Green Bank Telescope CREDIT: NRAO/AUI/NSF (Click on image for GBT gallery) The Robert C. Byrd Green Bank Telescope (GBT) in West Virginia and eight of the ten telescopes of the continent-wide Very Long Baseline Array (VLBA), located at Pie Town and Los Alamos, NM, Fort Davis, TX, North Liberty, IA, Kitt Peak, AZ, Brewster, WA, Owens Valley, CA, and Mauna Kea, HI, will directly receive the faint signal from Huygens during its descent. Along with other radio telescopes in Australia, Japan, and China, the NRAO facilities will add significantly to the information about Titan and its atmosphere that will be gained from the Huygens mission. A European-led team will use the radio telescopes to make extremely precise measurements of the probe's position during its descent, while a U.S.-led team will concentrate on gathering measurements of the probe's descent speed and the direction of its motion. The radio-telescope measurements will provide data vital to gaining a full understanding of the winds that Huygens encounters in Titan's atmosphere. Currently, scientists know little about Titan's winds. Data from the Voyager I spacecraft's 1980 flyby indicated that east-west winds may reach 225 mph or more. North-south winds and possible vertical winds, while probably much weaker, may still be significant. 
There are competing theoretical models of Titan's winds, and the overall picture is best summarized as poorly understood. Predictions of where the Huygens probe will land range from nearly 250 miles east to nearly 125 miles west of the point where its parachute first deploys, depending on which wind model is used. What actually happens to the probe as it makes its parachute descent through Titan's atmosphere will give scientists their best-ever opportunity to learn about Titan's winds. During its descent, Huygens will transmit data from its onboard sensors to Cassini, the "mother ship" that brought it to Titan. Cassini will then relay the data back to Earth. However, the large radio telescopes will be able to receive the faint (10-watt) signal from Huygens directly, even at a distance of nearly 750 million miles. This will not be done to duplicate the data collection, but to generate new data about Huygens' position and motions through direct measurement. Measurements of the Doppler shift in the frequency of Huygens' radio signal made from the Cassini spacecraft, in an experiment led by Mike Bird of the University of Bonn, will largely give information about the speed of Titan's east-west winds. A team led by scientists at NASA's Jet Propulsion Laboratory in Pasadena, CA, will measure the Doppler shift in the probe's signal relative to Earth. These additional Doppler measurements from the Earth-based radio telescopes will provide important data needed to learn about the north-south winds. "Adding the ground-based telescopes to the experiment will not only help confirm the data we get from the Cassini orbiter but also will allow us to get a much more complete picture of the winds on Titan," said William Folkner, a JPL scientist. 
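The Doppler wind retrieval described above rests on the non-relativistic Doppler relation between the measured carrier shift and the line-of-sight velocity. The S-band carrier value below is an assumed figure for illustration, not taken from the mission specification:

```latex
v_r = c\,\frac{f_{\mathrm{rx}} - f_0}{f_0},
\qquad
\Delta f = \frac{f_0\, v_r}{c}
\approx \frac{(2.04\times 10^{9}\,\mathrm{Hz})(1\,\mathrm{m/s})}{3\times 10^{8}\,\mathrm{m/s}}
\approx 6.8\,\mathrm{Hz}.
```

Under this assumed carrier frequency, each metre per second of wind along the line of sight shifts the carrier by only a few hertz, which is why precise frequency tracking of the faint 10-watt signal by large radio telescopes is essential.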
The VLBA The VLBA CREDIT: NRAO/AUI/NSF (Click on image for VLBA gallery) Another team, led by scientists from the Joint Institute for Very Long Baseline Interferometry in Europe (JIVE), in Dwingeloo, The Netherlands, will use a world-wide network of radio telescopes, including the NRAO telescopes, to track the probe's trajectory with unprecedented accuracy. They expect to measure the probe's position within two-thirds of a mile (1 kilometer) at a distance of nearly 750 million miles. "That's like being able to sit in your back yard and watch the ball in a ping-pong game being played on the Moon," said Leonid Gurvits of JIVE. Both the JPL and JIVE teams will record the data collected by the radio telescopes and process it later. In the case of the Doppler measurements, some real-time information may be available, depending on the strength of the signal, but the scientists on this team also plan to do their detailed analysis on recorded data. The JPL team is utilizing special instrumentation from the Deep Space Network called Radio Science Receivers. One will be loaned to the GBT and another to the Parkes radio observatory. "This is the same instrument that allowed us to support the challenging communications during the landing of the Spirit and Opportunity Mars rovers as well as the Cassini Saturn Orbit Insertion when the received radio signal was very weak," said Sami Asmar, the JPL scientist responsible for the data recording. When the Galileo spacecraft's probe entered Jupiter's atmosphere in 1995, a JPL team used the NSF's Very Large Array (VLA) radio telescope in New Mexico to directly track the probe's signal. Adding the data from the VLA to that experiment dramatically improved the accuracy of the wind-speed measurements. "The Galileo probe gave us a surprise. Contrary to some predictions, we learned that Jupiter's winds got stronger as we went deeper into its atmosphere. 
That tells us that those deeper winds are not driven entirely by sunlight, but also by heat coming up from the planet's core. If we get lucky at Titan, we'll get surprises there, too," said Robert Preston, another JPL scientist. The Huygens probe is a spacecraft built by the European Space Agency (ESA). In addition to the NRAO telescopes, the JPL Doppler Wind Experiment will use the Australia Telescope National Facility and other radio telescopes in Parkes, Mopra, and Ceduna, Australia; Hobart, Tasmania; Urumqi and Shanghai, China; and Kashima, Japan. The positional measurements are a project led by JIVE and involving ESA, the Netherlands Foundation for Research in Astronomy, the University of Bonn, Helsinki University of Technology, JPL, the Australia Telescope National Facility, the National Astronomical Observatories of China, the Shanghai Astronomical Observatory, and the National Institute for Communication Technologies in Kashima, Japan. The Joint Institute for VLBI in Europe is funded by the national research councils, national facilities and institutes of The Netherlands (NWO and ASTRON), the United Kingdom (PPARC), Italy (CNR), Sweden (Onsala Space Observatory, National Facility), Spain (IGN) and Germany (MPIfR). The European VLBI Network is a joint facility of European, Chinese, South African and other radio astronomy institutes funded by their national research councils. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
Process and Post-Process: A Discursive History.
ERIC Educational Resources Information Center
Matsuda, Paul Kei
2003-01-01
Examines the history of process and post-process in composition studies, focusing on ways in which terms such as "current-traditional rhetoric," "process," and "post-process" have contributed to the discursive construction of reality. Argues that use of the term post-process in the context of second language writing needs to be guided by a…
Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.
Böttcher, Björn
2010-12-03
We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian Motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian Motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian Motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique allows Monte Carlo methods to be applied to Feller processes.
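The idea of state space dependent mixing can be caricatured with an Euler scheme in which the increment law's scale depends on the current state. The drift-free Gaussian increments and the particular scale function below are invented for illustration; the paper's construction covers general Lévy increments and comes with existence results this sketch ignores.

```python
import math
import random

def feller_path(x0=0.0, n=1000, dt=0.01, seed=7):
    """Euler scheme for a toy Feller-type process: each increment is
    Gaussian with a scale that depends on the current state, i.e. state
    space dependent mixing in its simplest form (illustrative scale)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        scale = 0.5 + 1.0 / (1.0 + x * x)  # larger fluctuations near x = 0
        x += rng.gauss(0.0, scale * math.sqrt(dt))
        path.append(x)
    return path

path = feller_path()
```

If the scale were constant the sketch would reduce to Brownian motion; making it state dependent is exactly the spatial inhomogeneity that distinguishes Feller processes from Lévy processes, and repeating the simulation over many seeds gives the Monte Carlo usage the abstract mentions.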
AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly affects its understandability and quality. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective at streamlining the process than related approaches.
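A toy version of a role-based fitness function might look as follows; the cohesion and coupling definitions and the weight `alpha` are illustrative assumptions, not the paper's actual metric.

```python
def role_fitness(roles, handovers, alpha=0.5):
    """Toy fitness in the spirit of a role-based complexity metric.

    roles: dict mapping role -> set of activities performed by that role.
    handovers: list of (activity_a, activity_b) pairs where control
    passes between consecutive activities in the event log.
    Cohesion rewards handovers that stay inside one role; coupling
    penalises handovers that cross role boundaries.
    """
    activity_role = {a: r for r, acts in roles.items() for a in acts}
    if not handovers:
        return 0.0
    cross = sum(1 for a, b in handovers
                if activity_role.get(a) != activity_role.get(b))
    coupling = cross / len(handovers)   # fraction of cross-role handovers
    cohesion = 1.0 - coupling           # complementary within-role fraction
    return alpha * cohesion - (1 - alpha) * coupling

# Hypothetical two-role process: one within-role and one cross-role handover.
roles = {"clerk": {"register", "check"}, "manager": {"approve"}}
log = [("register", "check"), ("check", "approve")]
score = role_fitness(roles, log)
```

A genetic algorithm would evaluate candidate role assignments with such a function and keep the simpler (higher-fitness) ones.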
Electrotechnologies to process foods
USDA-ARS?s Scientific Manuscript database
Electrical energy is being used to process foods. In conventional food processing plants, electricity drives mechanical devices and controls the degree of process. In recent years, several processing technologies are being developed to process foods directly with electricity. Electrotechnologies use...
Challenges associated with the implementation of the nursing process: A systematic review.
Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan
2015-01-01
The nursing process is a scientific approach to the provision of quality nursing care. In practice, however, its implementation faces numerous challenges. Knowing these challenges allows nursing processes to be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. To retrieve and review related studies in this field, the databases Iran medix, SID, Magiran, PUBMED, Google Scholar, and ProQuest were searched using the main keywords "nursing process" and "nursing process systematic review". Articles were retrieved in three steps: searching by keywords, reviewing the results against inclusion criteria, and final retrieval and assessment of available full texts. Systematic assessment of the articles showed several challenges in implementing the nursing process. An intangible understanding of the concept of the nursing process, differing views of the process, nurses' lack of knowledge and awareness regarding its execution, limited support from management systems, and problems with recording the nursing process were the main challenges extracted from the literature. Of these, the intangible understanding of the concept of the nursing process was identified as the main challenge. To minimize it, preparing facilitators for implementation and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses.
Automated synthesis of image processing procedures using AI planning techniques
NASA Technical Reports Server (NTRS)
Chien, Steve; Mortensen, Helen
1994-01-01
This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) system (Chien 1994), which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985; Pemberthy & Weld, 1992; Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). MVP allows the user to specify image processing requirements in terms of the various types of correction required. Given this information, MVP derives the unspecified but required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program, which can then be run to fill the processing request.
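As a rough sketch of the kind of planning MVP performs, the following breadth-first planner chains correction operators from an initial image state to a goal state. The operator names and state facts are invented for illustration; the real system uses considerably richer AI planning techniques.

```python
from collections import deque

def plan_processing(initial, goal, operators):
    """Breadth-first forward search over image-processing operators.

    operators: list of (name, preconditions, effects), where
    preconditions and effects are sets of state facts.  Returns the
    shortest operator sequence turning `initial` into a state that
    contains every goal fact, or None if no plan exists.
    """
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if set(goal) <= state:
            return steps
        for name, pre, eff in operators:
            if set(pre) <= state:
                nxt = frozenset(state | set(eff))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

# Hypothetical VICAR-style correction steps.
ops = [
    ("radiometric_correct", {"raw"}, {"radiometric"}),
    ("geometric_correct", {"radiometric"}, {"geometric"}),
    ("enhance", {"geometric"}, {"enhanced"}),
]
plan = plan_processing({"raw"}, {"enhanced"}, ops)
```

The returned plan lists the derived intermediate steps in order, mirroring how MVP fills in processing steps the user left unspecified.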
NASA Astrophysics Data System (ADS)
Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal
2013-07-01
The various process parameters affecting the quality characteristics of the shock absorber were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the Taguchi method reduces defects substantially, a genetic algorithm is applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
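The Taguchi ranking step can be sketched with the standard smaller-the-better signal-to-noise ratio; the defect counts and parameter levels below are hypothetical, not the paper's experimental data.

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio in dB.

    S/N = -10 * log10(mean of squared responses); a larger S/N means
    fewer or smaller defects, so parameter levels are ranked by S/N.
    """
    msq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(msq)

# Hypothetical defect counts for two levels of one painting parameter.
level_1 = [3.0, 2.0, 4.0]
level_2 = [1.0, 2.0, 1.0]
sn1 = sn_smaller_is_better(level_1)
sn2 = sn_smaller_is_better(level_2)
better = "level_2" if sn2 > sn1 else "level_1"
```

Repeating this per parameter over an orthogonal array gives the Taguchi-optimal settings, which a genetic algorithm can then refine further.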
Laadan, Oren; Nieh, Jason; Phung, Dan
2012-10-02
Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
NASA Astrophysics Data System (ADS)
Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang
2010-12-01
A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the method can judge the endpoint of the Cu CMP process.
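The Kalman-innovation idea can be sketched with a scalar random-walk filter. The state model, noise variances, and thresholding rule below are illustrative assumptions rather than the paper's exact design, and the wavelet denoising step is omitted.

```python
def kalman_innovations(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter with a random-walk state model.

    Returns the innovation sequence (measurement minus one-step
    prediction).  A persistent jump in the innovations signals a change
    in the underlying friction level -- the endpoint cue.
    """
    x, p = measurements[0], 1.0
    innovations = []
    for z in measurements[1:]:
        p = p + q                # predict: state is a slow random walk
        nu = z - x               # innovation against the predicted level
        k = p / (p + r)          # Kalman gain
        x = x + k * nu           # state update
        p = (1.0 - k) * p        # covariance update
        innovations.append(nu)
    return innovations

def detect_endpoint(innovations, threshold, run=3):
    """Index of the first of `run` consecutive innovations above threshold."""
    count = 0
    for i, nu in enumerate(innovations):
        count = count + 1 if abs(nu) > threshold else 0
        if count >= run:
            return i - run + 1
    return None

# A step in an illustrative, noise-free friction signal at sample 50.
sig = [1.0] * 50 + [2.0] * 20
endpoint = detect_endpoint(kalman_innovations(sig), threshold=0.5)
```

Requiring a short run of large innovations, rather than a single one, makes the detector robust to isolated spikes.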
Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R
2018-06-01
Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed by pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
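The Systems Factorial Technology diagnostic behind such analyses, the survivor interaction contrast, is computable directly from response-time distributions. A minimal sketch follows; the response times are synthetic, not the study's data.

```python
def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return sum(1 for rt in rts if rt > t) / len(rts)

def sic(t, rt_hh, rt_hl, rt_lh, rt_ll):
    """Survivor interaction contrast of Systems Factorial Technology.

    rt_xy: response times when the top half has salience x and the
    bottom half salience y (H = high, L = low).  The sign pattern of
    SIC(t) across t diagnoses serial, parallel, or coactive processing.
    """
    return (survivor(rt_ll, t) - survivor(rt_lh, t)
            - survivor(rt_hl, t) + survivor(rt_hh, t))

# Synthetic response times (seconds) for the four salience conditions.
value = sic(1.5, rt_hh=[1, 1], rt_hl=[2, 2], rt_lh=[2, 2], rt_ll=[3, 3])
```

In practice SIC(t) is evaluated over a grid of t values and its shape (e.g. entirely negative vs. S-shaped) is compared against the architecture predictions.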
Fuzzy image processing in sun sensor
NASA Technical Reports Server (NTRS)
Mobasser, S.; Liebe, C. C.; Howard, A.
2003-01-01
This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.
DESIGNING ENVIRONMENTAL, ECONOMIC AND ENERGY EFFICIENT CHEMICAL PROCESSES
The design and improvement of chemical processes can be very challenging. The earlier energy conservation, process economics and environmental aspects are incorporated into the process development, the easier and less expensive it is to alter the process design. Process emissio...
Reversing the conventional leather processing sequence for cleaner leather production.
Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari
2006-02-01
Conventional leather processing generally involves a combination of single and multistep processes that employ, as well as expel, various biological, inorganic, and organic materials. It involves some 14-15 steps and discharges a huge amount of pollutants, primarily because conventional leather processing employs a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals, and the pH profiles of the process, have been judiciously used to reverse the process steps. The reversed process avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process reduces COD and TS by 53 and 79%, respectively. Water consumption and discharge are reduced by 65 and 64%, respectively. The process also benefits from significant reductions in chemicals, time, power, and cost compared to the conventional process.
NASA Astrophysics Data System (ADS)
Schellenberger, Lauren Brownback
Group processing is a key principle of cooperative learning in which small groups discuss their strengths and weaknesses and set group goals or norms. However, group processing has not been well-studied at the post-secondary level or from a qualitative or mixed methods perspective. This mixed methods study uses a phenomenological framework to examine the experience of group processing for students in an undergraduate biology course for preservice teachers. The effect of group processing on students' attitudes toward future group work and group processing is also examined. Additionally, this research investigated preservice teachers' plans for incorporating group processing into future lessons. Students primarily experienced group processing as a time to reflect on past performance. Also, students experienced group processing as a time to increase communication among group members and become motivated for future group assignments. Three factors directly influenced students' experiences with group processing: (1) previous experience with group work, (2) instructor interaction, and (3) gender. Survey data indicated that group processing had a slight positive effect on students' attitudes toward future group work and group processing. Participants who were interviewed felt that group processing was an important part of group work and that it had increased their group's effectiveness as well as their ability to work effectively with other people. Participants held positive views on group work prior to engaging in group processing, and group processing did not alter their attitude toward group work. Preservice teachers who were interviewed planned to use group work and a modified group processing protocol in their future classrooms. They also felt that group processing had prepared them for their future professions by modeling effective collaboration and group skills.
Based on this research, a new model for group processing has been created which includes extensive instructor interaction and additional group processing sessions. This study offers a new perspective on the phenomenon of group processing and informs science educators and teacher educators on the effective implementation of this important component of small-group learning.
Properties of the Bivariate Delayed Poisson Process
1974-07-01
and Lewis (1972) in their Berkeley Symposium paper, and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried... Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres... process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a
The Application of Six Sigma Methodologies to University Processes: The Use of Student Teams
ERIC Educational Resources Information Center
Pryor, Mildred Golden; Alexander, Christine; Taneja, Sonia; Tirumalasetty, Sowmya; Chadalavada, Deepthi
2012-01-01
The first student Six Sigma team (activated under a QEP Process Sub-team) evaluated the course and curriculum approval process. The goal was to streamline the process and thereby shorten process cycle time and reduce confusion about how the process works. Members of this team developed flowcharts on how the process is supposed to work (by…
Impact of Radio Frequency Identification (RFID) on the Marine Corps’ Supply Process
2006-09-01
Hypothetical Improvement Using a Real-Time Order Processing System Vice a Batch Order Processing System ... 3. As-Is: The Current... Processing System Vice a Batch Order Processing System ... V. RESULTS ... A. SIMULATION... Time: Hypothetical Improvement Using a Real-Time Order Processing System Vice a Batch Order Processing System ... 3. As-Is: The
Pletzer, Belinda; Scheuringer, Andrea; Scherndl, Thomas
2017-09-05
Sex differences have been reported for a variety of cognitive tasks and related to the use of different cognitive processing styles in men and women. It was recently argued that these processing styles share some characteristics across tasks: male approaches are oriented towards holistic stimulus aspects and female approaches towards stimulus details. In that respect, sex-dependent cognitive processing styles share similarities with attentional global-local processing. A direct relationship between cognitive processing and global-local processing has, however, not previously been established. In the present study, 49 men and 44 women completed a Navon paradigm and a Kimchi-Palmer task, as well as a navigation task and a verbal fluency task, with the goal of relating the global advantage (GA) effect, a measure of global processing, to holistic processing styles in both tasks. Indeed, participants with larger GA effects displayed more holistic processing during spatial navigation and phonemic fluency. However, the relationship to cognitive processing styles was modulated by the specific condition of the Navon paradigm, as well as the sex of participants. Thus, different types of global-local processing play different roles for cognitive processing in men and women.
21 CFR 113.83 - Establishing scheduled processes.
Code of Federal Regulations, 2012 CFR
2012-04-01
... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...
21 CFR 113.83 - Establishing scheduled processes.
Code of Federal Regulations, 2014 CFR
2014-04-01
... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...
21 CFR 113.83 - Establishing scheduled processes.
Code of Federal Regulations, 2013 CFR
2013-04-01
... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...
A mathematical study of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
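A minimal simulation of the stated structure, x(t) = a(t)·g(t) + m(t), might use Ornstein-Uhlenbeck dynamics for the Gaussian and amplitude components. The OU choice, the time constants, and holding the mean-value process constant are assumptions for illustration only, not the paper's specification.

```python
import math
import random

def turbulence_sample(n, dt, tau_g=1.0, tau_a=10.0, seed=0):
    """Sample path of x(t) = a(t) * g(t) + m(t).

    g: zero-mean, unit-variance Gaussian (Ornstein-Uhlenbeck) process;
    a: slowly varying random amplitude (|OU| with a longer time
       constant); m: mean-value process, held at a single random draw.
    """
    rng = random.Random(seed)
    g = a = 0.0
    m = rng.gauss(0.0, 1.0)   # one draw of the independent mean-value process
    out = []
    for _ in range(n):
        # Euler updates of the two OU components.
        g += (-g / tau_g) * dt + math.sqrt(2 * dt / tau_g) * rng.gauss(0, 1)
        a += (-a / tau_a) * dt + math.sqrt(2 * dt / tau_a) * rng.gauss(0, 1)
        out.append(abs(a) * g + m)
    return out

path = turbulence_sample(500, 0.01)
```

The slow amplitude process modulates the fast Gaussian process, producing the bursty, non-Gaussian behavior such turbulence models aim for.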
Standard services for the capture, processing, and distribution of packetized telemetry data
NASA Technical Reports Server (NTRS)
Stallings, William H.
1989-01-01
Standard functional services for the capture, processing, and distribution of packetized data are discussed, with particular reference to the future implementation of packet processing systems such as those for Space Station Freedom. The major functions fall under the following categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.
Yoo, Sooyoung; Cho, Minsu; Kim, Eunhye; Kim, Seok; Sim, Yerim; Yoo, Donghyun; Hwang, Hee; Song, Minseok
2016-04-01
Many hospitals are increasing their efforts to improve processes because processes play an important role in enhancing work efficiency and reducing costs. However, to date, a quantitative tool has not been available to examine the before and after effects of processes and environmental changes, other than the use of indirect indicators, such as mortality rate and readmission rate. This study used process mining technology to analyze process changes based on changes in the hospital environment, such as the construction of a new building, and to measure the effects of environmental changes in terms of consultation wait time, time spent per task, and outpatient care processes. Using process mining technology, electronic health record (EHR) log data of outpatient care before and after constructing a new building were analyzed, and the effectiveness of the technology in terms of the process was evaluated. Using the process mining technique, we found that the total time spent in outpatient care did not increase significantly compared to that before the construction of a new building, considering that the number of outpatients increased, and the consultation wait time decreased. These results suggest that the operation of the outpatient clinic was effective after changes were implemented in the hospital environment. We further identified improvements in processes using the process mining technique, thereby demonstrating the usefulness of this technique for analyzing complex hospital processes at a low cost. This study confirmed the effectiveness of process mining technology at an actual hospital site. In future studies, the use of process mining technology will be expanded by applying this approach to a larger variety of process change situations. Copyright © 2016. Published by Elsevier Ireland Ltd.
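The kind of wait-time measurement described can be sketched from a minimal event log; the activity names and log layout below are hypothetical, not the hospital's actual EHR schema.

```python
from datetime import datetime

def wait_times(event_log, from_act, to_act):
    """Per-case wait between completing `from_act` and starting `to_act`.

    event_log: list of (case_id, activity, timestamp) tuples -- the
    minimal information a process-mining tool extracts from EHR logs.
    Returns a dict mapping case_id to the wait in minutes.
    """
    starts, ends = {}, {}
    for case, act, ts in event_log:
        if act == from_act:
            ends[case] = ts
        elif act == to_act and case not in starts:
            starts[case] = ts
    return {c: (starts[c] - ends[c]).total_seconds() / 60.0
            for c in starts if c in ends}

# Hypothetical outpatient log: registration followed by consultation.
log = [
    ("p1", "registration", datetime(2016, 4, 1, 9, 0)),
    ("p1", "consultation", datetime(2016, 4, 1, 9, 25)),
    ("p2", "registration", datetime(2016, 4, 1, 9, 10)),
    ("p2", "consultation", datetime(2016, 4, 1, 9, 50)),
]
waits = wait_times(log, "registration", "consultation")
```

Comparing such per-case waits before and after an environmental change is the quantitative before/after analysis the study performs at scale.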
Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Miller, L.
1974-01-01
A two-year study of the major process variables associated with the manufacturing process for sealed, nickel-cadmium, aerospace cells is summarized. Effort was directed toward identifying the major process variables associated with a manufacturing process, experimentally assessing each variable's effect, and imposing the necessary changes (optimization) and controls for the critical process variables to improve results and uniformity. A critical process variable associated with the sintered nickel plaque manufacturing process was identified as the manual forming operation. Critical process variables identified with the positive electrode impregnation/polarization process were impregnation solution temperature, free acid content, vacuum impregnation, and sintered plaque strength. Positive and negative electrodes were identified as a major source of carbonate contamination in sealed cells.
Monitoring autocorrelated process: A geometric Brownian motion process approach
NASA Astrophysics Data System (ADS)
Li, Lee Siaw; Djauhari, Maman A.
2013-09-01
Autocorrelated processes are common in modern industrial process control practice. The current practice in autocorrelated process control is to eliminate the autocorrelation by using an appropriate model, such as a Box-Jenkins model, and then to conduct process control based on the residuals. In this paper we show that many time series are governed by a geometric Brownian motion (GBM) process. In this case, by using the properties of a GBM process, we need only an appropriate transformation and a model of the transformed data to meet the conditions required by traditional process control. An industrial example of a cocoa powder production process in a Malaysian company is presented and discussed to illustrate the advantages of the GBM approach.
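The GBM-based transformation can be sketched as follows: the log returns of a GBM series are i.i.d. normal, so conventional control limits apply to the transformed data rather than to the autocorrelated series itself. The chart constant L and the toy series are illustrative, not the paper's method in detail.

```python
import math

def gbm_control_limits(series, L=3.0):
    """Individuals-chart limits for a series assumed to follow a GBM.

    If S_t is a geometric Brownian motion, the log returns
    r_t = ln(S_t / S_{t-1}) are i.i.d. normal, so ordinary control
    limits apply to r_t.  Returns (mean, lcl, ucl, flagged indices).
    """
    r = [math.log(b / a) for a, b in zip(series, series[1:])]
    mean = sum(r) / len(r)
    sd = math.sqrt(sum((x - mean) ** 2 for x in r) / (len(r) - 1))
    lcl, ucl = mean - L * sd, mean + L * sd
    # Flag the (1-based) returns falling outside the limits.
    flagged = [i + 1 for i, x in enumerate(r) if not lcl <= x <= ucl]
    return mean, lcl, ucl, flagged

series = [1.0, 1.1, 1.0, 1.1, 1.0]   # toy in-control production data
mean, lcl, ucl, flagged = gbm_control_limits(series)
```

In practice the limits would be estimated from an in-control reference period and then applied to new observations.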
Meta-control of combustion performance with a data mining approach
NASA Astrophysics Data System (ADS)
Song, Zhe
Large-scale combustion processes are complex and pose challenges for optimizing performance. Traditional approaches based on thermodynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes and, ultimately, optimizing combustion performance. However, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, and obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution to optimize the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
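The data-driven optimization loop can be sketched with a (1+1) evolution strategy over a surrogate performance model. The surrogate, bounds, and step size below are toy assumptions standing in for a model mined from historical process data.

```python
import random

def evolve_setpoint(model, x0, bounds, iters=200, step=0.1, seed=0):
    """(1+1) evolution strategy over a data-driven performance model.

    model maps an operating point (list of settings) to predicted
    performance.  Each iteration mutates the incumbent with Gaussian
    noise, clips to bounds, and keeps the mutant only if it scores
    higher.
    """
    rng = random.Random(seed)
    best, best_f = list(x0), model(x0)
    for _ in range(iters):
        cand = [min(hi, max(lo, v + rng.gauss(0.0, step)))
                for v, (lo, hi) in zip(best, bounds)]
        f = model(cand)
        if f > best_f:
            best, best_f = cand, f
    return best, best_f

def surrogate(x):
    # Toy stand-in for a mined model: peak performance at (0.6, 0.3).
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2)

best, best_f = evolve_setpoint(surrogate, [0.0, 0.0], [(0.0, 1.0)] * 2)
```

Because the mined model drifts over time, the loop would be rerun periodically against a freshly identified model.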
5 CFR 1653.13 - Processing legal processes.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...
5 CFR 1653.13 - Processing legal processes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...
A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji
The capabilities and complexity of manufacturing systems are increasing, driving the move toward an integrated manufacturing environment. The availability of alternative process plans is a key factor for the integration of design, process planning, and scheduling. This paper describes an algorithm for the generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts. The algorithm is therefore applicable to very large process plan networks and can also search wide areas of the network based on user requirements. It can generate alternative process plans and select a suitable one based on the objective functions.
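The lazy, incomplete search idea can be sketched as a beam-limited best-first enumeration that expands the process plan network on demand. The states, operations, and costs below are hypothetical, and the beam rule is only one possible way to make the search incomplete.

```python
import heapq

def best_first_plans(start, goal, successors, cost, max_plans=3, beam=50):
    """Lazily enumerate low-cost process plans.

    successors(state) yields (operation, next_state) pairs, so nodes of
    the process plan network are created only when expanded -- the whole
    network never has to exist up front.  The beam cap makes the search
    incomplete but tractable on very large networks.
    """
    frontier = [(0.0, 0, start, [])]   # (cost, tie-breaker, state, ops)
    counter = 1
    plans = []
    while frontier and len(plans) < max_plans:
        c, _, state, ops = heapq.heappop(frontier)
        if state == goal:
            plans.append((c, ops))
            continue
        for op, nxt in successors(state):
            if len(frontier) < beam:   # incompleteness: excess nodes dropped
                heapq.heappush(frontier,
                               (c + cost(op), counter, nxt, ops + [op]))
                counter += 1
    return plans

# Hypothetical machining network: two routes from raw stock to a finished part.
graph = {
    "raw": [("mill", "milled"), ("cast", "near_net")],
    "milled": [("drill", "done")],
    "near_net": [("finish", "done")],
}
costs = {"mill": 2.0, "cast": 3.0, "drill": 1.0, "finish": 1.5}
plans = best_first_plans("raw", "done", lambda s: graph.get(s, []), costs.get,
                         max_plans=2)
```

Returning several ranked plans, rather than one, is what lets a scheduler pick among alternatives by its own objective functions.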
O'Callaghan, Sean; De Souza, David P; Isaac, Andrew; Wang, Qiao; Hodkinson, Luke; Olshansky, Moshe; Erwin, Tim; Appelbe, Bill; Tull, Dedreia L; Roessner, Ute; Bacic, Antony; McConville, Malcolm J; Likić, Vladimir A
2012-05-30
Gas chromatography-mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools suitable for scripting of high-throughput, customized processing pipelines. PyMS comprises a library of functions for processing of instrument GC-MS data, developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale across multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios, PyMS performs as well as or better than leading software packages.
We demonstrate data processing scenarios that are simple to implement in PyMS yet difficult to achieve with many conventional GC-MS data processing packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface.
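PyMS's actual API is not reproduced here; instead, a generic local-maximum peak picker sketches the core peak-detection step such pipelines script around. The window size, threshold, and trace are illustrative assumptions.

```python
def detect_peaks(intensities, min_height, half_window=2):
    """Local-maximum peak picking on a 1-D intensity trace.

    A point is a peak if it is the unique maximum within +/-
    half_window samples and exceeds min_height.  Real GC-MS pipelines
    add smoothing, baseline correction, and deconvolution around a
    core step like this one.
    """
    peaks = []
    n = len(intensities)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = intensities[lo:hi]
        v = intensities[i]
        if v >= min_height and v == max(window) and window.count(v) == 1:
            peaks.append(i)
    return peaks

# Toy intensity trace with two peaks above the threshold.
trace = [0, 1, 5, 2, 0, 0, 3, 9, 3, 1, 0]
peaks = detect_peaks(trace, min_height=4)
```

In a batch pipeline this function would be applied to every sample in a loop, which is exactly the scripting style the abstract argues for.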
Wong, Quincy J J; Moulds, Michelle L
2012-12-01
Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, in socially anxious individuals, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Mood, cognition, and behavioural measures were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination. Hence, these results are novel and will need to be replicated. These findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.
Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L
2012-11-01
Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.
Modeling interdependencies between business and communication processes in hospitals.
Brigl, Birgit; Wendt, Thomas; Winter, Alfred
2003-01-01
The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, no tools are available that specialize in modeling information-driven business processes and their consequences for the communication between information-processing tools. We therefore present an approach that facilitates the representation and analysis of business processes, the resulting communication processes between application components, and their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information-processing infrastructure that hinder the smooth implementation of the business processes.
Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick
2014-12-01
As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, George; Lee, Adam
In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for integration and electrical interconnection of the fully-depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and we developed the outline of a process option that included a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz’s CA18HJ process, using standard and circular-enclosed transistors.
Face to face with emotion: holistic face processing is modulated by emotional state.
Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa
2012-01-01
Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
5 CFR 581.203 - Information minimally required to accompany legal process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...
20 CFR 405.725 - Effect of expedited appeals process agreement.
Code of Federal Regulations, 2010 CFR
2010-04-01
... PROCESS FOR ADJUDICATING INITIAL DISABILITY CLAIMS Expedited Appeals Process for Constitutional Issues § 405.725 Effect of expedited appeals process agreement. After an expedited appeals process agreement is... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Effect of expedited appeals process agreement...
Common and distinct networks for self-referential and social stimulus processing in the human brain.
Herold, Dorrit; Spengler, Stephanie; Sajonz, Bastian; Usnich, Tatiana; Bermpohl, Felix
2016-09-01
Self-referential processing is a complex cognitive function involving a set of implicit and explicit processes, which complicates investigation of its distinct neural signature. The present study explores the functional overlap and dissociability of self-referential and social stimulus processing. We combined an established paradigm for explicit self-referential processing with an implicit social stimulus processing paradigm in one fMRI experiment to determine the neural effects of self-relatedness and social processing within one study. Overlapping activations were found in the orbitofrontal cortex and in the intermediate part of the precuneus. Stimuli judged as self-referential specifically activated the posterior cingulate cortex, the ventral medial prefrontal cortex, extending into anterior cingulate cortex and orbitofrontal cortex, the dorsal medial prefrontal cortex, the ventral and dorsal lateral prefrontal cortex, the left inferior temporal gyrus, and occipital cortex. Social processing specifically involved the posterior precuneus and bilateral temporo-parietal junction. Taken together, our data show, first, common networks for both processes in the medial prefrontal and medial parietal cortex and, second, functional differentiations between self-referential and social processing: an anterior-posterior gradient for self-referential versus social processing within the medial parietal cortex, with specific activations for self-referential processing in the medial and lateral prefrontal cortex and for social processing in the temporo-parietal junction.
Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B
2007-03-01
The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding of, systematically appraise, and identify areas of improvement in a business process. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, the lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.
Use of Analogies in the Study of Diffusion
ERIC Educational Resources Information Center
Letic, Milorad
2014-01-01
Emergent processes, such as diffusion, are considered more difficult to understand than direct processes. In physiology, most processes are presented as direct processes, so emergent processes, when encountered, are even more difficult to understand. It has been suggested that, when studying diffusion, misconceptions about random processes are the…
Is Analytic Information Processing a Feature of Expertise in Medicine?
ERIC Educational Resources Information Center
McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.
2008-01-01
Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…
5 CFR 582.305 - Honoring legal process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...
5 CFR 582.305 - Honoring legal process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...
5 CFR 581.305 - Honoring legal process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...
5 CFR 581.305 - Honoring legal process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...
5 CFR 582.305 - Honoring legal process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...
5 CFR 581.305 - Honoring legal process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...
Articulating the Resources for Business Process Analysis and Design
ERIC Educational Resources Information Center
Jin, Yulong
2012-01-01
Effective process analysis and modeling are important phases of the business process management lifecycle. When many activities and multiple resources are involved, it is very difficult to build a correct business process specification. This dissertation provides a resource perspective of business processes. It aims at a better process analysis…
An Integrated Model of Emotion Processes and Cognition in Social Information Processing.
ERIC Educational Resources Information Center
Lemerise, Elizabeth A.; Arsenio, William F.
2000-01-01
Interprets literature on contributions of social cognitive and emotion processes to children's social competence in the context of an integrated model of emotion processes and cognition in social information processing. Provides neurophysiological and functional evidence for the centrality of emotion processes in personal-social decision making.…
2010-04-01
NRL Stennis Space Center (NRL-SSC) for further processing using the NRL-SSC Automated Processing System (APS). APS was developed for processing...have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch includes several
DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.
A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. Tables for the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
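The geometric Poisson distribution tabulated in this report is the sum of a Poisson number of geometrically distributed counts. A small Monte Carlo sketch (parameter values, seed, and function names are our own, for illustration) shows how such tables can be sanity-checked against the theoretical mean λ/p:

```python
import math
import random

def poisson(lam, rng):
    """Poisson variate via Knuth's multiplication method (stdlib only)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def geometric(p, rng):
    """Geometric variate on {1, 2, ...} by inversion."""
    return int(math.log(rng.random()) / math.log(1.0 - p)) + 1

def geometric_poisson(lam, p, rng):
    """Sum of a Poisson(lam) number of Geometric(p) counts."""
    return sum(geometric(p, rng) for _ in range(poisson(lam, rng)))

rng = random.Random(42)
samples = [geometric_poisson(2.0, 0.5, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # theoretical mean is lam / p = 4.0
```

The same compound construction extends to the other discrete compound Poisson processes the report introduces: only the distribution of the individual summands changes.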
Management of processes of electrochemical dimensional processing
NASA Astrophysics Data System (ADS)
Akhmetov, I. D.; Zakirova, A. R.; Sadykov, Z. B.
2017-09-01
In many industries, high-precision parts are produced from scarce, hard-to-machine materials. Such parts can often be formed only by non-contact processing, or with minimal mechanical force, for example by electrochemical machining. At the present stage of development of metal-working processes, the management and automation of electrochemical machining are important issues. This article presents some indicators and factors of the electrochemical machining process.
The Hyperspectral Imager for the Coastal Ocean (HICO): Sensor and Data Processing Overview
2010-01-20
backscattering coefficients, and others. Several of these software modules will be developed within the Automated Processing System (APS), a data... Automated Processing System (APS) NRL developed APS, which processes satellite data into ocean color data products. APS is a collection of methods...used for ocean color processing which provide the tools for the automated processing of satellite imagery [1]. These tools are in the process of
[Study on culture and philosophy of processing of traditional Chinese medicines].
Yang, Ming; Zhang, Ding-Kun; Zhong, Ling-Yun; Wang, Fang
2013-07-01
According to cultural views and philosophical thoughts, this paper studies the cultural origin, modes of thinking, core principles, and general rules and methods of processing of traditional Chinese medicines; traces the culture and history of processing, including its generation and evolution, its accumulation of experience, and its core values; and summarizes the basic principles of processing, which are guided by holistic, objective, dynamic, balanced, and appropriateness-oriented thinking. The aims are to propagate the cultural characteristics and philosophical wisdom of traditional Chinese medicine processing, to promote the inheritance and development of processing, and to ensure the maximum therapeutic value of Chinese medicines in the clinic.
Containerless automated processing of intermetallic compounds and composites
NASA Technical Reports Server (NTRS)
Johnson, D. R.; Joslin, S. M.; Reviere, R. D.; Oliver, B. F.; Noebe, R. D.
1993-01-01
An automated containerless processing system has been developed to directionally solidify high temperature materials, intermetallic compounds, and intermetallic/metallic composites. The system incorporates a wide range of ultra-high purity chemical processing conditions. The utilization of image processing for automated control negates the need for temperature measurements for process control. The list of recent systems that have been processed includes Cr, Mo, Mn, Nb, Ni, Ti, V, and Zr containing aluminides. Possible uses of the system, process control approaches, and properties and structures of recently processed intermetallics are reviewed.
A continuous process for the development of Kodak Aerochrome Infrared Film 2443 as a negative
NASA Astrophysics Data System (ADS)
Klimes, D.; Ross, D. I.
1993-02-01
A process for the continuous dry-to-dry development of Kodak Aerochrome Infrared Film 2443 as a negative (CIR-neg) is described. The process is well suited for production processing of long film lengths. Chemicals from three commercial film processes are used with modifications. Sensitometric procedures are recommended for the monitoring of processing quality control. Sensitometric data and operational aerial exposures indicate that films developed in this process have approximately the same effective aerial film speed as films processed in the reversal process recommended by the manufacturer (Kodak EA-5). The CIR-neg process is useful when aerial photography is acquired for resources management applications which require print reproductions. Originals can be readily reproduced using conventional production equipment (electronic dodging) in black and white or color (color compensation).
Antibiotics with anaerobic ammonium oxidation in urban wastewater treatment
NASA Astrophysics Data System (ADS)
Zhou, Ruipeng; Yang, Yuanming
2017-05-01
The biofilter process is an integrated aerobic wastewater treatment process based on biological oxidation combined with design ideas from rapid sand filtration, purifying wastewater through filtration, adsorption, and biological action. Engineering examples show that the process is well suited to treating sewage and low-concentration industrial wastewater. The anaerobic ammonium oxidation (anammox) process, with its advantages of high efficiency and low consumption, has broad application prospects in biological nitrogen removal from wastewater and has become a research hotspot in practical wastewater treatment at home and abroad. This paper reviews the habitats and species diversity of anammox bacteria and the diversity of anammox process configurations, and compares the operating conditions of one-stage and two-stage processes. It focuses on laboratory research and engineering applications of anammox technology to various types of wastewater, including sludge digestion filtrate, landfill leachate, aquaculture wastewater, monosodium glutamate wastewater, fecal sewage, and high-salinity wastewater, covering their characteristics, research progress, and obstacles to application. Finally, we summarize potential problems of the anammox process in treating actual wastewater and propose that future research focus on in-depth study of the water-quality factors that inhibit anammox and their regulation, and on developing optimized combined processes on this basis.
Understanding scaling through history-dependent processes with collapsing sample space.
Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan
2015-04-28
History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those associated with complex systems, become more constrained as they unfold, meaning that their sample space, or set of possible outcomes, shrinks as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions, ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes on directed networks and to aging processes such as fragmentation. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized-critical processes.
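The SSR mechanism described in the abstract is straightforward to simulate: start a process at state N, jump to a uniformly chosen smaller state until reaching state 1, then restart, tallying the visited states. A minimal Python sketch (N, seed, and run count chosen arbitrarily) reproduces the Zipf-law visit frequencies p(i) ~ 1/i:

```python
import random

# Simulation of a sample-space-reducing (SSR) process: each step restricts
# the next draw to states strictly below the current one.

def ssr_run(n, rng):
    """One cascade from state n down to state 1; returns the visited states."""
    visited = []
    x = n
    while x > 1:
        x = rng.randint(1, x - 1)  # sample space shrinks at every step
        visited.append(x)
    return visited

rng = random.Random(7)
N, runs = 100, 20000
counts = [0] * (N + 1)
for _ in range(runs):
    for state in ssr_run(N, rng):
        counts[state] += 1

# Zipf check: state i is visited in about runs / i of the cascades,
# so counts fall off as 1/i in rank.
print(counts[1], counts[2])  # roughly 20000 and 10000
```

Adding noise, as the paper does, would amount to replacing some fraction of the restricted jumps with unrestricted uniform draws, which flattens the exponent in proportion to the mixing ratio.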
NASA Astrophysics Data System (ADS)
Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.
2013-10-01
In this study, the effects of processing temperature and vacuum application rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during the hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after the diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both the interlaminar slipping friction and the compaction behavior of the prepreg stacks were experimentally analyzed to show the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes in the defects before and after the cure process. The results show that C-shaped prepreg preforms with good forming quality can be achieved by increasing the processing temperature and reducing the vacuum application rate, which noticeably promote the prepreg interlaminar slipping process. The processing temperature and forming rate in the hot diaphragm forming process strongly influence the prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. The autoclave process is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.
Assessment of Advanced Coal Gasification Processes
NASA Technical Reports Server (NTRS)
McCarthy, John; Ferrall, Joseph; Charng, Thomas; Houseman, John
1981-01-01
This report represents a technical assessment of the following advanced coal gasification processes: AVCO High Throughput Gasification (HTG) Process; Bell Single-Stage High Mass Flux (HMF) Process; Cities Service/Rockwell (CS/R) Hydrogasification Process; Exxon Catalytic Coal Gasification (CCG) Process. Each process is evaluated for its potential to produce SNG from a bituminous coal. In addition to identifying the new technology these processes represent, key similarities/differences, strengths/weaknesses, and potential improvements to each process are identified. The AVCO HTG and the Bell HMF gasifiers share similarities with respect to: short residence time (SRT), high throughput rate, slagging, and syngas as the initial raw product gas. The CS/R Hydrogasifier is also SRT but is non-slagging and produces a raw gas high in methane content. The Exxon CCG gasifier is a long-residence-time, catalytic, fluid-bed reactor producing all of the raw product methane in the gasifier. The report makes the following assessments: 1) while each process has significant potential as a coal gasifier, the CS/R and Exxon processes are better suited for SNG production; 2) the Exxon process is the closest to a commercial level for near-term SNG production; and 3) the SRT processes require significant development including scale-up and turndown demonstration, char processing and/or utilization demonstration, and reactor control and safety features development.
Integrated Process Modeling-A Process Validation Life Cycle Companion.
Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-17
During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is also demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
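The Monte Carlo idea behind an IPM, sampling process-parameter variation and propagating it through stacked unit operations to an OOS probability for a CQA, can be sketched generically. Everything below (the two transfer functions, the parameter distributions, and the spec limit) is invented for illustration and is not taken from the study:

```python
import random

# Generic Monte Carlo sketch of an integrated process model (IPM):
# sample PPs for chained unit operations, propagate to a CQA, and
# estimate the out-of-specification (OOS) probability.

def unit_op_1(ph, rng):
    """Hypothetical capture step: yield degrades away from pH 7."""
    return 0.95 - 0.10 * abs(ph - 7.0) + rng.gauss(0.0, 0.01)

def unit_op_2(upstream_yield, temp, rng):
    """Hypothetical polishing step: purity depends on upstream yield and T."""
    return 100.0 * upstream_yield - 0.5 * abs(temp - 25.0) + rng.gauss(0.0, 1.0)

rng = random.Random(1)
SPEC_LOW = 90.0  # illustrative lower spec limit for the purity CQA
n, oos = 10000, 0
for _ in range(n):
    ph = rng.gauss(7.0, 0.2)     # PP variation entering unit operation 1
    temp = rng.gauss(25.0, 1.0)  # PP variation entering unit operation 2
    purity = unit_op_2(unit_op_1(ph, rng), temp, rng)
    if purity < SPEC_LOW:
        oos += 1

p_oos = oos / n
print(p_oos)  # estimated OOS probability
```

Because the unit operations are chained, widening the variation of an upstream parameter (here pH) shifts the whole downstream CQA distribution, which is exactly the interaction effect the IPM is meant to expose.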
Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I
2015-01-01
High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®
PROCESSING ALTERNATIVES FOR DESTRUCTION OF TETRAPHENYLBORATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, D; Thomas Peters, T; Samuel Fink, S
Two processes were chosen in the 1980s at the Savannah River Site (SRS) to decontaminate the soluble High Level Waste (HLW). The In Tank Precipitation (ITP) process (1,2) was developed at SRS for the removal of radioactive cesium and actinides from the soluble HLW. Sodium tetraphenylborate was added to the waste to precipitate cesium, and monosodium titanate (MST) was added to adsorb actinides, primarily uranium and plutonium. Two products of this process were a low activity waste stream and a concentrated organic stream containing cesium tetraphenylborate and actinides adsorbed on MST. A copper catalyzed acid hydrolysis process was built to process (3,4) the Tank 48H cesium tetraphenylborate waste in the SRS's Defense Waste Processing Facility (DWPF). Operation of the DWPF would have resulted in the production of benzene for incineration in SRS's Consolidated Incineration Facility. This process was abandoned together with the ITP process in 1998 due to high benzene levels in ITP caused by decomposition of excess sodium tetraphenylborate. Processing in ITP resulted in the production of approximately 1.0 million liters of HLW. SRS has chosen a solvent extraction process combined with adsorption of the actinides to decontaminate the soluble HLW stream (5). However, the waste in Tank 48H is incompatible with existing waste processing facilities. As a result, a processing facility is needed to disposition the HLW in Tank 48H. This paper describes the search for processing options by SRS task teams for the disposition of the waste in Tank 48H. In addition, attempts to develop a caustic hydrolysis process for in-tank destruction of tetraphenylborate are presented. Lastly, the development of both a caustic and an acidic copper catalyzed peroxide oxidation process is discussed.
NASA Astrophysics Data System (ADS)
Luqman, M.; Rosli, M. U.; Khor, C. Y.; Zambree, Shayfull; Jahidi, H.
2018-03-01
The crank arm is one of the important parts in a bicycle and is an expensive product due to the high cost of material and the production process. This research aims to investigate potential manufacturing processes for fabricating a composite bicycle crank arm and to describe an approach based on the analytic hierarchy process (AHP) that assists decision makers or manufacturing engineers in determining the most suitable process to employ at the early stage of product development, in order to reduce production cost. Four types of processes were considered, namely resin transfer molding (RTM), compression molding (CM), vacuum bag molding, and filament winding (FW). The analysis ranks these four processes for their suitability in the manufacturing of the bicycle crank arm based on five main selection factors and 10 sub-factors. The most suitable manufacturing process was determined by following the AHP steps, and a consistency test was performed to make sure the judgments were consistent during the comparison. The results indicated that compression molding was the most appropriate manufacturing process because it had the highest value (33.6%) among the manufacturing processes considered.
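The AHP ranking described above can be illustrated with a minimal sketch for a single criterion: a reciprocal pairwise-comparison matrix yields priority weights via its principal eigenvector, and a consistency ratio checks the judgments. The comparison values below are invented and do not reproduce the study's factor hierarchy:

```python
import numpy as np

# Hypothetical pairwise comparisons of four candidate processes
# (RTM, CM, vacuum bag molding, FW) on one criterion; values invented.
A = np.array([
    [1,   1/3, 2,   1/2],
    [3,   1,   4,   2  ],
    [1/2, 1/4, 1,   1/3],
    [2,   1/2, 3,   1  ],
], dtype=float)

# Priority vector: principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
ri = 0.90                      # Saaty's random index for n = 4
cr = ci / ri
print("weights:", w.round(3), "CR:", round(cr, 3))
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent; in a full AHP study, such a matrix is built for every criterion and the criterion-level weights are aggregated into the final ranking.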
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
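The two-phase scheme described, an offline backward Dynamic Programming sweep followed by online lookup of the stored controls, can be sketched as follows. The chain length, transformation function, and cost terms are invented for illustration:

```python
# Minimal backward-DP sketch over a two-process chain with discretized
# states and controls; the process models and costs are invented.
def transform(state, u):
    # toy linear process model: the control u shifts the product state
    return round(state + u)

states = range(0, 11)          # discretized product state
controls = [-2, -1, 0, 1, 2]
target = 7                     # quality requirement on the final product

# Terminal cost: deviation from the quality target.
J = {s: abs(s - target) for s in states}
policy = []

# Offline phase: sweep backwards over the two processes in the chain.
for _ in range(2):
    Jn, pol = {}, {}
    for s in states:
        # control effort (0.1 per unit) plus cost-to-go of the next state
        cost, u = min(
            (0.1 * abs(u) + J[min(max(transform(s, u), 0), 10)], u)
            for u in controls
        )
        Jn[s], pol[s] = cost, u
    J = Jn
    policy.insert(0, pol)      # earlier stages go to the front

# Online phase: look up the optimal control for each observed state.
s = 3
for pol in policy:
    s = min(max(transform(s, pol[s]), 0, ), 10)
print("final state:", s)       # reaches the quality target from state 3
```

Because the backward sweep stores an optimal control for every possible intermediate state, the runtime phase handles whatever end state the predecessor process actually produced, which is the interaction effect per-process controllers ignore.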
Quantitative analysis of geomorphic processes using satellite image data at different scales
NASA Technical Reports Server (NTRS)
Williams, R. S., Jr.
1985-01-01
When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristic of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered or even suspected in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, is less than 200 m in the x-y dimension, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface is also a consideration in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image processing input are produced by this imaging system with the same parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment experimental data sets can be used to validate each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
NASA Astrophysics Data System (ADS)
Gatti, J. R.; Bhattacharjee, P. P.
2014-12-01
Evolution of microstructure and texture during severe deformation and annealing was studied in Al-2.5%Mg alloy processed by two different routes, namely monotonic Accumulative Roll Bonding (ARB) and a hybrid route combining ARB and conventional rolling (CR). For this purpose, Al-2.5%Mg sheets were subjected to 5 cycles of monotonic ARB processing (equivalent strain (ɛeq) = 4.0), while in the hybrid route (ARB + CR) 3-cycle ARB-processed sheets were further deformed by conventional rolling to 75% reduction in thickness (ɛeq = 4.0). Although formation of an ultrafine structure was observed in both processing routes, the monotonic ARB-processed material showed a finer microstructure but weaker texture compared with the ARB + CR-processed material. After complete recrystallization, the ARB + CR-processed material showed a weak cube texture ({001}<100>), but the cube component was almost negligible in the monotonic ARB-processed material. However, the ND-rotated cube components were stronger in the monotonic ARB-processed material. The observed differences in microstructure and texture evolution during deformation and annealing could be explained by the characteristic differences of the two processing routes.
Process Materialization Using Templates and Rules to Design Flexible Process Models
NASA Astrophysics Data System (ADS)
Kumar, Akhil; Yao, Wen
The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
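The materialization idea, applying business rules to case data to instantiate a generic process template, can be sketched in a few lines. This toy version uses Python predicates in place of the Prolog rules described in the paper, and all step and rule names are invented:

```python
# Generic process template; steps ending in "?" are placeholders that
# the rules resolve against the case data. All names are invented.
template = ["receive_order", "check_credit?", "ship", "invoice"]

rules = [
    # (condition on case data, placeholder step, expansion)
    (lambda case: case["amount"] > 1000, "check_credit?", ["check_credit"]),
    (lambda case: case["amount"] <= 1000, "check_credit?", []),
]

def materialize(template, rules, case):
    """Instantiate the template by resolving each placeholder step
    with the first rule whose condition matches the case data."""
    instance = []
    for step in template:
        if step.endswith("?"):
            for cond, name, expansion in rules:
                if name == step and cond(case):
                    instance.extend(expansion)
                    break
        else:
            instance.append(step)
    return instance

print(materialize(template, rules, {"amount": 2500}))
print(materialize(template, rules, {"amount": 100}))
```

The materialized instance is a plain step list, so it could be handed to an existing workflow engine for execution, mirroring the architecture the abstract describes.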
HMI conventions for process control graphics.
Pikaar, Ruud N
2012-01-01
Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, usually a 1:1 conversion of existing graphics is proposed. This paper suggests another approach, efficiently leading to a reduced number of new powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on have been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.
A novel processed food classification system applied to Australian food composition databases.
O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A
2017-08-01
The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as assess the variation in the levels of processing within food groups. A processed foods classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportion of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and Ultra Processed (ULP) foods by AUSNUT food group, and the overall proportions of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13, were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P and ULP were 27%, 3%, 26% and 44% for AUSNUT 2007, and 38%, 2%, 24% and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.
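The tallying step, computing each processing category's share of a food database, amounts to a simple frequency count once every item carries a category label. The food items and category assignments below are invented examples, not AUSNUT data:

```python
from collections import Counter

# Illustrative only: items and their MP/PCI/P/ULP labels are invented.
foods = {
    "apple, raw": "MP",
    "olive oil": "PCI",
    "cheddar cheese": "P",
    "cola soft drink": "ULP",
    "frozen pizza": "ULP",
}

counts = Counter(foods.values())            # items per category
total = sum(counts.values())
for cat in ("MP", "PCI", "P", "ULP"):
    print(f"{cat}: {100 * counts[cat] / total:.0f}%")
```

Grouping the same tally by AUSNUT food group (e.g. nesting counters per group) would reproduce the within-group variation analysis the study reports.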
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
2012-12-01
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.
Collins, Heather R.; Zhu, Xun; Bhatt, Ramesh S.; Clark, Jonathan D.; Joseph, Jane E.
2015-01-01
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. The present study parametrically varied demands on featural, first-order configural or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing) or reflected generalized perceptual differentiation (i.e. differentiation that crosses category and processing type boundaries). Regions of interest were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process-specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex, and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain-specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories. PMID:22849402
Byrn, Stephen; Futran, Maricio; Thomas, Hayden; Jayjock, Eric; Maron, Nicola; Meyer, Robert F; Myerson, Allan S; Thien, Michael P; Trout, Bernhardt L
2015-03-01
We describe the key issues and possibilities for continuous final dosage formation, otherwise known as downstream processing or drug product manufacturing. A distinction is made between heterogeneous processing and homogeneous processing, the latter of which is expected to add more value to continuous manufacturing. We also give the key motivations for moving to continuous manufacturing, some of the exciting new technologies, and the barriers to implementation of continuous manufacturing. Continuous processing of heterogeneous blends is the natural first step in converting existing batch processes to continuous. In heterogeneous processing there are discrete particles that can segregate, whereas in homogeneous processing components are blended and homogenized such that they do not segregate. Heterogeneous processing can incorporate technologies that are closer to existing technologies, whereas homogeneous processing necessitates the development and incorporation of new technologies. Homogeneous processing has the greatest potential for reaping the full rewards of continuous manufacturing, but it takes long-term vision and a more significant change in process development than heterogeneous processing. Heterogeneous processing has the detriment that, as the technologies are adopted rather than developed, there is a strong tendency to incorporate correction steps, which we call below "The Rube Goldberg Problem." Thus, although heterogeneous processing will likely play a major role in the near-term transformation from batch to continuous processing, it is expected that homogeneous processing is the next step that will follow. Specific action items for industry leaders are provided. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
1990-09-01
Contents include Logistics Systems, GOCESS Operation, Work Order Processing, and Job Order Processing. …orders and job orders to the Material Control Section will be discussed separately. Work Order Processing: Figure 2 illustrates typical WO processing …logistics function. The JO processing is similar. Job Order Processing: Figure 3 illustrates typical JO processing in a GOCESS operation. As with WOs, this…
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared with supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
Data processing system for the Sneg-2MP experiment
NASA Technical Reports Server (NTRS)
Gavrilova, Y. A.
1980-01-01
The data processing system for scientific experiments on stations of the "Prognoz" type provides for the processing sequence to be broken down into a number of consecutive stages: preliminary processing, primary processing, and secondary processing. The tasks of each data processing stage are examined for an experiment designed to study gamma flashes of galactic origin and solar flares lasting from seconds to several minutes in the 20 keV to 1000 keV energy range.
General RMP Guidance - Appendix D: OSHA Guidance on PSM
OSHA's Process Safety Management (PSM) Guidance on providing complete and accurate written information concerning process chemicals, process technology, and process equipment; including process hazard analysis and material safety data sheets.
Elaboration Likelihood and the Counseling Process: The Role of Affect.
ERIC Educational Resources Information Center
Stoltenberg, Cal D.; And Others
The role of affect in counseling has been examined from several orientations. The depth of processing model views the efficiency of information processing as a function of the extent to which the information is processed. The notion of cognitive processing capacity states that processing information at deeper levels engages more of one's limited…
5 CFR 582.202 - Service of legal process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...
5 CFR 582.202 - Service of legal process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...
ERIC Educational Resources Information Center
Popyk, Marilyn K.
1986-01-01
Discusses the new automated office and its six major technologies (data processing, word processing, graphics, image, voice, and networking), the information processing cycle (input, processing, output, distribution/communication, and storage and retrieval), ergonomics, and ways to expand office education classes (versus class instruction). (CT)
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
40 CFR 65.62 - Process vent group determination.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., or Group 2B) for each process vent. Group 1 process vents require control, and Group 2A and 2B... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Process vent group determination. 65... (CONTINUED) CONSOLIDATED FEDERAL AIR RULE Process Vents § 65.62 Process vent group determination. (a) Group...
Code of Federal Regulations, 2010 CFR
2010-07-01
.../or Table 9 compounds are similar and often identical. (3) Biological treatment processes. Biological treatment processes in compliance with this section may be either open or closed biological treatment processes as defined in § 63.111. An open biological treatment process in compliance with this section need...
5 CFR 581.202 - Service of process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Service of process. 581.202 Section 581... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Service of Process § 581.202 Service of process. (a) A... facilitate proper service of process on its designated agent(s). If legal process is not directed to any...
30 CFR 828.11 - In situ processing: Performance standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 3 2011-07-01 2011-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...
30 CFR 828.11 - In situ processing: Performance standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 3 2012-07-01 2012-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...
30 CFR 828.11 - In situ processing: Performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 3 2013-07-01 2013-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...
30 CFR 828.11 - In situ processing: Performance standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 3 2010-07-01 2010-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...
30 CFR 828.11 - In situ processing: Performance standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 3 2014-07-01 2014-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...
Processing Depth, Elaboration of Encoding, Memory Stores, and Expended Processing Capacity.
ERIC Educational Resources Information Center
Eysenck, Michael W.; Eysenck, M. Christine
1979-01-01
The effects of several factors on expended processing capacity were measured. Expended processing capacity was greater when information was retrieved from secondary memory rather than from primary memory, when processing was deep and semantic rather than shallow and physical, and when processing was more elaborate. (Author/GDC)