Report on GMI Special Study #15: Radio Frequency Interference
NASA Technical Reports Server (NTRS)
Draper, David W.
2015-01-01
This report contains the results of GMI special study #15. An analysis is conducted to identify sources of radio frequency interference (RFI) affecting the Global Precipitation Measurement (GPM) Microwave Imager (GMI). The RFI impacts the 10 GHz and 18 GHz channels at both polarizations. Sources of RFI are identified for the following conditions: over water (including major inland water bodies) in the earth view, over land in the earth view, and in the cold sky view. A best effort is made to identify RFI sources in coastal regions, with noted degradation of flagging performance due to the highly variable earth scene there. A database of such sources is developed, including latitude, longitude, country, and city for earth emitters, and geosynchronous orbital position for space emitters. The recommended approach for identifying the sources and locations of RFI in the GMI channels is described in this paper. An algorithm to flag RFI-contaminated pixels, suitable for incorporation into the GMI Level 1Base/1B algorithms, is defined; Matlab code that performs the flagging is delivered with this distribution.
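The delivered Matlab flagging code is not reproduced in this record, but the core idea of a channel-difference RFI check can be sketched in Python. The reference-channel choice and the 5 K threshold below are hypothetical illustration values, not the study's actual criteria:

```python
import numpy as np

def flag_rfi(tb_target, tb_reference, threshold_k=5.0):
    """Flag pixels where the target-channel brightness temperature (K)
    exceeds a cleaner reference channel by more than threshold_k.
    Returns a boolean array, True where RFI is suspected."""
    diff = np.asarray(tb_target, float) - np.asarray(tb_reference, float)
    return diff > threshold_k

# Hypothetical ocean scene: RFI anomalously warms the third 10 GHz pixel
tb_10v = np.array([172.0, 171.5, 198.3, 172.2])
tb_18v = np.array([180.0, 179.8, 180.1, 180.3])
flags = flag_rfi(tb_10v, tb_18v)  # flags only the anomalously warm pixel
```

In practice the study's algorithm also distinguishes the land, water, and cold-sky views and consults the emitter database; a fixed channel-difference threshold is only a starting point.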
Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2009-01-01
This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).
NASA Astrophysics Data System (ADS)
Wu, Hao; Wang, Xianhua; Ye, Hanhan; Jiang, Yun; Duan, Fenghua
2018-01-01
We developed an algorithm (named GMI_XCO2) to retrieve the global column-averaged dry air mole fraction of atmospheric carbon dioxide (XCO2) for the Greenhouse-gases Monitoring Instrument (GMI) and Directional Polarized Camera (DPC) on the GF-5 satellite. This algorithm is designed to work in cloudless atmospheric conditions with aerosol optical thickness (AOT) < 0.3. To quantify the uncertainty of XCO2 retrieved from GMI short-wave infrared (SWIR) data in the presence of aerosols and cirrus clouds, we analyzed the errors caused by six aerosol types and by cirrus clouds. The results indicated that in the AOT range of 0.05 to 0.3 (550 nm), aerosol uncertainties could lead to errors of -0.27% to 0.59%, -0.32% to 1.43%, -0.10% to 0.49%, -0.12% to 1.17%, -0.35% to 0.49%, and -0.02% to -0.24% for rural, dust, clean continental, maritime, urban, and soot aerosols, respectively. The retrieval results presented a large error due to cirrus clouds. In the cirrus optical thickness range of 0.05 to 0.8 (500 nm), the largest underestimation is 26.25% when the surface albedo is 0.05, and the largest overestimation is 8.1% when the surface albedo is 0.65. The retrieval results of GMI simulation data demonstrated that the accuracy of our algorithm is within 4 ppm (˜1%) using the simultaneous measurement of aerosols and clouds from DPC. Moreover, our algorithm is faster than full-physics (FP) methods. We verified our algorithm with Greenhouse-gases Observing Satellite (GOSAT) data in the Beijing area during 2016. The retrieval errors of most observations are within 4 ppm except for summer. Compared with the results of GOSAT, the correlation coefficient is 0.55 for the full-year data, increasing to 0.62 after excluding the summer data.
GPM Microwave Imager Engineering Model Results
NASA Technical Reports Server (NTRS)
Newell, David; Krimchansky, Sergey
2010-01-01
The Global Precipitation Measurement (GPM) Microwave Imager (GMI) Instrument is being developed by Ball Aerospace and Technology Corporation (BATC) for the GPM program at NASA Goddard. The Global Precipitation Measurement (GPM) mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and more frequent precipitation measurements. The GPM Microwave Imager (GMI) will be used to make calibrated, radiometric measurements from space at multiple microwave frequencies and polarizations. GMI will be placed on the GPM Core Spacecraft together with the Dual-frequency Precipitation Radar (DPR). The DPR is a two-frequency precipitation measurement radar, which will operate in the Ku-band and Ka-band of the microwave spectrum. The Core Spacecraft will make radiometric and radar measurements of clouds and precipitation and will be the central element of GPM's space segment. The data products from GPM will provide information concerning global precipitation on a frequent, near-global basis to meteorologists and scientists making weather forecasts and performing research on the global energy and water cycle, precipitation, hydrology, and related disciplines. In addition, radiometric measurements from GMI and radar measurements from the DPR will be used together to develop a retrieval transfer standard for the purpose of calibrating precipitation retrieval algorithms. This calibration standard will establish a reference against which other retrieval algorithms using only microwave radiometers (and without the benefit of the DPR) on other satellites in the GPM constellation will be compared.
GPM Microwave Imager Design, Predicted Performance and Status
NASA Technical Reports Server (NTRS)
Krimchansky, Sergey; Newell, David
2010-01-01
The Global Precipitation Measurement (GPM) Microwave Imager (GMI) Instrument is being developed by Ball Aerospace and Technology Corporation (BATC) for the GPM program at NASA Goddard. The Global Precipitation Measurement (GPM) mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and more frequent precipitation measurements. The GPM Microwave Imager (GMI) will be used to make calibrated, radiometric measurements from space at multiple microwave frequencies and polarizations. GMI will be placed on the GPM Core Spacecraft together with the Dual-frequency Precipitation Radar (DPR). The DPR is a two-frequency precipitation measurement radar, which will operate in the Ku-band and Ka-band of the microwave spectrum. The Core Spacecraft will make radiometric and radar measurements of clouds and precipitation and will be the central element of GPM's space segment. The data products from GPM will provide information concerning global precipitation on a frequent, near-global basis to meteorologists and scientists making weather forecasts and performing research on the global energy and water cycle, precipitation, hydrology, and related disciplines. In addition, radiometric measurements from GMI and radar measurements from the DPR will be used together to develop a retrieval transfer standard for the purpose of calibrating precipitation retrieval algorithms. This calibration standard will establish a reference against which other retrieval algorithms using only microwave radiometers (and without the benefit of the DPR) on other satellites in the GPM constellation will be compared.
Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.
Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong
2015-07-08
Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. So registration (alignment) of whole-body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d(MH)) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of d(MH) were 6.65 (± 1.90) and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images.
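The modified Hausdorff distance used as the evaluation metric is straightforward to compute. A minimal NumPy sketch (the paper's exact voxel handling may differ):

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance (Dubuisson & Jain, 1994) between two
    point sets a and b, each of shape (n_points, n_dims): the larger of
    the two directed mean nearest-neighbour distances."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Two contours offset by one voxel in y give d_MH = 1.0
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])
```

In a registration study, `a` and `b` would be corresponding surface or landmark voxels extracted from the fixed and registered moving images.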
NASA Astrophysics Data System (ADS)
Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia
2017-04-01
Two new precipitation retrieval algorithms for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI) are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations and used operationally within the EUMETSAT Satellite Application Facility in Support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an extended, entirely empirical database derived from coincident radar and radiometer observations from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar (DPR) and GMI). The other new aspects are: 1) a new rain-no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the problem of non-uniqueness of the retrieval solution; and 4) the development and implementation of specific modules for computational time minimization. The CDRD algorithms for AMSR2 and GMI are able to handle the extremely large observational database available from GPM-CO and provide rainfall estimates with minimum latency, making them suitable for near-real-time hydrological and operational applications. For CDRD for AMSR2, a verification study over Italy using ground-based radar data, and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations, has been carried out. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h) and good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products and with ground-based radar data show that CDRD for AMSR2 depicts the areas of high precipitation very well over all surface types. Similarly, preliminary results of the application of CDRD for GMI are shown and discussed, highlighting the advantage of the availability of high-frequency channels (> 90 GHz) for precipitation retrieval over land and coastal areas.
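The Bayesian database retrieval at the heart of the CDRD approach can be illustrated with a minimal sketch: each database profile is weighted by a Gaussian likelihood of the observed brightness temperatures, and the retrieved rain rate is the posterior-weighted mean. The fixed channel-error value and the absence of the EOF/CCA preprocessing and ancillary-variable categorization described above make this a simplified illustration only:

```python
import numpy as np

def bayesian_rain_rate(tb_obs, db_tb, db_rr, sigma=2.0):
    """Posterior-mean rain rate from a Bayesian database retrieval.

    tb_obs : (n_chan,) observed brightness temperatures (K)
    db_tb  : (n_db, n_chan) database brightness temperatures (K)
    db_rr  : (n_db,) database surface rain rates (mm/h)
    sigma  : assumed per-channel observation error (K), illustrative
    """
    resid = np.asarray(db_tb, float) - np.asarray(tb_obs, float)
    # Gaussian log-likelihood up to a constant; subtract the max
    # before exponentiating for numerical stability
    logw = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    w = np.exp(logw - logw.max())
    return float(np.sum(w * np.asarray(db_rr, float)) / np.sum(w))
```

A real CDRD database would hold many thousands of DPR/GMI-derived entries; with only distant candidates, the weighted mean smoothly interpolates between their rain rates.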
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph; Meneghini, Robert; Grecu, Mircea; Olson, William S.
2016-01-01
The Global Precipitation Measurement satellite's Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR) are designed to provide the most accurate instantaneous precipitation estimates currently available from space. The GPM Combined Algorithm (CORRA) plays a key role in this process by retrieving precipitation profiles that are consistent with GMI and DPR measurements; therefore, it is desirable that the forward models in CORRA use the same geophysical input parameters. This study explores the feasibility of using internally consistent emissivity and surface backscatter cross-section (σ0) models for water surfaces in CORRA. An empirical model for DPR Ku and Ka σ0 as a function of 10-m wind speed and incidence angle is derived from GMI-only wind retrievals under clear-sky conditions. This allows for the σ0 measurements, which are also influenced by path-integrated attenuation (PIA) from precipitation, to be used as input to CORRA and for wind speed to be retrieved as output. Comparisons to buoy data give a wind rmse of 3.7 m/s for Ku+GMI and 3.2 m/s for Ku+Ka+GMI retrievals under precipitation (compared to 1.3 m/s for clear-sky GMI-only), and there is a reduction in bias from GANAL background data (-10%) to the Ku+GMI (-3%) and Ku+Ka+GMI (-5%) retrievals. Ku+GMI retrievals of precipitation increase slightly in light (less than 1 mm/h) and decrease in moderate to heavy precipitation (greater than 1 mm/h). The Ku+Ka+GMI retrievals, being additionally constrained by the Ka reflectivity, increase only slightly in moderate and heavy precipitation at low wind speeds (less than 5 m/s) relative to retrievals using the surface reference estimate of PIA as input.
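An empirical σ0 model of the kind described, a smooth function of 10-m wind speed and incidence angle fitted to clear-sky matchups, can be sketched as a polynomial least-squares fit. The functional form and polynomial degree here are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def fit_sigma0_model(wind, theta, sigma0, deg=2):
    """Fit sigma0 (dB) as a bivariate polynomial in 10-m wind speed
    (m/s) and incidence angle (deg) by linear least squares.
    Returns the coefficient vector and a predict(wind, theta) closure."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]

    def design(w, t):
        w = np.asarray(w, float)
        t = np.asarray(t, float)
        return np.column_stack([w**i * t**j for i, j in terms])

    coef, *_ = np.linalg.lstsq(design(wind, theta),
                               np.asarray(sigma0, float), rcond=None)
    return coef, lambda w, t: design(w, t) @ coef
```

In the paper's setting, the fitted model lets measured σ0 (after accounting for PIA) constrain wind speed inside CORRA rather than being discarded.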
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer, to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first used to correct position errors between PET and CT images, aligning the two images as a whole. The demons algorithm, based on the optical flow field, offers fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between two images, which makes it suitable for multimodal image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid was used before deformable registration. Quantitative and qualitative analysis of cases with esophageal cancer showed that the registration scheme proposed in this paper improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
High sensitivity pressure transducer based on the phase characteristics of GMI magnetic sensors
NASA Astrophysics Data System (ADS)
Benavides, L. S.; Costa Silva, E.; Costa Monteiro, E.; Hall Barbosa, C. R.
2018-03-01
This paper presents a new configuration for a GMI pressure transducer based on reading the phase characteristics of the GMI sensor, intended for biomedical applications. The development process of this new class of magnetic field transducers is discussed, beginning with the definition of the ideal conditioning of the GMI sensor elements (dc level and frequency of the excitation current, and sample length) and continuing with computational simulations of the full electronic circuit, performed using experimental data obtained from measured GMI curves. These simulations have shown that the improvement in the sensitivity of GMI magnetometers is larger when phase-based transducers are used instead of magnitude-based transducers. Parameters of interest of the developed prototype, such as sensitivity, linearity, and frequency response, are thoroughly analyzed. Also, the spectral noise density of the developed pressure transducer is evaluated and its resolution in the passband is estimated. A low-cost GMI pressure transducer was developed, presenting high resolution, high sensitivity, and a frequency bandwidth compatible with the desired biomedical applications.
A New 1DVAR Retrieval for AMSR2 and GMI: Validation and Sensitivities
NASA Astrophysics Data System (ADS)
Duncan, D.; Kummerow, C. D.
2015-12-01
A new non-raining retrieval has been developed for microwave imagers and applied to the GMI and AMSR2 sensors. With the Community Radiative Transfer Model (CRTM) as the forward model for the physical retrieval, a 1-dimensional variational method finds the atmospheric state which minimizes the difference between observed and simulated brightness temperatures. A key innovation of the algorithm development is a method to calculate the sensor error covariance matrix that is specific to the forward model employed and includes off-diagonal elements, allowing the algorithm to handle various forward models and sensors with little cross-talk. The water vapor profile is resolved by way of empirical orthogonal functions (EOFs) and then summed to get total precipitable water (TPW). Validation of retrieved 10m wind speed, TPW, and sea surface temperature (SST) is performed via comparison with buoys and radiosondes as well as global models and other remotely sensed products. In addition to the validation, sensitivity experiments investigate the impact of ancillary data on the under-constrained retrieval, a concern for climate data records that strive to be independent of model biases. The introduction of model analysis data is found to aid the algorithm most at high frequency channels and affect TPW retrievals, whereas wind and cloud water retrievals show little effect from ingesting further ancillary data.
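The 1-dimensional variational step described above, finding the atmospheric state that minimizes the mismatch between observed and simulated brightness temperatures subject to a prior, can be sketched as a Gauss-Newton iteration. The forward model and Jacobian below are placeholders standing in for CRTM:

```python
import numpy as np

def onedvar_retrieve(y, xa, Sa, Se, forward, jacobian, n_iter=5):
    """Gauss-Newton minimization of the 1DVAR cost function
      J(x) = (x-xa)^T Sa^-1 (x-xa) + (y-F(x))^T Se^-1 (y-F(x)),
    where xa is the prior state with covariance Sa, y the observed
    brightness temperatures, and Se the observation-error covariance
    (off-diagonal terms allowed, as in the paper)."""
    x = np.array(xa, float)
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    for _ in range(n_iter):
        K = jacobian(x)                    # dF/dx at the current state
        A = Sa_inv + K.T @ Se_inv @ K      # Gauss-Newton Hessian
        g = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - xa)
        x = x + np.linalg.solve(A, g)
    return x
```

With a linear toy forward model the iteration converges in one step; with CRTM, the Jacobian would be recomputed at each iterate and convergence tested against a cost threshold.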
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes the samples' sensitivity is usually empirical and very time-consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.
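The genetic-algorithm half of such a neuro-genetic system can be sketched as follows. The fitness function here is an arbitrary stand-in for the trained MLP surrogate, and all GA settings (population size, generations, mutation scale) are illustrative:

```python
import numpy as np

def ga_maximize(fitness, bounds, pop=40, gens=60, mut=0.05, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation, and elitism. 'fitness'
    maps a parameter vector (e.g. sample length, DC level, excitation
    frequency) to a score to maximize; 'bounds' is (low, high) per
    parameter. Returns the best parameter vector found."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    best_x, best_f = X[0].copy(), -np.inf
    for _ in range(gens):
        f = np.array([fitness(x) for x in X])
        k = int(np.argmax(f))
        if f[k] > best_f:
            best_x, best_f = X[k].copy(), float(f[k])
        # tournament selection: each parent is the fitter of a random pair
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] >= f[j])[:, None], X[i], X[j])
        # arithmetic crossover between consecutive parents
        alpha = rng.random((pop, 1))
        children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped to the search bounds
        children += rng.normal(0.0, mut * (hi - lo), children.shape)
        X = np.clip(children, lo, hi)
        X[0] = best_x  # elitism: keep the best individual found so far
    return best_x
```

In the paper's system, each fitness evaluation is a cheap MLP forward pass, which is what makes the evolutionary search practical compared with measuring real samples.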
GPM Mission Gridded Text Products Providing Surface Precipitation Retrievals
NASA Astrophysics Data System (ADS)
Stocker, Erich Franz; Kelley, Owen; Huffman, George; Kummerow, Christian
2015-04-01
In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR), which operates at a Ku frequency (similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar) and a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars (including a dual-frequency product), and from a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very credible retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear intimidating, and they certainly contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments, GMI, DPR, and combined GMI/DPR, and (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated for a 0.25 degree x 0.25 degree hourly grid, which is packaged into daily ASCII files that can be downloaded from the PPS FTP site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, taking as input the PPS-generated inter-calibrated 1C product for each radiometer.
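Reading such a daily gzip-compressed ASCII file is simple. Note that the column layout below is a hypothetical illustration, since the abstract does not specify the exact line format of the PPS product:

```python
import gzip

def parse_grid_line(line):
    """Parse one line of a hypothetical gridded-text layout:
    'lat lon surface_rain_mm_h fraction_liquid fraction_convective'.
    The real PPS file layout may differ; treat this as illustrative."""
    lat, lon, rain, f_liq, f_conv = (float(v) for v in line.split())
    return {"lat": lat, "lon": lon, "rain_mm_h": rain,
            "frac_liquid": f_liq, "frac_convective": f_conv}

def read_gridded_file(path):
    """Stream a gzip-compressed daily file, one record per grid box."""
    with gzip.open(path, "rt") as fh:
        return [parse_grid_line(line) for line in fh if line.strip()]
```

Since only grid boxes with one or more observations get a line, missing boxes are simply absent rather than fill-valued, which keeps the daily files small.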
Design, Development and Testing of the GMI Reflector Deployment Assembly
NASA Technical Reports Server (NTRS)
Guy, Larry; Foster, Mike; McEachen, Mike; Pellicciotti, Joseph; Kubitschek, Michael
2011-01-01
The GMI Reflector Deployment Assembly (RDA) is an articulating structure that accurately positions and supports the main reflector of the Global Microwave Imager (GMI) throughout the 3-year mission life. The GMI instrument will fly on the core Global Precipitation Measurement (GPM) spacecraft and will be used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydrometeorological predictions through more accurate and frequent precipitation measurements. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard to design, build, and test the GMI instrument. The RDA was designed and manufactured by ATK Aerospace Systems Group to meet a number of challenging packaging and performance requirements. ATK developed a flight-like engineering development unit (EDU) and two flight mechanisms that have been delivered to BATC. This paper focuses on the driving GMI instrument system requirements and on the RDA design, development, and test activities performed to demonstrate that those requirements have been met.
Recent results of the Global Precipitation Measurement (GPM) mission in Japan
NASA Astrophysics Data System (ADS)
Kubota, Takuji; Oki, Riko; Furukawa, Kinji; Kaneko, Yuki; Yamaji, Moeka; Iguchi, Toshio; Takayabu, Yukari
2017-04-01
The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The GPM Core Observatory, launched in February 2014, carries the Dual-frequency Precipitation Radar (DPR) developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT). JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. The Japan Meteorological Agency (JMA) started DPR assimilation in its meso-scale Numerical Weather Prediction (NWP) system on March 24, 2016. This was regarded as the world's first "operational" assimilation of spaceborne radar data in the NWP system of a meteorological agency. JAXA also develops the Global Satellite Mapping of Precipitation (GSMaP) as a national product to distribute an hourly, 0.1-degree horizontal resolution rainfall map. The GSMaP near-real-time version (GSMaP_NRT) product has been available 4 hours after observation through the "JAXA Global Rainfall Watch" web site (http://sharaku.eorc.jaxa.jp/GSMaP) since 2008. The GSMaP_NRT product gives higher priority to data latency than accuracy and has been used by various users for various purposes, such as rainfall monitoring, flood alert and warning, drought monitoring, crop yield forecast, and agricultural insurance. GSMaP users have, however, requested shorter data latency. To reduce data latency, JAXA has developed the GSMaP real-time version (GSMaP_NOW) product for the observation area of the geostationary satellite Himawari-8 operated by JMA. The GSMaP_NOW product was released to the public on November 2, 2015, through the "JAXA Realtime Rainfall Watch" web site (http://sharaku.eorc.jaxa.jp/GSMaP_NOW/). All GPM standard products and the GPM-GSMaP product have been released to the public since September 2014 as Version 03. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp). In March 2016, the DPR, GMI, and DPR-GMI combined algorithms were updated and the first GPM latent heating product (within the TRMM coverage) was released; the GPM Version 04 standard products have therefore been provided since March 2016. Furthermore, the GPM-GSMaP algorithms were updated and the GPM-GSMaP Version 04 products have been provided since January 2017.
Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images
Yang, Juan; Zhang, You; Yin, Yong
2015-01-01
Fusion of anatomic information in computed tomography (CT) and functional information in F18‐FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plan and staging of cancer. Although current PET and CT images can be acquired from combined F18‐FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential positional errors in global and local caused by respiratory motion or organ peristalsis. So registration (alignment) of whole‐body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)‐based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point‐wise mutual information (PMI) diffeomorphic‐based demons algorithm whose external force was modified by replacing the image intensity difference in diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB‐approved study. Whole‐body PET and CT images were acquired from a combined F18‐FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Of all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) after the GMI‐based demons and the PMI diffeomorphic‐based demons registration algorithms respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined F18‐FDG PET/CT scanner was used for image acquisition. 
The PMI diffeomorphic‐based demons algorithm was more accurate than the GMI‐based demons algorithm in registering PET/CT esophageal images. PACS numbers: 87.57.nj, 87.57.Q‐, 87.57.uk. PMID:26218993
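For reference, the modified Hausdorff distance used above as the accuracy metric is the larger of the two directed mean nearest-neighbor distances between point sets. A minimal sketch (the NumPy implementation and point sets below are illustrative, not the authors' code):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between point sets A (n, d) and B (m, d),
    e.g. arrays of voxel coordinates extracted from the two images."""
    # All pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    d_ab = d.min(axis=1).mean()  # mean distance from each point of A to its nearest point of B
    d_ba = d.min(axis=0).mean()  # and from each point of B to its nearest point of A
    return max(d_ab, d_ba)
```

Identical point sets give a distance of 0; larger values indicate poorer alignment, which is why lower dMH means better registration in the comparison above.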
Aerosol Modeling for the Global Model Initiative
NASA Technical Reports Server (NTRS)
Weisenstein, Debra K.; Ko, Malcolm K. W.
2001-01-01
The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.
Orographic Impacts on Liquid and Ice-Phase Precipitation Processes during OLYMPEX
NASA Astrophysics Data System (ADS)
Petersen, W. A.; Hunzinger, A.; Gatlin, P. N.; Wolff, D. B.
2017-12-01
The Global Precipitation Measurement (GPM) mission Olympic Mountains Experiment (OLYMPEX) focused on physical validation of GPM products in cold-season, mid-latitude frontal precipitation occurring over the Olympic Mountains of Washington State. Herein, we use data collected by the NASA S-band polarimetric radar (NPOL) to quantify and examine ice (IWP), liquid (LWP), and total water paths (TWP) relative to surface precipitation rates and column hydrometeor types for several cases occurring in different synoptic and/or Froude number regimes. These quantities are compared to coincident precipitation properties measured or estimated by GPM's Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR). Because ice scattering is the dominant radiometric signature used by the GMI for estimating precipitation over land, and because the DPR is greatly affected by ground clutter in the lowest 1-2 km above ground, measurement limitations combined with orographic forcing may impact the degree to which DPR and/or GMI algorithms are able to adequately observe and estimate precipitation over and around orography. Preliminary case results suggest: 1) as expected, the Olympic Mountains force robust enhancements in liquid- and ice-phase microphysical processes on windward slopes, especially in atmospheric river events; 2) localized orographic enhancements alter the balance of liquid and frozen precipitation contributions (IWP/TWP, LWP/TWP) to near-surface rain rate, and for the two cases examined thus far this balance appears sensitive to the flow direction relative to the terrain orientation; and 3) GPM measurement limitations related to the depth of the surface clutter impact for the DPR, and to the degree to which ice processes are coupled to the orographic rainfall process (DPR and GMI), especially along windward mountain slopes, may constrain the ability of retrieval algorithms to properly estimate near-surface precipitation quantities over complex terrain. 
Ongoing analysis of the OLYMPEX dataset will better isolate controls on the orographic precipitation process, better define uncertainties in GPM measurements, and contribute to physically based approaches for mitigating estimation errors due to measurement and/or algorithm limitations over complex terrain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, E. Costa, E-mail: edusilva@ele.puc-rio.br; Gusmão, L. A. P.; Barbosa, C. R. Hall
2014-08-15
Recently, our research group at PUC-Rio discovered that magnetic transducers based on the impedance phase characteristics of GMI sensors have the potential to increase sensitivity one hundredfold compared to magnitude-based GMI transducers. Such GMI sensors can be employed in the measurement of ultra-weak magnetic fields whose intensities are even lower than the environmental magnetic noise. A traditional solution for cancelling electromagnetic noise and interference makes use of gradiometric configurations, but their performance is strongly tied to the homogeneity of the sensing elements. This paper presents a new method that uses electronic circuits to modify the equivalent impedance of the GMI samples, aiming at homogenizing their phase characteristics and, consequently, improving the performance of gradiometric configurations based on GMI samples. A performance comparison between this new method and a previously developed homogenization method is also shown.
Global Precipitation Measurement (GPM) Core Observatory Falling Snow Estimates
NASA Astrophysics Data System (ADS)
Skofronick Jackson, G.; Kulie, M.; Milani, L.; Munchak, S. J.; Wood, N.; Levizzani, V.
2017-12-01
Retrievals of falling snow from space represent an important data set for understanding and linking the Earth's atmospheric, hydrological, and energy cycles. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. This work focuses on comparing the first stable falling snow retrieval products (released May 2017) for the Global Precipitation Measurement (GPM) Core Observatory (GPM-CO), which was launched February 2014, and carries both an active dual frequency (Ku- and Ka-band) precipitation radar (DPR) and a passive microwave radiometer (GPM Microwave Imager-GMI). Five separate GPM-CO falling snow retrieval algorithm products are analyzed including those from DPR Matched (Ka+Ku) Scan, DPR Normal Scan (Ku), DPR High Sensitivity Scan (Ka), combined DPR+GMI, and GMI. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new, the different on-orbit instruments don't capture all snow rates equally, and retrieval algorithms differ. Thus a detailed comparison among the GPM-CO products elucidates advantages and disadvantages of the retrievals. GPM and CloudSat global snowfall evaluation exercises are natural investigative pathways to explore, but caution must be undertaken when analyzing these datasets for comparative purposes. This work includes outlining the challenges associated with comparing GPM-CO to CloudSat satellite snow estimates due to the different sampling, algorithms, and instrument capabilities. We will highlight some factors and assumptions that can be altered or statistically normalized and applied in an effort to make comparisons between GPM and CloudSat global satellite falling snow products as equitable as possible.
GMI Spin Mechanism Assembly Design, Development, and Test Results
NASA Technical Reports Server (NTRS)
Woolaway, Scott; Kubitschek, Michael; Berdanier, Barry; Newell, David; Dayton, Chris; Pellicciotti, Joseph W.
2012-01-01
The GMI Spin Mechanism Assembly (SMA) is a precision bearing and power transfer drive assembly mechanism that supports and spins the Global Microwave Imager (GMI) instrument at a constant rate of 32 rpm continuously for the 3-year-plus mission life. The GMI instrument will fly on the core Global Precipitation Measurement (GPM) spacecraft and will be used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and frequent precipitation measurements [1]. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard Space Flight Center (GSFC) to design, build, and test the GMI instrument. The SMA design has to meet a challenging set of requirements and is based on BATC space-mechanisms heritage and on lessons-learned design changes made to the WindSat BAPTA mechanism, which is currently operating on orbit and has recently surpassed 8 years of flight operation.
NASA Technical Reports Server (NTRS)
Kubitschek, Michael; Woolaway, Scott; Guy, Larry; Dayton, Chris; Berdanier, Barry; Newell, David; Pellicciotti, Joseph W.
2011-01-01
The GMI Spin Mechanism Assembly (SMA) is a precision bearing and power transfer drive assembly mechanism that supports and spins the Global Microwave Imager (GMI) instrument at a constant rate of 32 rpm continuously for the 3-year-plus mission life. The GMI instrument will fly on the core Global Precipitation Measurement (GPM) spacecraft and will be used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and frequent precipitation measurements [1]. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard Space Flight Center (GSFC) to design, build, and test the GMI instrument. The SMA design has to meet a challenging set of requirements and is based on BATC space-mechanisms heritage and on lessons-learned design changes made to the WindSat BAPTA mechanism, which is currently operating on-orbit and has recently surpassed 8 years of flight operation.
NASA Astrophysics Data System (ADS)
Berg, W. K.
2016-12-01
The Global Precipitation Measurement (GPM) Core Observatory, which was launched in February of 2014, provides a number of advances for satellite monitoring of precipitation, including a dual-frequency radar, high-frequency channels on the GPM Microwave Imager (GMI), and coverage over middle and high latitudes. The GPM concept, however, is about producing unified precipitation retrievals from a constellation of microwave radiometers to provide approximately 3-hourly global sampling. This involves intercalibration of the input brightness temperatures from the constellation radiometers, development of an a priori precipitation database using observations from the state-of-the-art GPM radiometer and radars, and accounting for sensor differences in the retrieval algorithm in a physically consistent way. Efforts by the GPM inter-satellite calibration working group, or XCAL team, and the radiometer algorithm team to create unified precipitation retrievals from the GPM radiometer constellation were fully implemented in the current version 4 GPM precipitation products. These include precipitation estimates from a total of seven conical-scanning and six cross-track scanning radiometers, as well as high spatial and temporal resolution global Level 3 gridded products. Work is now underway to extend this unified constellation-based approach to the combined TRMM/GPM data record starting in late 1997. The goal is to create a long-term global precipitation dataset employing these state-of-the-art calibration and retrieval algorithm approaches. This new long-term global precipitation dataset will incorporate the physics provided by the combined GPM GMI and DPR sensors into the a priori database, extend prior TRMM constellation observations to high latitudes, and expand the available TRMM precipitation data to the full constellation of available conical and cross-track scanning radiometers. 
This combined TRMM/GPM precipitation data record will thus provide a high-quality high-temporal resolution global dataset for use in a wide variety of weather and climate research applications.
Operating Point Self-Regulator for Giant Magneto-Impedance Magnetic Sensor.
Zhou, Han; Pan, Zhongming; Zhang, Dasha
2017-05-11
The giant magneto-impedance (GMI) magnetic sensor based on the amorphous wire is believed to offer tiny dimensions, high sensitivity, quick response, and small power consumption. This kind of sensor usually works under a bias magnetic field, called the sensor's operating point. However, changes in the direction and intensity of the external magnetic field, or changes in the sensing direction and position of the sensor, will lead to fluctuations in the operating point when the sensor operates without any magnetic shield. In this work, a GMI sensor based on an operating-point self-regulator is designed to overcome this problem. The regulator is based on compensated feedback control that can maintain the operating point of a GMI sensor in a uniform position. With the regulator, the GMI sensor exhibits a stable sensitivity regardless of the external magnetic field. In comparison with former work, the developed operating-point regulator improves the accuracy and stability of the operating point and therefore decreases the noise and disturbances introduced into the GMI sensor by the previous self-regulation system.
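The compensated-feedback idea can be sketched numerically: a feedback loop adds a compensating bias field so that the total field at the sensor stays at the chosen operating point regardless of drift in the external field. All values below (units, gain, operating point) are invented for illustration; the actual regulator described above is implemented in analog circuitry:

```python
# Toy integral-feedback loop holding a GMI sensor at its operating point.
# Field values are in arbitrary units; gains are illustrative only.

def regulate_bias(external_field, operating_point=2.0, gain=0.2, steps=100):
    """Return the bias field that holds (bias + external) at the operating point."""
    bias = 0.0
    for _ in range(steps):
        total = bias + external_field             # field actually seen by the sensor
        bias += gain * (operating_point - total)  # integral correction step
    return bias
```

For any (constant) external field, the bias converges to `operating_point - external_field`, so the total field, and hence the sensor's sensitivity, stays fixed even as the environment changes.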
Graded motor imagery and the impact on pain processing in a case of CRPS.
Walz, Andrea D; Usichenko, Taras; Moseley, G Lorimer; Lotze, Martin
2013-03-01
Graded motor imagery (GMI) shows promising results for patients with complex regional pain syndrome (CRPS). In a case of chronic unilateral CRPS type I, we applied GMI for 6 weeks and recorded clinical parameters and cerebral activation using functional magnetic resonance imaging (fMRI; pre-GMI, after each GMI block, and after 6 months). Changes in fMRI activity during movement execution were mapped in areas associated with pain processing. A healthy participant served as a control for habituation effects. Pain intensity decreased over the course of GMI, and relief was maintained at follow-up. fMRI during movement execution revealed marked changes in S1 and S2 (areas of discriminative pain processing), which seemed to be associated with pain reduction, but none in the anterior insula or the anterior cingulate cortex (areas of affective pain processing). After mental rotation training, the activation intensity of the posterior parietal cortex was reduced to one third. Our case report develops a design capable of differentiating cerebral changes associated with behavioral therapy of CRPS type I.
Operating Point Self-Regulator for Giant Magneto-Impedance Magnetic Sensor
Zhou, Han; Pan, Zhongming; Zhang, Dasha
2017-01-01
The giant magneto-impedance (GMI) magnetic sensor based on the amorphous wire is believed to offer tiny dimensions, high sensitivity, quick response, and small power consumption. This kind of sensor usually works under a bias magnetic field, called the sensor's operating point. However, changes in the direction and intensity of the external magnetic field, or changes in the sensing direction and position of the sensor, will lead to fluctuations in the operating point when the sensor operates without any magnetic shield. In this work, a GMI sensor based on an operating-point self-regulator is designed to overcome this problem. The regulator is based on compensated feedback control that can maintain the operating point of a GMI sensor in a uniform position. With the regulator, the GMI sensor exhibits a stable sensitivity regardless of the external magnetic field. In comparison with former work, the developed operating-point regulator improves the accuracy and stability of the operating point and therefore decreases the noise and disturbances introduced into the GMI sensor by the previous self-regulation system. PMID:28492514
GMI-IPS: Python Processing Software for Aircraft Campaigns
NASA Technical Reports Server (NTRS)
Damon, M. R.; Strode, S. A.; Steenrod, S. D.; Prather, M. J.
2018-01-01
NASA's Atmospheric Tomography Mission (ATom) seeks to understand the impact of anthropogenic air pollution on gases in the Earth's atmosphere. Four flight campaigns are being deployed on a seasonal basis to establish a continuous global-scale data set intended to improve the representation of chemically reactive gases in global atmospheric chemistry models. The Global Modeling Initiative (GMI) is creating chemical transport simulations on a global scale for each of the ATom flight campaigns. To meet the computational demands required to translate the GMI simulation data to grids associated with the flights from the ATom campaigns, the GMI ICARTT Processing Software (GMI-IPS) has been developed and is providing key functionality for data processing and analysis in this ongoing effort. The GMI-IPS is written in Python and provides computational kernels for data interpolation and visualization tasks on GMI simulation data. A key feature of the GMI-IPS is its ability to read ICARTT files, a text-based file format for airborne instrument data, and extract the flight information that defines the regional and temporal grid parameters associated with an ATom flight. Perhaps most importantly, the GMI-IPS creates ICARTT files containing GMI simulated data, which are used in collaboration with ATom instrument teams and other modeling groups. The first main task of the GMI-IPS is to interpolate GMI model data to the finer temporal resolution (1-10 seconds) of a given flight. The model data include basic fields such as temperature and pressure, but the main focus of this effort is to provide species concentrations of chemical gases for ATom flights. The software, which uses parallel computation techniques for data-intensive tasks, linearly interpolates each of the model fields to the time resolution of the flight. The temporally interpolated data are then saved to disk and used to create additional derived quantities. 
In order to translate the GMI model data to the spatial grid of the flight path, as defined by the pressure, latitude, and longitude points at each flight time record, a weighted average is calculated from the nearest neighbors in two dimensions (latitude, longitude). Using SciPy's RegularGridInterpolator, interpolation functions are generated for the GMI model grid and the calculated weighted averages. The flight path points are then extracted from the ATom ICARTT instrument file and passed to the multi-dimensional interpolating functions to generate GMI field quantities along the spatial path of the flight. The interpolated field quantities are then written to an ICARTT data file, which is stored for further manipulation. The GMI-IPS is aware of a generic ATom ICARTT header format containing basic information for all flight campaigns. The GMI-IPS includes logic to edit metadata for the derived field quantities, as well as to modify the generic header data such as processing dates and associated instrument files. The interpolated data are then appended to the modified header data, and the ICARTT processing is complete for the given flight and ready for collaboration. The output ICARTT data adhere to the ICARTT file format standard V1.1. The visualization component of the GMI-IPS uses Matplotlib extensively and has several functions ranging in complexity. First, it creates a model background curtain for the flight (time versus model eta levels) with the interpolated flight data superimposed on the curtain. Second, it creates a time-series plot of the interpolated flight data. Lastly, the visualization component creates averaged 2D model slices (longitude versus latitude) with overlaid flight-track circles at key pressure levels. The GMI-IPS consists of a handful of classes and supporting functionality that have been generalized to be compatible with any ICARTT file that adheres to the base class definition. 
The base class represents a generic ICARTT entry, defining only a single time entry and 3D spatial positioning parameters. Other classes inherit from this base class: several classes for input ICARTT instrument files, which contain the flight positioning information needed as a basis for data processing, and other classes for output ICARTT files, which contain the interpolated model data. Utility classes provide functionality for routine procedures such as comparing field names among ICARTT files, reading ICARTT entries from a data file and storing them in data structures, and returning a reduced spatial grid based on a collection of ICARTT entries. Although the GMI-IPS is built around GMI model data, it can be adapted with reasonable effort to any simulation that creates Hierarchical Data Format (HDF) files, and likewise to ICARTT files outside the context of the ATom mission. The GMI-IPS contains just under 30,000 lines of code, eight classes, and a dozen drivers and utility programs. It is maintained with Git source code management and has been used to deliver processed GMI model data for the ATom campaigns that have taken place to date.
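The spatial interpolation step described above can be illustrated with a short sketch: build a RegularGridInterpolator over a (latitude, longitude) model grid and sample it along a flight track. The grid, the temperature field, and the track points below are invented toy values, not GMI-IPS code or ATom data:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy model grid with 2-degree latitude/longitude spacing (illustrative only)
lats = np.linspace(-90, 90, 91)
lons = np.linspace(-180, 178, 180)

# Invented temperature field: falls off with latitude, constant in longitude
temperature = 288.0 - 0.5 * np.abs(lats)[:, None] + 0.0 * lons[None, :]

# Interpolating function over the regular model grid
interp = RegularGridInterpolator((lats, lons), temperature)

# Hypothetical flight-track (lat, lon) points at successive time records
track = np.array([[47.5, -124.0], [47.6, -123.5], [47.8, -123.0]])
values_along_track = interp(track)  # model field sampled along the flight path
```

The real software does this per model field and per time record, then writes the sampled values out as ICARTT entries alongside the flight's positioning data.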
The Disturbing Effect of the Stray Magnetic Fields on Magnetoimpedance Sensors
Wang, Tao; Zhou, Yong; Lei, Chong; Zhi, Shaotao; Guo, Lei; Li, Hengyu; Wu, Zhizheng; Xie, Shaorong; Luo, Jun; Pu, Huayan
2016-01-01
The disturbing effect of the stray magnetic fields of Fe-based amorphous ribbons on the giant magnetoimpedance (GMI) sensor has been investigated systematically in this paper. Two simple methods were used for examining the disturbing effect of the stray magnetic fields of ribbons on the GMI sensor. In order to study the influence of the stray magnetic fields on the GMI effect, the square-shaped amorphous ribbons were tested in front, at the back, on the left and on the top of a meander-line GMI sensor made up of soft ferromagnetic films, respectively. Experimental results show that the presence of ribbons in front or at the back of GMI sensor shifts the GMI curve to a lower external magnetic field. On the contrary, the presence of ribbons on the left or on the top of the GMI sensor shifts the GMI curve to a higher external magnetic field, which is related to the coupling effect of the external magnetic field and the stray magnetic fields. The influence of the area and angle of ribbons on GMI was also studied in this work. The GMI sensor exhibits high linearity for detection of the stray magnetic fields, which has made it feasible to construct a sensitive magnetometer for detecting the typical stray magnetic fields of general soft ferromagnetic materials. PMID:27763498
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stang, John H.
2005-12-19
Cummins Inc., in partnership with the Department of Energy, has developed technology for a new highly efficient, very low emission diesel engine for light trucks and sport utility vehicles. This work began in April 1997 and started with very aggressive goals for vehicles in the 5751 to 8500 pound GCW weight class. The primary program goals were as follows: (1) EMISSIONS -- NOx = 0.50 g/mi; PM = 0.05 g/mi; CO = 2.8 g/mi; and NMHC = 0.07 g/mi. California decided to issue new and even tougher LEV II light truck regulations late in 1999. EPA also issued its lower Tier 2 regulations late in 2000. The net result was that the targets for this diesel engine project were lowered; these goals were eventually modified by the publication of Federal Tier 2 emission standards early in 2000 to the following: NOx = 0.07 g/mi; and PM = 0.01 g/mi. (2) FUEL ECONOMY -- The fuel economy goal was a 50 percent MPG improvement (combined city/highway) over the 1997 gasoline powered light truck or sport utility vehicle in the vehicle class that this diesel engine is being designed to replace. The goal for fuel economy remained at a 50 percent MPG improvement, even with the emissions goal revisions. (3) COOPERATIVE DEVELOPMENT -- Regular design reviews of the engine program will be conducted with a vehicle manufacturer to ensure that the concepts and design specifics are commercially feasible. (DaimlerChrysler has provided Cummins with this design review input.) Cummins has essentially completed a demonstration of proof-of-principle for a diesel engine platform using advanced combustion and fuel system technologies. Cummins reported very early progress in this project, evidence that new diesel engine technology had been developed that demonstrated the feasibility of the above emissions goals. Emissions levels of NOx = 0.4 g/mi and PM = 0.06 g/mi were demonstrated for a 5250 lb test weight vehicle with passive aftertreatment only. 
These results were achieved using the full chassis dynamometer FTP-75 test procedure, demonstrating compliance with the Tier 2 Interim Bin 10 standards that apply to vehicles in the MY2004 through MY2007 timeframe. In further technology development with active aftertreatment management, Cummins reported that the emissions goals for the Tier 2 Bin 5 standards were met on an engine running the full FTP-75 test procedure. The fuel economy on the chassis tests was measured at over a 59 percent MPG improvement over the gasoline engines offered in typical SUVs and light trucks. The above demonstration used only in-cylinder fueling for management of the aftertreatment system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John H. Stang
2005-12-31
Cummins Inc., in partnership with the Department of Energy, has developed technology for a new highly efficient, very low emission diesel engine for light trucks and sport utility vehicles. This work began in April 1997 and started with very aggressive goals for vehicles in the 5751 to 8500 pound GCW weight class. The primary program goals were as follows: (1) EMISSIONS--NO{sub x} = 0.50 g/mi; PM = 0.05 g/mi; CO = 2.8 g/mi; and NMHC = 0.07 g/mi. California decided to issue new and even tougher LEV II light truck regulations late in 1999. EPA also issued its lower Tier 2 regulations late in 2000. The net result was that the targets for this diesel engine project were lowered; these goals were eventually modified by the publication of Federal Tier 2 emission standards early in 2000 to the following: NO{sub x} = 0.07 g/mi; and PM = 0.01 g/mi. (2) FUEL ECONOMY--The fuel economy goal was a 50 percent MPG improvement (combined city/highway) over the 1997 gasoline powered light truck or sport utility vehicle in the vehicle class that this diesel engine is being designed to replace. The goal for fuel economy remained at a 50 percent MPG improvement, even with the emissions goal revisions. (3) COOPERATIVE DEVELOPMENT--Regular design reviews of the engine program will be conducted with a vehicle manufacturer to ensure that the concepts and design specifics are commercially feasible. (DaimlerChrysler has provided Cummins with this design review input.) Cummins has essentially completed a demonstration of proof-of-principle for a diesel engine platform using advanced combustion and fuel system technologies. Cummins reported very early progress in this project, evidence that new diesel engine technology had been developed that demonstrated the feasibility of the above emissions goals. Emissions levels of NOx = 0.4 g/mi and PM = 0.06 g/mi were demonstrated for a 5250 lb test weight vehicle with passive aftertreatment only. 
These results were achieved using the full chassis dynamometer FTP-75 test procedure, demonstrating compliance with the Tier 2 Interim Bin 10 standards that apply to vehicles in the MY2004 through MY2007 timeframe. In further technology development with active aftertreatment management, Cummins reported that the emissions goals for the Tier 2 Bin 5 standards were met on an engine running the full FTP-75 test procedure. The fuel economy on the chassis tests was measured at over a 59 percent MPG improvement over the gasoline engines offered in typical SUVs and light trucks. The above demonstration used only in-cylinder fueling for management of the aftertreatment system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stang, John H.
1997-12-01
Cummins Inc., in partnership with the Department of Energy, has developed technology for a new highly efficient, very low emission diesel engine for light trucks and sport utility vehicles. This work began in April 1997 and started with very aggressive goals for vehicles in the 5751 to 8500 pound GCW weight class. The primary program goals were as follows: (1) EMISSIONS -- NOx = 0.50 g/mi; PM = 0.05 g/mi; CO = 2.8 g/mi; NMHC = 0.07 g/mi. California decided to issue new and even tougher LEV II light truck regulations late in 1999. EPA also issued its lower Tier 2 regulations late in 2000. The net result was that the targets for this diesel engine project were lowered; these goals were eventually modified by the publication of Federal Tier 2 emission standards early in 2000 to the following: NOx = 0.07 g/mi; PM = 0.01 g/mi. (2) FUEL ECONOMY -- The fuel economy goal was a 50 percent MPG improvement (combined city/highway) over the 1997 gasoline powered light truck or sport utility vehicle in the vehicle class that this diesel engine is being designed to replace. The goal for fuel economy remained at a 50 percent MPG improvement, even with the emissions goal revisions. (3) COOPERATIVE DEVELOPMENT -- Regular design reviews of the engine program will be conducted with a vehicle manufacturer to ensure that the concepts and design specifics are commercially feasible. (DaimlerChrysler has provided Cummins with this design review input.) Cummins has essentially completed a demonstration of proof-of-principle for a diesel engine platform using advanced combustion and fuel system technologies. Cummins reported very early progress in this project, evidence that new diesel engine technology had been developed that demonstrated the feasibility of the above emissions goals. Emissions levels of NOx = 0.4 g/mi and PM = 0.06 g/mi were demonstrated for a 5250 lb test weight vehicle with passive aftertreatment only. 
These results were achieved using the full chassis dynamometer FTP-75 test procedure, demonstrating compliance with the Tier 2 Interim Bin 10 standards that apply to vehicles in the MY2004 through MY2007 timeframe. In further technology development with active aftertreatment management, Cummins reported that the emissions goals for the Tier 2 Bin 5 standards were met on an engine running the full FTP-75 test procedure. The fuel economy on the chassis tests was measured at over a 59 percent MPG improvement over the gasoline engines offered in typical SUVs and light trucks. The above demonstration used only in-cylinder fueling for management of the aftertreatment system.
NASA Technical Reports Server (NTRS)
Strode, Sarah; Rodriguez, Jose; Steenrod, Steve; Liu, Junhua; Strahan, Susan; Nielsen, Eric
2015-01-01
We describe the capabilities of the Global Modeling Initiative (GMI) chemical transport model (CTM) with a special focus on capabilities related to the Atmospheric Tomography Mission (ATom). Several science results based on GMI hindcast simulations and preliminary results from the ATom simulations are highlighted. We also discuss the relationship between GMI and GEOS-5.
Effect of Endurance Training on The Lactate and Glucose Minimum Intensities
Junior, Pedro B.; de Andrade, Vitor L.; Campos, Eduardo Z.; Kalva-Filho, Carlos A.; Zagatto, Alessandro M.; de Araújo, Gustavo G.; Papoti, Marcelo
2018-01-01
Due to the controversy about the sensitivity of lactate minimum intensity (LMI) to training, and the need to develop other tools for aerobic fitness evaluation, the purpose of this study was to analyze the sensitivity of glucose minimum intensity (GMI) and LMI to endurance training. Eight trained male cyclists (21.4 ± 1.9 years, 67.6 ± 7.5 kg, and 1.72 ± 0.10 m) were evaluated twice, before and after 12 weeks of training. GMI and LMI were calculated, respectively, as the intensities at the lowest blood glucose and lactate values attained during an incremental test performed after hyperlactatemia induction, and VO2max was determined during a standard incremental effort. The training was prescribed in three different zones and controlled by heart rate (HR). The training distribution was equivalent to 59.7%, 25.0%, and 15.3% below, at, and above the anaerobic threshold HR, respectively. The anaerobic threshold evaluated by GMI and LMI improved by 9.89 ± 4.35% and 10.28 ± 9.89%, respectively, after training, whereas VO2max improved by only 2.52 ± 1.81%. No differences were found between GMI and LMI in pre- (218.2 ± 22.1 vs. 215.0 ± 18.6 W) and post-training (240.6 ± 22.9 vs. 237.5 ± 18.8 W) situations. LMI and GMI were sensitive to 12 weeks of aerobic training in cyclists; thus, both protocols can be used to assess aerobic adaptation, diagnose athletes, and prescribe training. Key points: The lactate and glucose minimum intensities (GMI) can be used for monitoring training effects in cyclists. Although both GMI and lactate minimum intensities are important indices of aerobic fitness, they cannot be used interchangeably to determine aerobic fitness. The polarized training was effective for improving maximal oxygen uptake in trained cyclists. PMID:29535585
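The "minimum" in GMI and LMI is commonly located by fitting a curve to the metabolite-versus-intensity points from the incremental test and taking its vertex. A minimal sketch of that parabolic-fit approach, with invented readings (not the study's data or exact protocol):

```python
import numpy as np

# Hypothetical blood lactate readings at increasing power outputs during an
# incremental test following hyperlactatemia induction (all values invented)
power = np.array([150.0, 175.0, 200.0, 225.0, 250.0, 275.0])   # watts
lactate = np.array([5.2, 4.4, 4.0, 3.9, 4.3, 5.1])             # mmol/L

# Fit a parabola and take its vertex as the lactate minimum intensity
a, b, c = np.polyfit(power, lactate, 2)
minimum_intensity = -b / (2.0 * a)   # intensity (W) at the lowest fitted lactate
```

The same vertex calculation applies to blood glucose readings for GMI; only the measured variable changes.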
NASA Technical Reports Server (NTRS)
Oman, Luke D.; Strahan, Susan E.
2016-01-01
Simulations using reanalyzed meteorological conditions have long been used to understand the causes of atmospheric composition change over the recent past. Using the new Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) meteorology, chemistry simulations are being conducted to create products covering 1980-2016 for the atmospheric composition community. These simulations use the Global Modeling Initiative (GMI) chemical mechanism in two different models: the GMI Chemical Transport Model (CTM) and the GEOS-5 model in Replay mode. Replay mode means an integration of the GEOS-5 general circulation model that is incrementally adjusted each time step toward the MERRA-2 analysis. The GMI CTM is a 1 deg x 1.25 deg simulation, and the MERRA-2 GMI Replay simulation uses the native MERRA-2 grid of approximately 1/2 deg horizontal resolution on the cubed sphere. The Replay simulation is driven by the online use of key MERRA-2 meteorological variables (i.e., U, V, T, and surface pressure), with all other variables calculated in response to those variables. A specialized set of transport diagnostics is included in both runs to better understand trace gas transport and changes over the recent past.
Current Status of Japanese Global Precipitation Measurement (GPM) Research Project
NASA Astrophysics Data System (ADS)
Kachi, Misako; Oki, Riko; Kubota, Takuji; Masaki, Takeshi; Kida, Satoshi; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2013-04-01
The Global Precipitation Measurement (GPM) mission is led by the Japan Aerospace Exploration Agency (JAXA) and the National Aeronautics and Space Administration (NASA), in collaboration with many international partners who will provide a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), developed by JAXA and the National Institute of Information and Communications Technology (NICT), and the GPM Microwave Imager (GMI), developed by NASA. The GPM Core Observatory is scheduled to be launched in early 2014. JAXA also provides the Global Change Observation Mission 1st - Water (GCOM-W1), named "SHIZUKU," as one of the constellation satellites. The SHIZUKU satellite was launched on 18 May 2012 from JAXA's Tanegashima Space Center, and public data release for the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board was planned for January 2013 (Level 1 products) and May 2013 (Level 2 products, including precipitation). The Japanese GPM research project conducts scientific activities on algorithm development, ground validation, and application research, including production of research products. In addition, we promote collaborative studies in Japan and Asian countries, and public relations activities to extend the potential user base of satellite precipitation products. In the pre-launch phase, most of our activities are focused on algorithm development and the related ground validation. For the GPM standard products, JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and the DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map product as a national product, distributing an hourly, 0.1-degree horizontal resolution rainfall map.
All standard algorithms, including the Japan-US joint algorithms, will be reviewed by the Japan-US Joint Precipitation Measuring Mission (PMM) Science Team (JPST) before release. The DPR Level 2 algorithm is being developed by the DPR Algorithm Team, led by Japan under the NASA-JAXA Joint Algorithm Team. The Level 2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. At-launch code was developed in December 2012. In addition, JAXA and NASA have provided synthetic DPR L1 data, and tests have been performed using them. The Japanese Global Rainfall Map algorithm for the GPM mission has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007. The GSMaP near-real-time and reanalysis versions have been in operation at JAXA, with browse images and binary data available at the GSMaP web site (http://sharaku.eorc.jaxa.jp/GSMaP/). The GSMaP algorithm for GPM is developed in collaboration with the AMSR2 standard algorithm for the precipitation product, and their validation studies are closely related. As the JAXA GPM product, we will provide a 0.1-degree grid, hourly product for standard and near-real-time processing. Outputs will include hourly rainfall, gauge-calibrated hourly rainfall, and several quality flags (satellite information, time information, and gauge quality) over global areas from 60°S to 60°N. The at-launch code of GSMaP for GPM is under development and will be delivered to the JAXA GPM Mission Operation System by April 2013.
The at-launch code will include several updates to the microwave imager and sounder algorithms and databases, as well as the introduction of rain-gauge correction.
Phase 1 Study of the E-Selectin Inhibitor GMI 1070 in Patients with Sickle Cell Anemia
Wun, Ted; Styles, Lori; DeCastro, Laura; Telen, Marilyn J.; Kuypers, Frans; Cheung, Anthony; Kramer, William; Flanner, Henry; Rhee, Seungshin; Magnani, John L.; Thackray, Helen
2014-01-01
Background: Sickle cell anemia is an inherited disorder of hemoglobin that leads to a variety of acute and chronic complications. Abnormal cellular adhesion, mediated in part by selectins, has been implicated in the pathophysiology of the vaso-occlusion seen in sickle cell anemia, and selectin inhibition was able to restore blood flow in a mouse model of sickle cell disease. Methods: We performed a Phase 1 study of the selectin inhibitor GMI 1070 in patients with sickle cell anemia. Fifteen clinically stable patients received GMI 1070 in two infusions. Results: The drug was well tolerated without significant adverse events. There was a modest increase in total peripheral white blood cell count without clinical symptoms. Plasma concentrations were well described by a two-compartment model with an elimination T1/2 of 7.7 hours and CLr of 19.6 mL/hour/kg. Computer-assisted intravital microscopy showed transient increases in red blood cell velocity in 3 of the 4 patients studied. Conclusions: GMI 1070 was safe in stable patients with sickle cell anemia, and there was a suggestion of increased blood flow in a subset of patients. At some time points between 4 and 48 hours after treatment with GMI 1070, there were significant decreases in biomarkers of endothelial activation (sE-selectin, sP-selectin, sICAM), leukocyte activation (MAC-1, LFA-1, PM aggregates) and the coagulation cascade (tissue factor, thrombin-antithrombin complexes). Development of GMI 1070 for the treatment of acute vaso-occlusive crisis is ongoing. Trial Registration: ClinicalTrials.gov NCT00911495 PMID:24988449
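As a rough illustration of what the reported elimination half-life implies, the terminal-phase decay can be sketched with first-order kinetics. This is a simplification of the paper's two-compartment model (it ignores the distribution phase), and the function names are ours:

```python
import math

def elimination_rate(t_half_h):
    """First-order elimination rate constant k = ln(2) / T1/2 (per hour)."""
    return math.log(2.0) / t_half_h

def fraction_remaining(t_h, t_half_h):
    """Fraction of drug left in the terminal phase after t hours."""
    return math.exp(-elimination_rate(t_half_h) * t_h)

# With the reported T1/2 of 7.7 h, roughly 11-12% remains after 24 h:
print(round(fraction_remaining(24.0, 7.7), 3))   # -> 0.115
```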
In-plane omnidirectional magnetic field sensor based on Giant Magneto Impedance (GMI)
NASA Astrophysics Data System (ADS)
Díaz-Rubio, Ana; García-Miquel, Héctor; García-Chocano, Víctor Manuel
2017-12-01
In this work, the design and characterization of an omnidirectional in-plane magnetic field sensor are presented. The sensor is based on the Giant Magneto Impedance (GMI) effect in glass-coated amorphous microwires of composition (Fe6Co94)72.5Si12.5B15. For the first time, a circular loop made from a microwire is used to provide an omnidirectional response. In order to estimate the GMI response of the circular loop, we used a theoretical model of GMI, determining the response as the sum of longitudinal sections with different angles of incidence. As a consequence of the circular geometry, the GMI ratio of the sensor is reduced to 15%, compared with 100% for the axial GMI response of a straight microwire. The sensor response has been experimentally verified, and the GMI response of the circular loop has been studied as a function of the magnetic field, driving current, and frequency. First, we measured the GMI response of a straight microwire for different angles of incidence, covering the full range between the tangential and perpendicular directions relative to the microwire axis. Then, using these results, we experimentally verified the decomposition of a circularly shaped microwire into longitudinal segments with different angles of incidence. Finally, we designed a signal conditioning circuit for the omnidirectional magnetic field sensor, and its response has been studied as a function of the amplitude of the incident magnetic field.
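The decomposition used to model the loop — treating the circle as many short straight segments, each responding only to the field component along its own axis — can be sketched as below. The axial response curve here is a made-up placeholder, not the measured (Fe6Co94)72.5Si12.5B15 data:

```python
import math

def axial_gmi_ratio(h_oe):
    """Placeholder axial GMI curve for a straight microwire (percent);
    peaks at zero field and falls off with |H|. Illustrative only."""
    return 100.0 / (1.0 + (h_oe / 2.0) ** 2)

def loop_gmi_ratio(h_oe, n_segments=360):
    """Circular loop approximated as n straight segments; segment i lies at
    angle theta_i, so it sees the longitudinal component H * cos(theta_i)."""
    total = 0.0
    for i in range(n_segments):
        theta = 2.0 * math.pi * i / n_segments
        total += axial_gmi_ratio(h_oe * math.cos(theta))
    return total / n_segments

print(loop_gmi_ratio(0.0))   # at zero field every segment sees H = 0 -> 100.0
```

Averaging over all segment orientations is what makes the loop's response independent of the in-plane field direction, at the cost of a reduced overall GMI ratio.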
NASA Astrophysics Data System (ADS)
Williams, C. R.
2012-12-01
The NASA Global Precipitation Measurement (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees of freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found that the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which sets the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies below 100 GHz.
Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 and 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV-observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independently of cloud microphysics research. Another advantage is that multiple scattering tables will be needed for frozen precipitation; scattering tables are being developed for individual frozen particles based on habit, density, and operating frequency. A third advantage is that this framework provides a way to propagate GV findings about DSD correlations into the integral tables, and thus into satellite algorithms.
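The table separation described above can be illustrated with a toy example: a "scattering table" of per-drop cross sections (here a made-up Rayleigh-like sigma proportional to D^6) combined with a two-parameter normalized gamma DSD in an "integral table" step. Function names, the fixed shape parameter mu, and all numbers are our assumptions, not the Working Group's tables.

```python
import math

def gamma_dsd(d_mm, nw, dm_mm, mu=3.0):
    """Normalized gamma raindrop size distribution N(D)."""
    f = (6.0 / 3.67**4) * (3.67 + mu)**(mu + 4) / math.gamma(mu + 4)
    return nw * f * (d_mm / dm_mm)**mu * math.exp(-(3.67 + mu) * d_mm / dm_mm)

def integral_table_entry(scatter_table, nw, dm_mm, d_step_mm=0.1):
    """Integrate sigma(D) * N(D) dD over the sizes listed in the scattering
    table; the scattering table itself knows nothing about the DSD."""
    return sum(sigma * gamma_dsd(d, nw, dm_mm) * d_step_mm
               for d, sigma in scatter_table)

# Toy scattering table: drop diameters 0.1-6.9 mm, Rayleigh-like sigma:
table = [(d / 10.0, (d / 10.0) ** 6) for d in range(1, 70)]
z_small = integral_table_entry(table, nw=8000.0, dm_mm=1.0)
z_large = integral_table_entry(table, nw=8000.0, dm_mm=1.5)
print(z_small < z_large)   # larger Dm -> larger integrated value -> True
```

Swapping in a different scattering table (e.g. for a frozen-particle habit) changes only the `(D, sigma)` pairs, leaving the DSD integration step untouched, which is exactly the modularity the abstract argues for.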
Kondalkar, Vijay V; Li, Xiang; Park, Ikmo; Yang, Sang Sik; Lee, Keekeun
2018-02-05
A chipless, wireless current sensor system was developed using a giant magnetoimpedance (GMI) magnetic sensor and a one-port surface acoustic wave (SAW) reflective delay line for real-time power monitoring in a current-carrying conductor. The GMI sensor has a high-quality crystalline structure in each layer, which contributes to high sensitivity and good linearity in a magnetic field of 3-16 Oe. The 400 MHz RF energy generated by the interdigital transducer (IDT)-type reflector on the one-port SAW delay line was used as an activation source for the GMI magnetic sensor. The one-port SAW delay line replaces a conventional transceiver system composed of thousands of transistors, thus enabling chipless and wireless operation. We confirmed a large variation in the amplitude of the SAW reflection peak with a change in the impedance of the GMI sensor caused by the current flow through the conductor. Good linearity and a sensitivity of ~0.691 dB/A were observed for currents in the range 1-12 A. Coupling-of-Modes (COM) modeling and impedance matching analysis were also performed to predict the device performance in advance, and the predictions were compared with the experimental results.
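With the reported linear response (~0.691 dB/A over 1-12 A), reading a current back from the change in the SAW reflection-peak amplitude is a one-line inversion. The reference point at 1 A (zero amplitude change at the bottom of the range) is our assumption for illustration:

```python
def current_from_peak_change(delta_db, sensitivity_db_per_a=0.691, i_ref_a=1.0):
    """Invert the linear calibration: current (A) from the change in the
    SAW reflection-peak amplitude (dB) relative to the reading at i_ref."""
    return i_ref_a + delta_db / sensitivity_db_per_a

print(round(current_from_peak_change(3.455), 2))   # 3.455 dB shift -> 6.0 A
```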
General MACOS Interface for Modeling and Analysis for Controlled Optical Systems
NASA Technical Reports Server (NTRS)
Sigrist, Norbert; Basinger, Scott A.; Redding, David C.
2012-01-01
The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows for seamless interfacing with modeling tools from other disciplines, making it possible to integrate dynamics, structures, and thermal models with the addition of control systems for deformable optics and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be expert at programming MACOS. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance with thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means of accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user/modeling tasks, and making MACOS well suited for fast prototyping.
Selecting Meteorological Input for the Global Modeling Initiative Assessments
NASA Technical Reports Server (NTRS)
Strahan, Susan; Douglass, Anne; Prather, Michael; Coy, Larry; Hall, Tim; Rasch, Phil; Sparling, Lynn
1999-01-01
The Global Modeling Initiative (GMI) science team has developed a three-dimensional chemistry and transport model (CTM) to evaluate the impact of the exhaust of supersonic aircraft on the stratosphere. An important goal of the GMI is to test modules for numerical transport, photochemical integration, and model dynamics within a common framework. This work focuses on the dependence of the overall assessment on the wind and temperature fields used by the CTM. Three meteorological data sets for the stratosphere were available to GMI: the National Center for Atmospheric Research Community Climate Model (CCM2), the Goddard Earth Observing System Data Assimilation System (GEOS-DAS), and the Goddard Institute for Space Studies general circulation model (GISS-2'). Objective criteria were established by the GMI team to evaluate which of these three data sets provided the best representation of trace gases in the stratosphere today. Tracer experiments were devised to test various aspects of model transport. Stratospheric measurements of long-lived trace gases were selected as a test of the CTM transport. This presentation describes the criteria used in grading the meteorological fields and the resulting choice of wind fields to be used in the GMI assessment. This type of objective model evaluation will lead to a higher level of confidence in these assessments. We suggest that the diagnostic tests shown here be used to augment traditional general circulation model evaluation methods.
Chrysostomou, Stavri; Andreou, Sofia
2017-04-01
The aim of the present study was to assess the cost, acceptability and affordability of the healthy food basket (HFB) among low-income families in Cyprus. HFBs were constructed based on the National Guidelines for Nutrition and Exercise for six different types of households. Acceptability was tested through focus groups. Affordability was defined as the cost of the HFB as a percentage of the guaranteed minimum income (GMI). The value of the GMI is set equal to €480 for a single individual and increases with the size of the recipient unit in accordance with the Organization for Economic Co-operation and Development equivalence scales. The Ministry of Labour estimates that, on average, nearly 50% of the GMI is required for food. The total monthly cost of the HFB is 0.80, 1.11, 1.27, 1.28, 1.44 and 1.48 times the GMI food budget for the different types of households in Cyprus (a single woman, a single man, a couple, a single woman with two children, a single man with two children and a couple with two children, respectively). In particular, a family with two children on GMI would need to spend a large proportion of their income on the HFB (71.68%). The GMI scheme appears not to consider the cost of healthy food, and thus families on welfare payments in Cyprus are at high risk of experiencing food stress. Therefore, additional research is required to measure the cost of the six HFBs in various settings. © 2016 Dietitians Association of Australia.
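The affordability arithmetic can be made concrete. Two loud assumptions: we use the OECD-modified equivalence weights (1.0 for the first adult, 0.5 for each additional adult, 0.3 per child), which may differ from the scale the study applied, and we take the ministry's rule of thumb that about 50% of GMI is the food budget.

```python
def household_gmi(base_eur=480.0, extra_adults=0, children=0):
    """GMI scaled by OECD-modified equivalence weights (assumed here)."""
    return base_eur * (1.0 + 0.5 * extra_adults + 0.3 * children)

def hfb_to_food_budget_ratio(hfb_cost_eur, gmi_eur, food_share=0.5):
    """Healthy food basket cost as a multiple of the GMI food budget."""
    return hfb_cost_eur / (gmi_eur * food_share)

gmi = household_gmi(extra_adults=1, children=2)   # couple with two children
# Using the reported figure that the HFB absorbs 71.68% of GMI income:
hfb = 0.7168 * gmi
print(round(gmi, 2), round(hfb_to_food_budget_ratio(hfb, gmi), 2))
```

An HFB costing ~72% of income against a food budget of 50% of income yields a ratio above 1, which is the study's food-stress finding in miniature.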
NASA Astrophysics Data System (ADS)
Qin, F. X.; Peng, H. X.; Popov, V. V.; Phan, M. H.
2011-02-01
Composites consisting of glass-coated amorphous microwire Co68.59Fe4.84Si12.41B14.16 and 913 E-glass prepregs were designed and fabricated. The influences of tensile stress, annealing and number of composite layers on the giant magneto-impedance (GMI) and giant stress-impedance (GSI) effects in these composites were investigated systematically. It was found that the application of tensile stress along the microwire axis or an increase in the number of composite layers reduced the GMI effect and increased the circular anisotropy field, while the annealing treatment had the reverse effect. The value of the matrix-wire interfacial stress calculated via the GMI profiles coincided with the value of the applied effective tensile stress needed to yield similar GMI profiles. Enhancement of the GSI effect was achieved in the composites relative to the single microwire inclusion. These findings are important for the development of functional microwire-based composites for magnetic- and stress-sensing applications. They also open up a new route for probing the interfacial stress in fibre-reinforced polymer (FRP) composites.
The Advanced Composition Course at GMI.
ERIC Educational Resources Information Center
Swift, Marvin H.
The General Motors Institute (GMI), a wholly owned subsidiary of the General Motors Corporation, was created to provide leaders for its parent organization. GMI is a fully accredited undergraduate college that offers degrees in industrial, electrical, and mechanical engineering and in industrial administration. Since people in business and…
USDA-ARS?s Scientific Manuscript database
This paper provides an overview of the GMI (Geospatial Modeling Interface) simulation framework for environmental model deployment and assessment. GMI currently provides access to multiple environmental models including AgroEcoSystem-Watershed (AgES-W), Nitrate Leaching and Economic Analysis 2 (NLEA...
Wun, Ted; McCavit, Timothy L.; De Castro, Laura M.; Krishnamurti, Lakshmanan; Lanzkron, Sophie; Hsu, Lewis L.; Smith, Wally R.; Rhee, Seungshin; Magnani, John L.; Thackray, Helen
2015-01-01
Treatment of vaso-occlusive crises (VOC) or events in sickle cell disease (SCD) remains limited to symptom relief with opioids. Animal models support the effectiveness of the pan-selectin inhibitor GMI-1070 in reducing selectin-mediated cell adhesion and abrogating VOC. We studied GMI-1070 in a prospective multicenter, randomized, placebo-controlled, double-blind, phase 2 study of 76 SCD patients with VOC. Study drug (GMI-1070 or placebo) was given every 12 hours for up to 15 doses. Other treatment was per institutional standard of care. All subjects reached the composite primary end point of resolution of VOC. Although time to reach the composite primary end point was not statistically different between the groups, clinically meaningful reductions in mean and median times to VOC resolution of 41 and 63 hours (28% and 48%, P = .19 for both) were observed in the active treatment group vs the placebo group. As a secondary end point, GMI-1070 appeared safe in acute vaso-occlusion, and adverse events were not different in the two arms. Also in secondary analyses, mean cumulative IV opioid analgesic use was reduced by 83% with GMI-1070 vs placebo (P = .010). These results support a phase 3 study of GMI-1070 (now rivipansel) for SCD VOC. This trial was registered at www.clinicaltrials.gov as #NCT01119833. PMID:25733584
GMI Instrument Spin Balance Method, Optimization, Calibration, and Test
NASA Technical Reports Server (NTRS)
Ayari, Laoucet; Kubitschek, Michael; Ashton, Gunnar; Johnston, Steve; Debevec, Dave; Newell, David; Pellicciotti, Joseph
2014-01-01
The GPM Microwave Imager (GMI) instrument must spin at a constant rate of 32 rpm continuously for the 3-year mission life. Therefore, GMI must be very precisely balanced about the spin axis and center of gravity (CG) to maintain stable scan pointing and to minimize disturbances imparted to the spacecraft and attitude control on-orbit. The GMI instrument is part of the core Global Precipitation Measurement (GPM) spacecraft and is used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and frequent precipitation measurements. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard Space Flight Center to design, build, and test the GMI instrument. The GMI design has to meet a challenging set of spin balance requirements and had to be brought into simultaneous static and dynamic spin balance after the entire instrument was already assembled and before environmental tests began. The focus of this contribution is on the analytical and test activities undertaken to meet the challenging spin balance requirements of the GMI instrument. The novel process of measuring the residual static and dynamic imbalances with a very high level of accuracy and precision is presented together with the prediction of the optimal balance masses and their locations.
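The statics behind converting a measured residual imbalance into correction masses can be sketched as below. The radii, plane separation, and imbalance values are made-up illustrative numbers, not GMI's actual requirements or geometry:

```python
def static_balance_mass(u_g_cm, radius_cm):
    """Mass (g) placed at radius_cm, 180 deg from the heavy spot, to
    cancel a measured static imbalance U = m * r (g*cm)."""
    return u_g_cm / radius_cm

def couple_balance_mass(c_g_cm2, radius_cm, plane_separation_cm):
    """Mass (g) of each of two opposed correction masses, mounted in
    planes separated by plane_separation_cm, cancelling a dynamic
    (couple) imbalance C = m * r * L (g*cm^2)."""
    return c_g_cm2 / (radius_cm * plane_separation_cm)

print(static_balance_mass(120.0, 40.0))         # -> 3.0 g
print(couple_balance_mass(6000.0, 40.0, 50.0))  # -> 3.0 g per plane
```

Simultaneous static and dynamic balance, as required here, means solving both corrections together so that the added masses cancel the CG offset without introducing a new couple, and vice versa.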
Nouér, Simone A; Nucci, Marcio; Kumar, Naveen Sanath; Grazziutti, Monica; Barlogie, Bart; Anaissie, Elias
2011-10-01
Current criteria for assessing treatment response of invasive aspergillosis (IA) rely on nonspecific subjective parameters. We hypothesized that an Aspergillus-specific response definition based on the kinetics of the serum Aspergillus galactomannan index (GMI) would provide earlier and more objective response assessment. We compared the 6-week European Organization for Research and Treatment of Cancer/Mycoses Study Group (EORTC/MSG) response criteria with GMI-based response among 115 cancer patients with IA. Success according to GMI required survival with repeatedly negative GMI for ≥2 weeks. Time to response and agreement between the 2 definitions were the study endpoints. Success according to EORTC/MSG and GMI criteria was observed in 73 patients (63%) and 83 patients (72%), respectively. The GMI-based response was determined at a median of 21 days after treatment initiation (range, 15-41 days), 3 weeks before the EORTC/MSG time point, in 72 (87%) of 83 responders. Agreement between definitions was shown in all 32 nonresponders and in 73 of the 83 responders (91% overall), with an excellent κ correlation coefficient of 0.819. Among 10 patients with discordant response (EORTC/MSG failure, GMI success), 1 is alive without IA 3 years after diagnosis; in another, aspergillosis could not be detected at autopsy. The presence of other life-threatening complications in the remaining 8 patients indicates that IA had resolved. The Aspergillus-specific GMI-based criteria compare favorably with current response definitions for IA and significantly shorten time to response assessment. These criteria rely on a simple, reproducible, objective, and Aspergillus-specific test and should serve as the primary endpoint in trials of IA.
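The GMI-based success rule (repeatedly negative GMI sustained for at least two weeks) lends itself to a simple scan over serial measurements. Two hedges: the 0.5 cutoff below is the usual galactomannan positivity threshold but is our assumption, and the study's survival requirement is not modeled in this sketch.

```python
from datetime import date

def gmi_success(samples, cutoff=0.5, min_days=14):
    """True if the serial GMI values contain a run of consecutive negative
    results (GMI < cutoff) spanning at least min_days with no intervening
    positive. samples: chronologically ordered (date, GMI) pairs."""
    run_start = None
    for day, gmi in samples:
        if gmi < cutoff:
            if run_start is None:
                run_start = day
            if (day - run_start).days >= min_days:
                return True
        else:
            run_start = None      # a positive result resets the run
    return False

serial = [(date(2024, 1, 1), 2.1), (date(2024, 1, 4), 0.9),
          (date(2024, 1, 8), 0.3), (date(2024, 1, 12), 0.2),
          (date(2024, 1, 16), 0.4), (date(2024, 1, 24), 0.1)]
print(gmi_success(serial))   # negative from Jan 8 through Jan 24 -> True
```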
High Frequency Amplitude Detector for GMI Magnetic Sensors
Asfour, Aktham; Zidi, Manel; Yonnet, Jean-Paul
2014-01-01
A new concept of a high-frequency amplitude detector and demodulator for Giant Magneto-Impedance (GMI) sensors is presented. This concept combines a half-wave rectifier, with outstanding capabilities and high speed, and a feedback approach that ensures amplitude detection with an easily adjustable gain. The developed detector is capable of measuring high-frequency, very-low-amplitude signals without the use of diode-based active rectifiers or analog multipliers. The performance of this detector is addressed throughout the paper. The full circuitry of the design is given, together with a comprehensive theoretical study of the concept and experimental validation. The detector has been used for the amplitude measurement of both single-frequency and pulsed signals and for the demodulation of amplitude-modulated signals. It has also been successfully integrated in a GMI sensor prototype, and magnetic field and electrical current measurements in open- and closed-loop configurations of this sensor have been conducted. PMID:25536003
NASA Technical Reports Server (NTRS)
Strahan, Susan E.; Douglass, Anne R.; Einaudi, Franco (Technical Monitor)
2001-01-01
The Global Modeling Initiative (GMI) Team developed objective criteria for model evaluation in order to identify the best representation of the stratosphere. This work created a method to quantitatively and objectively discriminate between different models. In the original GMI study, 3 different meteorological data sets were used to run an offline chemistry and transport model (CTM). Observationally based grading criteria were derived and applied to these simulations, various aspects of stratospheric transport were evaluated, and grades were assigned. Here we report on the application of the GMI evaluation criteria to CTM simulations integrated with a new assimilated wind data set and a new general circulation model (GCM) wind data set. The Finite Volume Community Climate Model (FV-CCM) is a new GCM developed at Goddard which uses the NCAR CCM physics and the Lin and Rood advection scheme. The FV-Data Assimilation System (FV-DAS) is a new data assimilation system which uses the FV-CCM as its core model. One-year CTM simulations at 2.5 degrees longitude by 2 degrees latitude resolution were run for each wind data set. We present the evaluation of temperature and annual transport cycles in the lower and middle stratosphere in the two new CTM simulations. We include an evaluation of high-latitude transport, which was not part of the original GMI criteria. Grades for the new simulations will be compared with those assigned during the original GMI evaluations, and areas of improvement will be identified.
Choosing Meteorological Input for the Global Modeling Initiative Assessment of High Speed Aircraft
NASA Technical Reports Server (NTRS)
Douglass, A. R.; Prather, M. P.; Hall, T. M.; Strahan, S. E.; Rasch, P. J.; Sparling, L. C.; Coy, L.; Rodriguez, J. M.
1998-01-01
The Global Modeling Initiative (GMI) science team is developing a three-dimensional chemistry and transport model (CTM) to be used in assessment of the atmospheric effects of aviation. Requirements are that this model be documented, be validated against observations, use a realistic atmospheric circulation, and contain numerical transport and photochemical modules representing atmospheric processes. The model must also remain computationally efficient enough to be used for multiple scenarios and sensitivity studies. To meet these requirements, a facility model concept was developed in which the different components of the CTM are evaluated separately. The first use of the GMI model will be to evaluate the impact of the exhaust of supersonic aircraft on the stratosphere. The assessment calculations will depend strongly on the wind and temperature fields used by the CTM. Three meteorological data sets for the stratosphere are available to GMI: the National Center for Atmospheric Research Community Climate Model (CCM2), the Goddard Earth Observing System Data Assimilation System (GEOS DAS), and the Goddard Institute for Space Studies general circulation model (GISS). Objective criteria were established by the GMI team to identify the data set which provides the best representation of the stratosphere. Simulations of gases with simple chemical control were chosen to test various aspects of model transport. The three meteorological data sets were evaluated and graded based on their ability to simulate these aspects of stratospheric measurements. This paper describes the criteria used in grading the meteorological fields. The meteorological data set with the highest score, and therefore the one selected for GMI, is CCM2. This type of objective model evaluation establishes a physical basis for interpretation of differences between models and observations.
Further, the method provides a quantitative basis for defining model errors, for discriminating between different models, and for ready re-evaluation of improved models. These in turn will lead to a higher level of confidence in assessment calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, D.; Devkota, J.; Ruiz, A.
2014-09-28
A systematic study of the effect of depositing CoFe₂O₄ (CFO) films of various thicknesses (d = 0–600 nm) on the giant magneto-impedance (GMI) response of a soft ferromagnetic amorphous ribbon Co₆₅Fe₄Ni₂Si₁₅B₁₄ has been performed. The CFO films were grown on the amorphous ribbons by the pulsed laser deposition technique. X-ray diffraction and transmission electron microscopy revealed a structural variation of the CFO film from amorphous to polycrystalline as the thickness of the CFO film exceeded a critical value of 300 nm. Atomic force microscopy evidenced the increase in surface roughness of the CFO film as its thickness was increased. These changes in the crystallinity and morphology of the CFO film were found to have a distinct impact on the GMI response of the ribbon. Relative to the bare ribbon, coating of amorphous CFO films significantly enhanced the GMI response of the ribbon, while polycrystalline CFO films decreased it considerably. The maximum GMI response was achieved near the onset of the structural transition of the CFO film. These findings are of practical importance in developing high-sensitivity magnetic sensors.
NASA Technical Reports Server (NTRS)
Oman, Luke D.; Strahan, Susan E.
2017-01-01
Simulations using reanalysis meteorological fields have long been used to understand the causes of atmospheric composition change in the recent past. Using the new MERRA-2 reanalysis, we are conducting chemistry simulations to create products covering 1980-2016 for the atmospheric composition community. These simulations use the Global Modeling Initiative (GMI) chemical mechanism in two different models: the GMI Chemical Transport Model (CTM) and the GEOS-5 model in Replay mode. Replay mode means an integration of the GEOS-5 general circulation model that is incrementally adjusted each time step toward the MERRA-2 reanalysis. The GMI CTM is a 1 deg x 1.25 deg simulation and the MERRA-2 GMI Replay simulation uses the native MERRA-2 grid of approximately 1/2 deg horizontal resolution on the cubed sphere. A specialized set of transport diagnostics is included in both runs to better understand trace gas transport and its variability in the recent past.
The Impact of Assimilation of GPM Clear Sky Radiance on HWRF Hurricane Track and Intensity Forecasts
NASA Astrophysics Data System (ADS)
Yu, C. L.; Pu, Z.
2016-12-01
The impact of GPM Microwave Imager (GMI) clear-sky radiances on hurricane forecasting is examined by ingesting GMI level 1C recalibrated brightness temperatures into the NCEP Gridpoint Statistical Interpolation (GSI)-based ensemble-variational hybrid data assimilation system for the operational Hurricane Weather Research and Forecast (HWRF) system. The GMI clear-sky radiances are compared with Community Radiative Transfer Model (CRTM) simulated radiances to closely study the quality of the radiance observations. The quality check indicates the presence of bias in various channels. A static bias correction scheme, in which bias correction coefficients for the GMI data are estimated by regression over a sample large enough to be representative of the observational bias in the regions of concern, is used to correct the observational bias in GMI clear-sky radiances. Forecast results with and without assimilation of GMI radiances are compared using hurricane cases from recent hurricane seasons (e.g., Hurricane Joaquin in 2015). Diagnoses of the data assimilation results show that the bias correction coefficients obtained from the regression method can correct the inherent biases in GMI radiance data, significantly reducing observational residuals. The removal of biases also allows more data to pass GSI quality control and hence to be assimilated into the model. Forecast results for Hurricane Joaquin demonstrate that the quality of the analysis is sensitive to the bias correction, with positive impacts on the hurricane track forecast when systematic biases are removed from the radiance data. Details will be presented at the symposium.
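The regression-based static bias correction described in this abstract can be illustrated with a minimal, hedged sketch. The predictor set below (constant term, scan angle, total precipitable water) and all numerical values are assumptions chosen for demonstration; the operational GSI scheme and its predictors are not reproduced here.

```python
import numpy as np

# Synthetic sample of observed-minus-simulated (O-B) brightness-temperature
# departures for one hypothetical GMI channel. Predictors: constant,
# scan angle, and total precipitable water (TPW). All values are invented.
rng = np.random.default_rng(0)
n = 500
scan_angle = rng.uniform(-50.0, 50.0, n)        # degrees
tpw = rng.uniform(10.0, 60.0, n)                # kg/m^2
true_coef = np.array([1.5, 0.02, -0.03])        # "inherent" bias model
X = np.column_stack([np.ones(n), scan_angle, tpw])
omb = X @ true_coef + rng.normal(0.0, 0.3, n)   # noisy O-B departures (K)

# Estimate static bias coefficients by least-squares regression of O-B
# on the predictors, over a sample large enough to be representative.
coef, *_ = np.linalg.lstsq(X, omb, rcond=None)

# Corrected departures: the fitted systematic bias is removed, leaving
# residuals near the random-noise floor.
corrected = omb - X @ coef
print(coef)
print(corrected.std())
```

With the systematic part removed, more observations would fall within quality-control thresholds, which is the effect the abstract reports.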
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, O.; Kummerow, C.; Huffman, G.; Olson, W.; Kwiatkowski, J.
2015-01-01
In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku band, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and at a new Ka band. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars (including a dual-frequency product), and from a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very credible retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear intimidating, and they certainly contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25 degree x 0.25 degree hourly grid and packaged into daily ASCII (American Standard Code for Information Interchange) files that can be downloaded from the PPS FTP (File Transfer Protocol) site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, with summary information also presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for each radiometer.
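A minimal sketch of reading such a daily gridded text file. The exact column layout below (grid-box center latitude and longitude followed by the three variables) is an assumption for illustration; the authoritative layout is defined by the PPS product documentation. The gzip round trip mirrors the compressed distribution described above.

```python
import gzip
import io

# Two hypothetical lines of a 0.25-degree gridded text product:
# lat lon surface_rain_mm_h fraction_liquid fraction_convective
sample = b"""\
35.125 -97.375 2.40 0.95 0.60
35.375 -97.375 0.10 1.00 0.05
"""
compressed = gzip.compress(sample)  # daily files are gzip-compressed

records = []
with gzip.open(io.BytesIO(compressed), mode="rt") as f:
    for line in f:
        # one line per occupied grid box, whitespace-separated columns
        lat, lon, rate, f_liq, f_conv = map(float, line.split())
        records.append({"lat": lat, "lon": lon, "rain_mm_h": rate,
                        "frac_liquid": f_liq, "frac_convective": f_conv})

print(len(records), records[0]["rain_mm_h"])
```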
Comparison of satellite precipitation products with Q3 over the CONUS
NASA Astrophysics Data System (ADS)
Wang, J.; Petersen, W. A.; Wolff, D. B.; Kirstetter, P. E.
2016-12-01
The Global Precipitation Measurement (GPM) is an international satellite mission that provides a new generation of global precipitation observations. A wealth of precipitation products have been generated since the launch of the GPM Core Observatory in February of 2014. However, the accuracy of the satellite-based precipitation products is affected by discrete temporal sampling and remote spaceborne retrieval algorithms. The GPM Ground Validation (GV) program is currently underway to independently verify the satellite precipitation products, which can be carried out by comparing satellite products with ground measurements. This study compares four Day-1 GPM surface precipitation products derived from the GPM Microwave Imager (GMI), Ku-band Precipitation Radar (KU), Dual-Frequency Precipitation Radar (DPR) and DPR-GMI CoMBined (CMB) algorithms, as well as the near-real-time Integrated Multi-satellitE Retrievals for GPM (IMERG) Late Run product and precipitation retrievals from Microwave Humidity Sounders (MHS) flown on NOAA and MetOp satellites, with the NOAA Multi-Radar Multi-Sensor suite (MRMS; now called "Q3"). The comparisons are conducted over the conterminous United States (CONUS) at various spatial and temporal scales with respect to different precipitation intensities, and filtered with radar quality index (RQI) thresholds and precipitation types. Various versions of GPM products are evaluated against Q3. The latest Version-04A GPM products are in reasonably good overall agreement with Q3. Based on the mission-to-date (March 2014 - May 2016) data from all GPM overpasses, the biases relative to Q3 for GMI and DPR precipitation estimates at 0.5-degree resolution are negative, whereas the biases for CMB and KU precipitation estimates are positive. Based on all available data (March 2015 - April 2016 at this writing), the CONUS-averaged near-real-time IMERG Late Run hourly precipitation estimate is about 46% higher than Q3. Preliminary comparison of 1-year (2015) MHS precipitation estimates with Q3 shows the MHS is about 30% lower than Q3. Detailed comparison results are available at http://wallops-prf.gsfc.nasa.gov/NMQ/.
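The percentage figures quoted above are relative biases of averaged precipitation estimates against the Q3 reference. A minimal sketch with synthetic numbers (the arrays stand in for matched satellite and Q3 grid averages and are not real data):

```python
import numpy as np

# Synthetic matched pairs of satellite and Q3 (reference) precipitation
# averages for five grid boxes (mm/h); values are invented.
sat = np.array([1.2, 0.8, 2.5, 0.0, 1.1])
ref = np.array([1.0, 1.0, 2.0, 0.1, 1.0])

# Relative bias (%) of the area-averaged satellite estimate vs. reference;
# positive means the satellite estimate is higher than the reference.
pct_bias = 100.0 * (sat.mean() - ref.mean()) / ref.mean()
print(round(float(pct_bias), 1))
```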
Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean
NASA Astrophysics Data System (ADS)
Yin, Mengtao; Liu, Guosheng
2017-12-01
A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model, in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results, is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. After the 1D-Var optimization, the standard deviations of the difference between observed and simulated brightness temperatures are of a similar magnitude to the observation errors defined for the observation error covariance matrix, indicating that the variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicate that the 1D-Var approach has a positive impact on the GMI-retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. Results show that the two estimates have a similar pattern of global distribution, and the difference of their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
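The 1D-Var optimization step can be sketched in toy form. Everything below is an assumption for illustration: a 3-layer snow water "profile", a linear stand-in H for the (in reality nonlinear, DDA-based) radiative transfer operator, and invented covariances. For a linear operator, the cost-function minimum has the closed form used here.

```python
import numpy as np

# Toy 1D-Var: background snow water profile xb (3 layers) is adjusted so
# that simulated brightness temperatures better match observations y.
xb = np.array([0.10, 0.20, 0.05])            # background profile (kg/m^2)
B = np.diag([0.02, 0.02, 0.02])              # background error covariance
H = np.array([[120.0, 80.0, 40.0],           # LINEAR stand-in for the
              [60.0, 90.0, 30.0]])           # radiative transfer operator
R = np.diag([1.0, 1.0])                      # observation error covariance
y = np.array([245.0, 238.0])                 # "observed" brightness temps (K)
yb = np.array([243.0, 236.0])                # simulated temps at xb (assumed)

# Closed-form minimizer of
#   J(x) = (x-xb)^T B^-1 (x-xb) + (y-H(x))^T R^-1 (y-H(x))
# for linear H: xa = xb + B H^T (H B H^T + R)^-1 (y - yb)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - yb)

# The analysis reduces the observed-minus-simulated departures
print(np.abs(y - (yb + H @ (xa - xb))).max(), np.abs(y - yb).max())
```

The abstract's consistency check corresponds to the departures after the update being comparable in magnitude to the observation errors in R.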
Jiang, Yu; Yang, Jiacheng; Cocker, David; Karavalakis, Georgios; Johnson, Kent C; Durbin, Thomas D
2018-04-01
The regulated emissions of five 2012 and newer, low-mileage, heavy-duty Class 8 diesel trucks equipped with diesel particulate filters (DPFs) and selective catalytic reduction (SCR) systems were evaluated over test cycles representing urban, highway, and stop-and-go driving on a chassis dynamometer. NOx emissions over the Urban Dynamometer Driving Schedule (UDDS) ranged from 0.495 to 1.363 g/mi (0.136 to 0.387 g/bhp-hr) for four of the normal-emitting trucks. For those trucks, NOx emissions were lowest over the cruise (0.068 to 0.471 g/mi) and high-speed cruise (0.067 to 0.249 g/mi) cycles, and highest for the creep cycle (2.131 to 9.468 g/mi). A fifth truck showed an anomaly in that it had never regenerated throughout its relatively short operating lifetime due to its unusual, unladen service history. This truck exhibited NOx emissions of 3.519 g/mi initially over the UDDS, with UDDS NOx emissions decreasing to 0.39 g/mi after a series of parked regenerations. PM, THC, and CO emissions were found to be very low for most of the testing conditions, due to the presence of the DPF/SCR aftertreatment system, and were comparable to background levels in some cases.
Development of WAIS-III General Ability Index Minus WMS-III memory discrepancy scores.
Lange, Rael T; Chelune, Gordon J; Tulsky, David S
2006-09-01
Analysis of the discrepancy between intellectual functioning and memory ability has received some support as a useful means for evaluating memory impairment. In recent additions to Wechsler scale interpretation, the WAIS-III General Ability Index (GAI) and the WMS-III Delayed Memory Index (DMI) were developed. The purpose of this investigation is to develop base rate data for GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores using data from the WAIS-III/WMS-III standardization sample (weighted N = 1250). Base rate tables were developed using the predicted-difference method and two simple-difference methods (i.e., stratified and non-stratified). These tables provide valuable data for clinical reference purposes to determine the frequency of GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores in the WAIS-III/WMS-III standardization sample.
Chlenova, Anna A.; Moiseev, Alexey A.; Derevyanko, Mikhail S.; Semirov, Aleksandr V.; Lepalovsky, Vladimir N.
2017-01-01
Permalloy-based thin film structures are excellent materials for sensor applications. Temperature dependencies of the magnetic properties and giant magneto-impedance (GMI) were studied for Fe19Ni81-based multilayered structures obtained by the ion-plasma sputtering technique. The selected temperature interval of 25 °C to 50 °C corresponds to the operating temperature range of many devices, including magnetic biosensors. A (Cu/FeNi)5/Cu/(Cu/FeNi)5 multilayered structure with well-defined transverse magnetic anisotropy showed an increase in the GMI ratio for the total impedance and its real part as temperature increased. The maximum GMI of the total impedance ratio ΔZ/Z = 56% was observed at a frequency of 80 MHz, with a sensitivity of 18%/Oe, and the maximum GMI of the real part ΔR/R = 170% at a frequency of 10 MHz, with a sensitivity of 46%/Oe. As the magnetization and direct-current electrical resistance vary very little with temperature, the most probable mechanism of the unexpected increase of the GMI sensitivity is stress relaxation associated with magnetoelastic anisotropy. PMID:28817084
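For reference, the GMI ratios quoted in this record (ΔZ/Z and ΔR/R) follow the conventional definition used in the magneto-impedance literature; the formula is supplied here for the reader and is not part of the original abstract:

ΔZ/Z (%) = 100 × [Z(H) − Z(H_max)] / Z(H_max)

where Z(H) is the impedance at applied field H and H_max is the maximum applied field, with ΔR/R defined analogously for the real part of the impedance.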
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S. D.; Eggers, T.; Thiabgoh, O.
Understanding the relationship between surface conditions and giant magneto-impedance (GMI) in Co-rich melt-extracted microwires is key to optimizing their magnetic responses for magnetic sensor applications. The surface magnetic domain structure (SMDS) parameters of ~45 μm diameter Co69.25Fe4.25Si13B13.5-xZrx (x = 0, 1, 2, 3) microwires, including the magnetic domain period (d) and surface roughness (Rq) as extracted from magnetic force microscopy (MFM) images, have been correlated with GMI in the range 1–1000 MHz. It was found that substituting 1 at.% Zr for B increased d of the base alloy from 729 to 740 nm while keeping Rq between ~1 nm and ~3 nm. A tremendous impact on the GMI ratio was found, the ratio increasing from ~360% to ~490% at an operating frequency of 40 MHz. Further substitution with Zr decreased the high-frequency GMI ratio, which can be understood by the significant increase in surface roughness evident in the force microscopy images. Lastly, this study demonstrates the application of the domain period and surface roughness found by force microscopy to the interpretation of the GMI in Co-rich microwires.
NASA Astrophysics Data System (ADS)
Yang, Hao; Chen, Lei; Lei, Chong; Zhang, Ju; Li, Ding; Zhou, Zhi-Min; Bao, Chen-Chen; Hu, Heng-Yao; Chen, Xiang; Cui, Feng; Zhang, Shuang-Xi; Zhou, Yong; Cui, Da-Xiang
2010-07-01
Quick and parallel genotyping of human papilloma virus (HPV) types 16/18 is carried out by a specially designed giant magnetoimpedance (GMI)-based microchannel system. A micropatterned soft magnetic ribbon exhibiting a large GMI ratio serves as the biosensor element. The HPV genotype can be determined from the change in GMI ratio in the corresponding detection region after hybridization. The result shows that this system has great potential in future clinical diagnostics and can be easily extended to other biomedical applications based on molecular recognition.
NASA Astrophysics Data System (ADS)
Belal, F.; Ibrahim, F.; Sheribah, Z. A.; Alaa, H.
2018-06-01
In this paper, novel univariate and multivariate regression methods along with a model-updating technique were developed and validated for the simultaneous determination of a quaternary mixture of imatinib (IMB), gemifloxacin (GMI), nalbuphine (NLP) and naproxen (NAP). The univariate method is extended derivative ratio (EDR), which measures every drug in the quaternary mixture by using a ternary mixture of the other three drugs as divisor. Peak amplitudes were measured at 294 nm, 250 nm, 283 nm and 239 nm within linear concentration ranges of 4.0-17.0, 3.0-15.0, 4.0-80.0 and 1.0-6.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively. The multivariate methods adopted are partial least squares (PLS) in original and derivative mode. These models were constructed for simultaneous determination of the studied drugs in the ranges of 4.0-8.0, 3.0-11.0, 10.0-18.0 and 1.0-3.0 μg mL-1 for IMB, GMI, NLP and NAP, respectively, using eighteen mixtures as a calibration set and seven mixtures as a validation set. The root mean square errors of prediction (RMSEP) were 0.09 and 0.06 for IMB, 0.14 and 0.13 for GMI, 0.07 and 0.02 for NLP and 0.64 and 0.27 for NAP by PLS in original and derivative mode, respectively. Both models were successfully applied for analysis of IMB, GMI, NLP and NAP in their dosage forms. Updated PLS in derivative mode and EDR were applied for determination of the studied drugs in spiked human urine. The obtained results were statistically compared with those obtained by the reported methods, leading to the conclusion that there is no significant difference regarding accuracy and precision.
GPM Data Products, Their Availability, and Production Status
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen; Kwiatkowski, John; Ji, Yimin
2014-01-01
On February 28, 2014, Japan Standard Time, the Global Precipitation Measurement (GPM) mission was launched in a picture-perfect launch activity. On March 4, 2014, the GPM Microwave Imager (GMI) was put into science observation mode. The Dual-frequency Precipitation Radar (DPR) was put into science observation mode on March 8, 2014. The Precipitation Processing System (PPS) produced products immediately upon receiving the data. Both regular science products and near-real-time (NRT) products were produced. These were made immediately available to a group of early adopters. In mid-June 2014, GMI level-1 brightness temperature products were made publicly available. In mid-July 2014, GMI and partner-radiometer precipitation retrievals were made public. GMI public availability was several months ahead of the planned release. The DPR products became publicly available on the planned release date of September 2, 2014. Data continue to be available to any user desiring them.
NASA Technical Reports Server (NTRS)
Cane, H. V.; Richardson, I. G.
2003-01-01
The comment of Gopalswamy et al. (hereafter GMY) relates to a letter discussing coronal mass ejections (CMEs), interplanetary ejecta, and geomagnetic storms. GMY contend that Cane et al. incorrectly identified ejecta (interplanetary CMEs) and hypothesize that this is because Cane et al. fail to understand how to separate ejecta from "shock sheaths" when interpreting solar wind and energetic particle data sets. They (GMY) are wrong because the relevant section of the paper was concerned with the propagation time to 1 AU of any potentially geoeffective structures caused by CMEs, i.e., upstream compression regions with or without shocks, or ejecta. In other words, the travel times used by Cane et al. were purposefully and deliberately distinct from ejecta travel times (except for those slow ejecta, approx. 30% of their events, which generated no upstream features), and no error in identification was involved. The confusion of GMY stems from the fact that the description did not characterize the observations sufficiently clearly.
Early Results from the Global Precipitation Measurement (GPM) Mission in Japan
NASA Astrophysics Data System (ADS)
Kachi, Misako; Kubota, Takuji; Masaki, Takeshi; Kaneko, Yuki; Kanemaru, Kaya; Oki, Riko; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2015-04-01
The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The Dual-frequency Precipitation Radar (DPR) was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and installed on the GPM Core Observatory. The GPM Core Observatory uses a non-sun-synchronous orbit to carry on the diurnal-cycle observations of rainfall from the Tropical Rainfall Measuring Mission (TRMM) satellite, and was successfully launched at 3:37 a.m. on February 28, 2014 (JST). The Constellation Satellites, including JAXA's Global Change Observation Mission (GCOM) - Water (GCOM-W1), or "SHIZUKU," are launched by each partner agency around 2014 and contribute to expanding observation coverage and increasing observation frequency. JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map (GPM-GSMaP) algorithm, the latest version of the Global Satellite Mapping of Precipitation (GSMaP), as a national product distributing an hourly, 0.1-degree horizontal resolution rainfall map.
Major improvements in the GPM-GSMaP algorithm are: 1) improvements in the microwave imager algorithm based on the AMSR2 precipitation standard algorithm, including a new land algorithm and a new coast detection scheme; 2) development of an orographic rainfall correction method for warm rainfall in coastal areas (Taniguchi et al., 2012); 3) update of databases, including rainfall detection over land and a land surface emission database; 4) development of a microwave sounder algorithm over land (Kida et al., 2012); and 5) development of a gauge-calibrated GSMaP algorithm (Ushio et al., 2013). In addition to these algorithm improvements, the number of passive microwave imagers and/or sounders used in GPM-GSMaP was increased compared to the previous version. After early calibration and validation of the products and evaluation that all products achieved the release criteria, all GPM standard products and the GPM-GSMaP product have been released to the public since September 2014. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patten, John
Green Manufacturing Initiative (GMI): The initiative provides a conduit between the university and industry to facilitate cooperative research programs of mutual interest to support green (sustainable) goals and efforts. In addition to the operational savings that greener practices can bring, emerging market demands and governmental regulations are making the move to sustainable manufacturing a necessity for success. The funding supports collaborative activities among universities such as the University of Michigan, Michigan State University and Purdue University and among 40 companies to enhance economic and workforce development and provide the potential of technology transfer. WMU participants in the GMI activities included 20 faculty, over 25 students and many staff from across the College of Engineering and Applied Sciences; the College of Arts and Sciences' departments of Chemistry, Physics, Biology and Geology; the College of Business; the Environmental Research Institute; and the Environmental Studies Program. Many outside organizations also contribute to the GMI's success, including Southwest Michigan First; The Right Place of Grand Rapids, MI; Michigan Department of Environmental Quality; the Michigan Department of Energy, Labor and Economic Growth; and the Michigan Manufacturers Technical Center.
Generalized mutual information and Tsirelson's bound
NASA Astrophysics Data System (ADS)
Wakakuwa, Eyuri; Murao, Mio
2014-12-01
We introduce a generalization of the quantum mutual information between a classical system and a quantum system into the mutual information between a classical system and a system described by general probabilistic theories. We apply this generalized mutual information (GMI) to a derivation of Tsirelson's bound from information causality, and prove that Tsirelson's bound can be derived from the chain rule of the GMI. By using the GMI, we formulate the "no-supersignalling condition" (NSS), that the assistance of correlations does not enhance the capability of classical communication. We prove that NSS is never violated in any no-signalling theory.
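For context, the chain rule invoked here generalizes the classical identity for mutual information, stated here for the reader (it is standard and not part of the abstract):

I(A; BC) = I(A; B) + I(A; C|B)

The claim of the paper is that the GMI analogue of this identity, combined with information causality, suffices to derive Tsirelson's bound.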
Méndez-Rebolledo, Guillermo; Gatica-Rojas, Valeska; Torres-Cueco, Rafael; Albornoz-Verdugo, María; Guzmán-Muñoz, Eduardo
2017-01-01
Graded motor imagery (GMI) and mirror therapy (MT) are thought to improve pain in patients with complex regional pain syndrome (CRPS) types 1 and 2. However, the evidence is limited, and analyses have not been independent between types of CRPS. The purpose of this review was to analyze the effects of GMI and MT on pain in independent groups of patients with CRPS types 1 and 2. Searches for literature published between 1990 and 2016 were conducted in databases. Randomized controlled trials that compared GMI or MT with other treatments for CRPS types 1 and 2 were included. Six articles met the inclusion criteria and were classified from moderate to high quality. The total sample was composed of 171 participants with CRPS type 1. Three studies presented GMI with 3 components and three studies used only MT. The studies were heterogeneous in terms of sample size and the disorders that triggered CRPS type 1. There were no trials that included participants with CRPS type 2. GMI and MT can improve pain in patients with CRPS type 1; however, there is not sufficient evidence to recommend these therapies over other treatments given the small size and heterogeneity of the studied population.
ERIC Educational Resources Information Center
White, Charles V.
A description is provided for a Corrosion and Corrosion Control course offered in the Continuing Engineering Education Program at the General Motors Institute (GMI). GMI is a small cooperative engineering school of approximately 2,000 students who alternate between six-week periods of academic study and six weeks of related work experience in…
NASA Astrophysics Data System (ADS)
Panegrossi, Giulia; Casella, Daniele; Sanò, Paolo; Cinzia Marra, Anna; Dietrich, Stefano; Johnson, Benjamin T.; Kulie, Mark S.
2017-04-01
Snowfall is the main component of the global precipitation amount at mid and high latitudes, and improvement of global spaceborne snowfall quantitative estimation is one of the main goals of the Global Precipitation Measurement (GPM) mission. Advancements in snowfall detection and retrieval accuracy at mid-high latitudes are expected from both instruments on board the GPM Core Observatory (GPM-CO): the GMI, the most advanced conical precipitation radiometer with respect to both channel assortment and spatial resolution; and the Dual-frequency Precipitation Radar (DPR) (Ka and Ku band). Moreover, snowfall monitoring is now possible by exploiting the high frequency channels (i.e. >100 GHz) available from most of the microwave radiometers in the GPM constellation providing good temporal coverage at mid-high latitudes (hourly or less). Among these, the Advanced Technology Microwave Sounder (ATMS) onboard Suomi-NPP is the most advanced polar-orbiting cross-track radiometer, with 5 channels in the 183 GHz oxygen absorption band. Finally, CloudSat carries the W-band Cloud Profiling Radar (CPR), which has collected data since its launch in 2006. While CPR was primarily designed as a cloud remote sensing mission, its high-latitude coverage (up to 82° latitude) and high radar sensitivity (about -28 dBZ) make it very suitable for snowfall-related research. In this work a number of global datasets made of coincident observations of snowfall-producing clouds from the spaceborne radars DPR and CPR and from the most advanced radiometers available (GMI and ATMS) have been created and analyzed. We will show the results of a study where CPR is used to: 1) assess snowfall detection and estimation capabilities of DPR; 2) analyze snowfall signatures in the high frequency channels of the passive microwave radiometers in relation to fundamental environmental conditions.
We have estimated that DPR misses a very large fraction of snowfall precipitation (more than 90% of the events and around 70% of the precipitating snowfall mass), mostly because of the sensitivity limits of the DPR and secondarily because of the effect of side-lobe clutter. We will show that improved DPR detection of the snowfall mass (to more than 50%) can be achieved by optimally combining Ku-band and Ka-band measured reflectivity and exploiting the weak signals related to snowfall. ATMS-CPR, GMI-CPR, and GMI-DPR coincident observations have been analyzed in order to study the multichannel brightness temperature signal related to snowfall. The main results of this study show that the high frequency channels (and the 183 GHz band channels in particular) can be successfully used to identify snowfall, but results depend strongly on proper identification of the surface background and proper estimation of the integrated water vapor content. In this context a new algorithm for surface classification, using primarily ATMS (and GMI) low frequency channels and identifying different snow-covered land surfaces and ice or broken ice over ocean, is proposed and will be presented.
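The dual-frequency detection idea described above can be sketched in a few lines. A minimal illustration, assuming hypothetical threshold values (the actual study derives them from CPR matchups, and the real algorithm is more involved):

```python
def snowfall_flag(z_ku, z_ka, ku_min=12.0, weak_margin=6.0, dfr_min=0.0):
    """Flag likely snowfall from Ku- and Ka-band reflectivities (dBZ).

    A strong Ku echo alone is accepted; near the sensitivity limit,
    a weak Ku echo is accepted only when the dual-frequency ratio
    DFR = Z_Ku - Z_Ka also suggests snow. All thresholds here are
    illustrative placeholders, not the values derived in the study.
    """
    dfr = z_ku - z_ka
    strong_echo = z_ku >= ku_min
    weak_echo = (z_ku >= ku_min - weak_margin) and (dfr >= dfr_min)
    return strong_echo or weak_echo
```

A combined screen of this form is one way a Ku/Ka pairing can recover snowfall mass that either band alone would miss.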
NASA Technical Reports Server (NTRS)
Draper, David W.
2015-01-01
In an inertial hold, the spacecraft does not attempt to maintain geodetic pointing but instead holds the same inertial attitude throughout the orbit. As a result, the spacecraft appears to pitch from 0 to 360 degrees around the orbit. Two inertial holds were performed with the GPM spacecraft: 1) May 20, 2014, 16:48:31 UTC-18:21:04 UTC, spacecraft flying forward (+X, 0° yaw), pitching from 55 degrees (FCS) to 415 degrees (FCS) over the orbit; 2) Dec 9, 2014, 01:30:00 UTC-03:02:32 UTC, spacecraft flying backward (-X, 180° yaw), pitching from 0 degrees (FCS) to 360 degrees (FCS) over the orbit. An inertial hold affords a view of the earth through the antenna backlobe, so the antenna spillover correction may be evaluated based on the inertial hold data. The current antenna pattern correction does not correct for spillover in the 166 and 183 GHz channels. The two inertial holds both demonstrate that there is significant spillover in the 166 and 183 GHz channels. By not correcting the spillover, the 166 and 183 GHz channels are biased low by about 1.8 to 3 K. We propose to update the GMI calibration algorithm with the spillover correction presented in this document for 166 GHz and 183 GHz.
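The spillover effect described above amounts to a small fraction of the antenna temperature entering through the backlobe rather than the main beam. A hedged sketch of the correction, with an assumed spillover fraction (the report derives the actual coefficients from the inertial-hold data; these are not the GMI values):

```python
def correct_spillover(tb_measured, tb_backlobe, eta=0.01):
    """Remove backlobe spillover from a measured antenna temperature (K).

    Assuming tb_measured = (1 - eta) * tb_scene + eta * tb_backlobe,
    solving for the scene brightness temperature gives the expression
    below. eta (spillover fraction) is an illustrative placeholder.
    """
    return (tb_measured - eta * tb_backlobe) / (1.0 - eta)
```

With the backlobe viewing cold space (~2.7 K), a ~1% spillover fraction raises a 250 K scene by roughly 2.5 K, the same order as the 1.8 to 3 K low bias reported for the 166 and 183 GHz channels.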
NASA Technical Reports Server (NTRS)
Gray, T. I., Jr.; Mccrary, D. G. (Principal Investigator)
1981-01-01
The NOAA-6 AVHRR data sets acquired over South Texas and Mexico during the spring of 1980 and after Hurricane Allen passed inland are analyzed. These data were processed to produce the Gray-McCrary Index (GMI) for each pixel location over the selected area, which contained rangeland and cropland, both irrigated and nonirrigated. The variations in the GMI appear to reflect well the availability of water for vegetation. The GMI area maps are shown to delineate drought and to aid in defining its duration, suggesting that time changes over a selected area could be useful for irrigation management.
WAIS-III FSIQ and GAI in ability-memory discrepancy analysis.
Glass, Laura A; Bartels, Jared M; Ryan, Joseph J
2009-01-01
The present investigation compares WAIS-III FSIQ-WMS-III with GAI-WMS-III discrepancies in 135 male inpatients with suspected memory impairment. Full Scale IQ and GAI scores were highly correlated, r = .96, with mean values of 92.10 and 93.59, respectively. In additional analyses comparing the ability composites with each WMS-III index (IMI, GMI, and DMI), the GAI consistently produced larger difference scores than did the FSIQ; however, effect sizes were relatively small (ES = .12). Lastly, case-by-case analyses demonstrated concordance rates of 86% for the FSIQ-IMI and GAI-IMI comparisons, 85% for the FSIQ-GMI and GAI-GMI, and 82% for the FSIQ-DMI and GAI-DMI.
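The discrepancy and concordance statistics reported above reduce to simple bookkeeping over paired scores. A minimal sketch with invented sample data (not the study's 135 cases):

```python
def discrepancy_scores(ability, memory):
    """Ability-memory difference score for each case."""
    return [a - m for a, m in zip(ability, memory)]

def concordance_rate(flags_a, flags_b):
    """Proportion of cases where two discrepancy criteria agree
    (e.g. FSIQ-IMI and GAI-IMI both flagged, or both not flagged)."""
    agree = sum(x == y for x, y in zip(flags_a, flags_b))
    return agree / len(flags_a)
```

Case-by-case concordance of this form is what the 86%, 85%, and 82% figures above summarize.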
2012-01-01
Background There is strong evidence to suggest that multiple work-related health problems are preceded by a higher need for recovery. Physical activity and relaxation are helpful in decreasing the need for recovery. This article aims to describe (1) the development and (2) the design of the evaluation of a daily physical activity and relaxation intervention to reduce the need for recovery in office employees. Methods/Design The study population will consist of employees of a Dutch financial service provider. The intervention was systematically developed, based on parts of the Intervention Mapping (IM) protocol. Assessment of employees' needs was done by combining results of face-to-face interviews, a questionnaire and focus group interviews. A set of theoretical methods and practical strategies were selected, which resulted in an intervention program consisting of Group Motivational Interviewing (GMI) supported by a social media platform, and environmental modifications. The Be Active & Relax program will be evaluated in a modified 2 × 2 factorial design. The environmental modifications will be pre-stratified and GMI will be randomised at the department level. The program will be evaluated using 4 arms: (1) GMI and environmental modifications; (2) environmental modifications; (3) GMI; (4) no intervention (control group). Questionnaire data on the primary outcome (need for recovery) and secondary outcomes (daily physical activity, sedentary behaviour, relaxation/detachment, work- and health-related factors) will be gathered at baseline (T0), at 6 months (T1), and at 12 months (T2) follow-up. In addition, an economic and a process evaluation will be performed. Discussion Reducing the need for recovery is hypothesized to be beneficial for employees, employers and society. It is assumed that there will be a reduction in need for recovery after 6 months and 12 months in the intervention group, compared to the control group. Results are expected in 2013.
Trial registration Netherlands Trial Register (NTR): NTR2553 PMID:22852835
Assimilation of Precipitation Measurement Missions Microwave Radiance Observations With GEOS-5
NASA Technical Reports Server (NTRS)
Jin, Jianjun; Kim, Min-Jeong; McCarty, Will; Akella, Santha; Gu, Wei
2015-01-01
The Global Precipitation Measurement (GPM) Core Observatory satellite was launched in February 2014. The GPM Microwave Imager (GMI) is a conically scanning radiometer measuring 13 channels ranging from 10 to 183 GHz and sampling between 65°S and 65°N. This instrument is a successor to the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), which has observed 9 channels at frequencies from 10 to 85 GHz between 40°S and 40°N since 1997. This presentation outlines the baseline procedures developed to assimilate GMI and TMI radiances in clear-sky conditions, including quality control methods, thinning decisions, and the estimation of observation errors. This presentation also shows the impact of these observations when they are incorporated into the GEOS-5 atmospheric data assimilation system.
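The clear-sky screening and thinning steps mentioned above typically reduce to a per-observation filter followed by subsampling. A hedged sketch with made-up field names and thresholds (not the GEOS-5 operational values):

```python
def clear_sky_qc(obs, max_departure=3.0, max_lwp=0.05):
    """Keep radiance observations passing a simple clear-sky check.

    obs is a list of dicts with keys 'tb' (observed brightness
    temperature, K), 'tb_sim' (model-simulated equivalent, K), and
    'lwp' (cloud liquid water path, kg/m^2). Both thresholds are
    illustrative placeholders.
    """
    kept = []
    for o in obs:
        if o['lwp'] > max_lwp:                          # cloudy scene
            continue
        if abs(o['tb'] - o['tb_sim']) > max_departure:  # gross-error check
            continue
        kept.append(o)
    return kept

def thin(obs, step=3):
    """Crude thinning: keep every step-th observation to reduce
    spatially correlated error (real systems thin on a grid)."""
    return obs[::step]
```

Observation errors are then estimated from the departure statistics of the surviving sample.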
40 CFR 86.1866-12 - CO2 fleet average credit programs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... technologies designed to reduce air conditioning refrigerant leakage over the useful life of their passenger... implementing specific air conditioning system technologies designed to reduce air conditioning-related CO2... than 10% when compared to previous industry standard designs): 1.1 g/mi. (viii) Oil separator: 0.6 g/mi...
40 CFR 86.708-98 - In-use emission standards for 1998 and later model year light-duty vehicles.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Standards (g/mi) for Light-Duty Vehicles Fuel THC NMHC THCE NMHCE CO NOX PM Gasoline 0.41 0.25 3.4 0.4 0.08... H98-2—Full Useful Life 1 Standards (g/mi) for Light-Duty Vehicles Fuel THC NMHC THCE NMHCE CO NOX PM...
Hail detection algorithm for the Global Precipitation Measuring mission core satellite sensors
NASA Astrophysics Data System (ADS)
Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic
2017-04-01
By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting the hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku reflectivity in the mixed-phase layer performs best of all considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the Polarization Corrected Temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination involving the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
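The Critical Success Index used to rank the proxies above, and the single-channel 18.7 GHz threshold test, are both one-liners. A sketch using the threshold quoted in the abstract; the contingency counts below are invented for illustration:

```python
def critical_success_index(hits, misses, false_alarms):
    """CSI = hits / (hits + misses + false alarms), in [0, 1]."""
    total = hits + misses + false_alarms
    return hits / total if total else 0.0

def hail_flag_pct18(pct_k, threshold_k=261.0):
    """Single-channel proxy: flag hail when the 18.7 GHz polarization
    corrected temperature falls below the 261 K threshold quoted in
    the abstract (cold TBs indicate strong scattering by large ice)."""
    return pct_k < threshold_k
```

Each candidate proxy is scored this way against the ground-radar hydrometeor classification taken as truth.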
Intersatellite Calibration of Microwave Radiometers for GPM
NASA Astrophysics Data System (ADS)
Wilheit, T. T.
2010-12-01
The aim of the GPM mission is to measure precipitation globally with high temporal resolution by using a constellation of satellites logically united by the GPM Core Satellite, which will be in a non-sun-synchronous, medium-inclination orbit. The usefulness of the combined product depends on the consistency of precipitation retrievals from the various microwave radiometers. The calibration requirements for this consistency are quite daunting, requiring a multi-layered approach. The radiometers can vary considerably in their frequencies, view angles, polarizations and spatial resolutions depending on their primary application and other constraints. The planned parametric algorithms will correct for the varying viewing parameters, but they are still vulnerable to calibration errors, both relative and absolute. The GPM Intersatellite Calibration Working Group (aka X-CAL) will adjust the calibration of all the radiometers to a common consensus standard for the GPM Level 1C product to be used in precipitation retrievals. Finally, each Precipitation Algorithm Working Group must have its own strategy for removing the residual errors. If the final adjustments are small, the credibility of the precipitation retrievals will be enhanced. Before intercomparing, the radiometers must be self-consistent on a scan-wise and orbit-wise basis. Pre-screening for this consistency constitutes the first step in the intercomparison. The radiometers are then compared pair-wise with the microwave radiometer (GMI) on the GPM Core Satellite. Two distinct approaches are used for the sake of cross-checking the results. On the one hand, nearly simultaneous observations are collected at the cross-over points of the orbits, and the observations of one instrument are converted to virtual observations of the other using a radiative transfer model to permit comparisons. The complementary approach collects histograms of brightness temperature from each instrument.
In each case a model is needed to translate the observations from one set of viewing parameters to those of the GMI. For the conically scanning window channel radiometers, the models are reasonably complete. Currently we have compared TMI with Windsat and arrived at a preliminary consensus calibration based on the pair. This consensus calibration standard has been applied to TMI and is currently being compared with AMSR-E on the Aqua satellite. In this way we are implementing a rolling wave spin-up of X-CAL. In this sense, the launch of GPM core will simply provide one more radiometer to the constellation; one hopes it will be the best calibrated. Water vapor and temperature sounders will use a different scenario. Some of the precipitation retrieval algorithms will use sounding channels. The GMI will include typical water vapor sounding channels. The radiances are ingested directly via 3DVAR and 4DVAR techniques into forecast models by many operational weather forecast agencies. The residuals and calibration adjustments of this process will provide a measure of the relative calibration errors throughout the constellation. The use of the ARM Southern Great Plains site as a benchmark for calibrating the more opaque channels is also being investigated.
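The crossover-matchup comparison described above ultimately yields a bias estimate that is removed from each radiometer. A minimal sketch, assuming the reference observations have already been translated to the target's viewing parameters (the radiative-transfer translation step is not shown):

```python
def crossover_bias(tb_target, tb_reference):
    """Mean brightness-temperature difference over matchup pairs (K),
    target minus reference, at orbit crossover points."""
    diffs = [t - r for t, r in zip(tb_target, tb_reference)]
    return sum(diffs) / len(diffs)

def adjust_to_consensus(tb_series, bias):
    """Shift a radiometer's TBs toward the consensus standard."""
    return [t - bias for t in tb_series]
```

The histogram-matching approach estimates an analogous offset from the brightness-temperature distributions instead of paired matchups, providing the cross-check mentioned above.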
Braden, B; Braden, C P; Klutz, M; Lembcke, B
1993-04-01
Breath hydrogen (H2) analysis, as used in gastroenterologic function tests, requires a stationary analysis system equipped with a gas chromatograph or an electrochemical sensor cell. Now a portable breath H2 analyzer has been miniaturized to pocket size (104 mm x 62 mm x 29 mm). The application of this device in clinical practice has been assessed in comparison to the standard GMI-exhaled monitor. The pocket analyzer showed a linear response to standards with H2 concentrations ranging from 0-100 ppm (n = 7), which was not different from the GMI apparatus. The correlation of the two methods during clinical application (lactose tolerance tests, mouth-to-cecum transit time determined with lactulose) was excellent (Y = 1.08 X + 0.96; r = 0.959). Using the new device, both the analysis time (3 s vs. 90 s) and the reset time (43 s vs. 140 s) were shorter, whereas calibration was more feasible with the GMI apparatus. It is concluded that the considerably cheaper pocket-sized breath H2 analyzer is as precise and sensitive as the GMI-exhaled monitor, and thus presents a valid alternative for H2 breath tests.
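The device comparison above reduces to an ordinary least-squares fit with a Pearson correlation, the same statistics behind the reported Y = 1.08 X + 0.96, r = 0.959. A self-contained sketch (the sample points below are invented, chosen to lie exactly on a line):

```python
from math import sqrt

def linear_fit(x, y):
    """Least-squares fit y = a*x + b, plus the Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx            # slope
    b = my - a * mx          # intercept
    r = sxy / sqrt(sxx * syy)
    return a, b, r
```

Fitting paired readings from the two analyzers in this way yields the slope, intercept, and r used to judge their agreement.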
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Exhaust emission and fuel economy tests (1975 Federal Test Procedure) were performed on a 1972 Plymouth Cricket equipped with a turbocharged four-cylinder stratified charge engine (Texaco Controlled Combustion System) and an exhaust catalyst. The tests were conducted for three different fuels: unleaded gasoline, number 2 diesel fuel, and a wide boiling range distillate fuel supplied by Texaco. Average hydrocarbon, carbon monoxide, and nitrogen oxide emissions (without throttling) obtained with diesel fuel were 0.89, 1.88, and 1.91 g/mi, respectively. Hydrocarbon, carbon monoxide and nitrogen oxide levels of 0.88, 0.97, and 1.61 g/mi, respectively, were obtained with the wide boiling range fuel; and emission levels of 1.37, 0.50, and 1.84 g/mi, respectively, were obtained with the unleaded gasoline. Average fuel economies for the diesel fuel, wide boiling range fuel, and unleaded gasoline were 30.8, 29.7, and 28.4 mi/gal., respectively. Thus, the turbocharged catalyst-equipped stratified charge engine demonstrated the ability to meet 1975 interim levels on three different fuels with high fuel economy. Compliance with the 1977 hydrocarbon standard of 0.41 g/mi will require additional control devices or basic combustion improvement.
GMI-1070, a novel pan-selectin antagonist, reverses acute vascular occlusions in sickle cell mice
Chang, Jungshan; Patton, John T.; Sarkar, Arun; Ernst, Beat
2010-01-01
Leukocyte adhesion in the microvasculature influences blood rheology and plays a key role in vaso-occlusive manifestations of sickle cell disease. Notably, polymorphonuclear neutrophils (PMNs) can capture circulating sickle red blood cells (sRBCs) in inflamed venules, leading to critical reduction in blood flow and vaso-occlusion. Recent studies have suggested that E-selectin expression by endothelial cells plays a key role by sending activating signals that lead to the activation of Mac-1 at the leading edge of PMNs, thereby allowing RBC capture. Thus, the inhibition of E-selectin may represent a valuable target in this disease. Here, we have tested the biologic properties of a novel synthetic pan-selectin inhibitor, GMI-1070, with in vitro assays and in a humanized model of sickle cell vaso-occlusion analyzed by intravital microscopy. We have found that GMI-1070 predominantly inhibited E-selectin–mediated adhesion and dramatically inhibited sRBC-leukocyte interactions, leading to improved microcirculatory blood flow and improved survival. These results suggest that GMI-1070 may represent a valuable novel therapeutic intervention for acute sickle cell crises that should be further evaluated in a clinical trial. PMID:20508165
Giant magnetic impedance of wires with a thin magnetic coating
NASA Astrophysics Data System (ADS)
Kurlyandskaya, G. V.; Bebenin, N. G.; Vas'kovsky, V. O.
2011-02-01
In this review, we analyze and generalize the results of experimental investigations of the physical processes that occur in composite wires with a thin magnetic coating under conditions in which the giant magnetoimpedance (GMI) effect appears. Principles of high-frequency impedance measurement are briefly described; basic definitions are given, and the differences between the linear and nonlinear GMI regimes are described. Data are systematized on the giant magnetic impedance of wires with a thin magnetic coating (composite materials) under conditions of strong nonlinearity of the GMI effect, which is accompanied by the appearance of higher harmonics in the output signal. The extremely high susceptibility of the harmonic parameters to external actions can be exploited in technical applications for creating ultrasensitive detectors of low magnetic fields. Special attention is paid to model calculations, which confirm that the experimentally observed features of the nonlinear GMI effect are connected with the high sensitivity of the magnetic system to a circular magnetic field near spin-reorientation phase transitions. Fine features of the effective magnetic anisotropy can play a key role and therefore cannot be ignored in the general case.
NASA Astrophysics Data System (ADS)
Jiang, S. D.; Eggers, T.; Thiabgoh, O.; Xing, D. W.; Fang, W. B.; Sun, J. F.; Srikanth, H.; Phan, M. H.
2018-02-01
Two soft ferromagnetic Co68.25Fe4.25Si12.25B15.25 microwires with the same diameter of 50 ± 1 μm but different fabrication processes were placed in series and in parallel circuit configurations to investigate their giant magneto-impedance (GMI) responses in a frequency range of 1-100 MHz for low-field sensing applications. We show that, while the low-field GMI response is significantly reduced in the parallel configuration, it is greatly enhanced in the series connection. These results suggest that a highly sensitive GMI sensor can be designed by arranging multi-wires in a saw-shaped fashion to optimize the sensing area, and soldered together in series connection to maintain the excellent magnetic field sensitivity.
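The GMI response itself is conventionally quoted as a field-dependent impedance ratio, and the series/parallel comparison above follows ordinary circuit rules if mutual coupling between the wires is neglected. A hedged sketch under that simplifying (coupling-free, real-valued impedance) assumption:

```python
def gmi_ratio_percent(z_at_field, z_at_max_field):
    """GMI ratio in percent: 100 * (|Z(H)| - |Z(Hmax)|) / |Z(Hmax)|,
    with impedance magnitudes taken at an applied field H and at the
    saturating reference field Hmax."""
    return 100.0 * (z_at_field - z_at_max_field) / z_at_max_field

def series_impedance(z1, z2):
    """Two wires in series: impedance magnitudes simply add
    (a simplification that ignores mutual inductive coupling)."""
    return z1 + z2

def parallel_impedance(z1, z2):
    """Two wires in parallel: the usual reciprocal-sum rule."""
    return z1 * z2 / (z1 + z2)
```

Under this idealization the series connection preserves each wire's field-dependent contribution, consistent with the enhanced low-field response reported above for the series configuration.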
Rain Rate and DSD Retrievals at Kwajalein Atoll
NASA Astrophysics Data System (ADS)
Wolff, David; Marks, David; Tokay, Ali
2010-05-01
The dual-polarization weather radar on Kwajalein Atoll in the Republic of the Marshall Islands (KPOL) is one of the only full-time (24/7) operational S-band dual-polarimetric (DP) radars in the tropics. Using the DP data from KPOL, as well as data from a Joss-Waldvogel disdrometer on Kwajalein Island, algorithms for quality control, as well as calibration of reflectivity and differential reflectivity, have been developed and adapted for application in a near-real-time operational environment. Observations during light rain and drizzle show that KPOL measurements (since 2006) meet or exceed quality thresholds for these applications (as determined by consensus of the radar community). While the methodology for development of such applications is well documented, tuning of specific algorithms to a particular regime and observed raindrop size distributions requires a comprehensive testing and adjustment period to ensure high quality products. Upon application of these data quality techniques to the KPOL data, we have tested and compared several different rain retrieval algorithms. These include conventional Z-R and DP hybrid techniques, as well as the polarimetrically tuned Z-R described by Bringi et al. 2004. One of the major benefits of the polarimetrically tuned Z-R technique is its ability to use the DP observations to retrieve key parameters of the drop size distribution (DSD), such as the median drop diameter, and the intercept and shape parameter of the assumed gamma DSD. We will show several such retrievals for different rain systems, as well as their distribution with height below the melting layer. From a physical validation perspective, such DSD parameter retrievals provide an important means to cross-validate microphysical parameterizations in GPM Dual-frequency Precipitation Radar (DPR) and GPM Microwave Imager (GMI) retrieval algorithms.
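The conventional Z-R step mentioned above inverts a power law Z = a R^b after converting reflectivity from dBZ to linear units; the polarimetrically tuned variant then adjusts a and b using the DP-retrieved DSD parameters. A sketch with illustrative coefficients (not those fitted for Kwajalein):

```python
def rain_rate_from_z(z_dbz, a=300.0, b=1.4):
    """Rain rate R (mm/h) from radar reflectivity via Z = a * R**b.

    z_dbz is reflectivity in dBZ; Z is in linear units (mm^6 m^-3).
    The coefficients a and b here are illustrative placeholder
    values, which the tuned technique would adjust per rain regime.
    """
    z_linear = 10.0 ** (z_dbz / 10.0)   # dBZ -> linear Z
    return (z_linear / a) ** (1.0 / b)
```

Because the retrieval is this sensitive to a and b, tuning them with DP-derived DSD parameters is what gives the polarimetrically tuned Z-R its advantage.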
Dilek, Burcu; Ayhan, Cigdem; Yagci, Gozde; Yakut, Yavuz
Single-blinded randomized controlled trial. Pain management is essential in the early stages of the rehabilitation of distal radius fractures (DRFx). Pain intensity at the acute stage is considered important for determining the individual recovery process, given that higher pain intensity and persistent pain duration negatively affect the function and cortical activity of pain response. Graded motor imagery (GMI) and its components are recent pain management strategies, established on a neuroscience basis. To investigate the effectiveness of GMI in hand function in patients with DRFx. Thirty-six participants were randomly allocated to either GMI (n = 17; 52.59 [9.8] years) or control (n = 19; 47.16 [10.5] years) groups. The GMI group received imagery treatment in addition to traditional rehabilitation, and the control group received traditional rehabilitation for 8 weeks. The assessments included pain at rest and during activity using the visual analog scale, wrist and forearm active range of motion (ROM) with universal goniometer, grip strength with the hydraulic dynamometer (Jamar; Bolingbrook, IL), and upper extremity functional status using the Disability of the Arm, Shoulder and Hand Questionnaire, and the Michigan Hand Questionnaire. Assessments were performed twice at baseline and at the end of the eighth week. The GMI group showed greater improvement in pain intensity (during rest, 2.24; activity, 6.18 points), wrist ROM (flexion, -40.59; extension, -45.59; radial deviation, -25.59; and ulnar deviation, -26.77 points) and forearm ROM (supination, -43.82 points), and functional status (Disability of the Arm, Shoulder and Hand Questionnaire, 38.00; Michigan Hand Questionnaire, -32.53 points) when compared with the control group (for all, P < .05). The cortical model of pathological pain suggests new strategies established on a neuroscience basis. These strategies aim to normalize the cortical proprioceptive representation and reduce pain. 
One of these recent strategies, GMI appears to provide beneficial effects to control pain, improve grip strength, and increase upper extremity functions in patients with DRFx. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Lin, Ying-Hui; Tang, Pei-Fang; Wang, Yao-Hung; Eng, Janice J; Lin, Keh-Chung; Lu, Lu; Jeng, Jiann-Shing; Chen, Shih-Ching
2014-10-01
The purpose of this study was to investigate the ways in which stroke-induced posterior parietal cortex (PPC) lesions affect reactive postural responses and whether providing auditory cues modulates these responses. Seventeen hemiparetic patients after stroke, nine with PPC lesions (PPCLesion) and eight with intact PPCs (PPCSpared), and nine age-matched healthy adults completed a lateral-pull perturbation experiment under noncued and cued conditions. The activation rates of the gluteus medius muscle ipsilateral (GMi) and contralateral to the pull direction, the rates of occurrence of three types of GM activation patterns, and the GMi contraction latency were investigated. In noncued pulls toward the paretic side, of the three groups, the PPCLesion group exhibited the lowest activation rate (56%) of the GMi (P < 0.05), which is the primary postural muscle involved in this task, and the highest rate of occurrence (33%) of the contralateral-activation-only pattern of the gluteus medius muscle (P < 0.05), which is a compensatory activation pattern. In contrast, in cued pulls toward the paretic side, the PPCLesion group was able to increase the activation rate of the GMi to a level (81%) at which there were no longer significant differences in GMi activation rate among the three groups (P > 0.05). However, there were no significant differences in the GM activation patterns and GMi contraction latency between the noncued and cued conditions for the PPCLesion group (P > 0.05). The PPCLesion patients had greater deficits in recruiting paretic muscles and were more likely to use the compensatory muscle activation pattern for postural reactions than the PPCSpared patients, suggesting that the PPC is part of the neural circuitry involved in reactive postural control in response to lateral perturbations. The auditory cueing used in this study, however, did not significantly modify the muscle activation patterns in the PPCLesion patients.
More research is needed to explore the type and structure of cueing that could effectively improve patterns and speed of postural responses in these patients.
GPM and TRMM Radar Vertical Profiles and Impact on Large-scale Variations of Surface Rain
NASA Astrophysics Data System (ADS)
Wang, J. J.; Adler, R. F.
2017-12-01
Previous studies by the authors using Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) data have shown that TRMM Precipitation Radar (PR) and GPM Dual-Frequency Precipitation Radar (DPR) surface rain estimates do not have the same amplitudes of inter-annual variation over the tropical oceans as do passive microwave observations by the TRMM Microwave Imager (TMI) and GPM Microwave Imager (GMI). This includes differences in surface temperature-rainfall variations. We re-investigate these relations with the new GPM Version 5 data, with an emphasis on understanding these differences with respect to the DPR vertical profiles of reflectivity and rainfall and the associated convective and stratiform proportions. For the inter-annual variation of ocean rainfall from both passive microwave (TMI and GMI) and active microwave (PR and DPR) estimates, it is found that for stratiform rainfall both TMI-PR and GMI-DPR show very good correlation. However, the correlation of GMI-DPR is much higher than that of TMI-PR for convective rainfall. The analysis of the vertical profiles of PR and DPR rainfall during the TRMM and GPM overlap period (March-August 2014) reveals that PR and DPR have about the same rain rate at 4 km and above, but the PR rain rate is more than 10% lower than that of DPR at the surface. In other words, convective rainfall seems to be better defined by DPR near the surface. However, even though the DPR results agree better with the passive microwave results, there still is a significant difference, which may be a result of DPR retrieval error or of inherent passive/active retrieval differences. Monthly and instantaneous GMI and DPR data need to be analyzed in detail to better understand the differences.
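The near-surface comparison above boils down to layer-by-layer percent differences between the PR and DPR rain-rate profiles. A minimal sketch with invented profile values:

```python
def percent_difference_profile(pr_profile, dpr_profile):
    """Per-layer percent difference, 100 * (PR - DPR) / DPR, for two
    rain-rate profiles (mm/h) given from the surface upward."""
    return [100.0 * (p - d) / d
            for p, d in zip(pr_profile, dpr_profile)]
```

A surface-layer value near -10% with values near zero at 4 km and above would mirror the PR-versus-DPR behavior described in the abstract.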
NASA Technical Reports Server (NTRS)
Olsen, Mark A.; Douglass, Anne R.; Newman, Paul A.; Gille, John C.; Nardi, Bruno; Yudin, Valery A.; Kinnison, Douglas E.; Khosravi, Rashid
2008-01-01
On 26 January 2006, the High Resolution Dynamic Limb Sounder (HIRDLS) observed low mixing ratios of ozone and nitric acid in an approximately 2 km vertical layer near 100 hPa extending from the subtropics to 55 degrees N over North America. The subsequent evolution of the layer is simulated with the Global Modeling Initiative (GMI) model and substantiated with HIRDLS observations. Air with low mixing ratios of ozone is transported poleward to 80 degrees N. Although there is evidence of mixing with extratropical air and diabatic descent, much of the tropical intrusion returns to the subtropics. This study demonstrates that HIRDLS and the GMI model are capable of resolving thin intrusion events. The observations combined with simulation are a first step towards development of a quantitative understanding of the lower stratospheric ozone budget.
Design, Development and Testing of the GMI Launch Locks
NASA Technical Reports Server (NTRS)
Sexton, Adam; Dayton, Chris; Wendland, Ron; Pellicciotti, Joseph
2011-01-01
Ball Aerospace will deliver the GPM Microwave Imager (GMI) to NASA as one of the three instruments to fly on the Global Precipitation Measurement (GPM) mission, for launch in 2013. The radiometer, which when deployed is over 8 feet tall and rotates at 32 revolutions per minute (RPM), can be described as a collection of mechanisms working to achieve its scientific objectives. This collection precisely positions a 1.2 meter reflector at a 48.5 degree off-nadir angle while rotating, transfers electrical power and signals to and from the RF receivers, provides two very stable calibration sources, and maintains the structural integrity of all the components. There are a total of 7 launch restraints coupling across the moving and stationary elements of the structure. Getting from design to integration will be the focus of this paper.
NASA Technical Reports Server (NTRS)
Draper, David W.; Newell, David A.; Wentz, Frank J.; Krimchansky, Sergey; Jackson, Gail
2015-01-01
The Global Precipitation Measurement (GPM) mission is an international satellite mission that uses measurements from an advanced radar/radiometer system on a core observatory as reference standards to unify and advance precipitation estimates made by a constellation of research and operational microwave sensors. The GPM core observatory was launched on February 27, 2014 at 18:37 UT into a 65° inclination non-sun-synchronous orbit. GPM focuses on precipitation as a key component of the Earth's water and energy cycle, and has the capability to provide near-real-time observations for tracking severe weather events, monitoring freshwater resources, and other societal applications. The GPM Microwave Imager (GMI) on the core observatory provides the direct link to the constellation radiometer sensors, which fly mainly in polar orbits. The GMI sensitivity, accuracy, and stability play a crucial role in unifying the measurements from the GPM constellation of satellites. The instrument has exhibited highly stable operations through the duration of the calibration/validation period. This paper provides an overview of the GMI instrument and a report of early on-orbit commissioning activities. It discusses the on-orbit radiometric sensitivity, absolute calibration accuracy, and stability for each radiometric channel. Index Terms: Calibration accuracy, passive microwave remote sensing, radiometric sensitivity.
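The on-orbit calibration accuracy discussed above rests on the standard two-point scheme: each scan views a cold-sky reflector and a warm calibration load, fixing the gain and offset of the counts-to-temperature line. A hedged sketch with placeholder load temperatures (not the GMI flight values):

```python
def two_point_calibration(counts, counts_cold, counts_hot,
                          t_cold=2.73, t_hot=300.0):
    """Linear two-point radiometric calibration: map raw detector
    counts to brightness temperature (K) using the cold-sky and
    warm-load views. t_cold and t_hot are illustrative placeholders."""
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)
    return t_cold + gain * (counts - counts_cold)
```

Radiometric sensitivity (NEDT) and calibration stability are then assessed from the scatter and drift of these per-scan gain and offset estimates.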
Calibration Plans for the Global Precipitation Measurement (GPM)
NASA Technical Reports Server (NTRS)
Bidwell, S. W.; Flaming, G. M.; Adams, W. J.; Everett, D. F.; Mendelsohn, C. R.; Smith, E. A.; Turk, J.
2002-01-01
The Global Precipitation Measurement (GPM) is an international effort led by the National Aeronautics and Space Administration (NASA) of the U.S.A. and the National Space Development Agency of Japan (NASDA) for the purpose of improving research into the global water and energy cycle. GPM will improve climate, weather, and hydrological forecasts through more frequent and more accurate measurement of precipitation world-wide. Comprising U.S. domestic and international partners, GPM will incorporate and assimilate data streams from many spacecraft with varied orbital characteristics and instrument capabilities. Two of the satellites will be provided directly by GPM, the core satellite and a constellation member. The core satellite, at the heart of GPM, is scheduled for launch in November 2007. The core will carry a conical scanning microwave radiometer, the GPM Microwave Imager (GMI), and a two-frequency cross-track-scanning radar, the Dual-frequency Precipitation Radar (DPR). The passive microwave channels and the two radar frequencies of the core are carefully chosen for investigating the varying character of precipitation over ocean and land, and from the tropics to the high latitudes. The DPR will enable microphysical characterization and three-dimensional profiling of precipitation. The GPM-provided constellation spacecraft will carry a GMI radiometer identical to that on the core spacecraft. This paper presents calibration plans for the GPM, including on-board instrument calibration, external calibration methods, and the role of ground validation. Particular emphasis is on plans for inter-satellite calibration of the GPM constellation. With its unique instrument capabilities, the core spacecraft will serve as a calibration transfer standard to the GPM constellation. In particular, the Dual-frequency Precipitation Radar aboard the core will check the accuracy of retrievals from the GMI radiometer and will enable improvement of the radiometer retrievals.
Observational intersections of the core with the constellation spacecraft are essential in applying this technique to the member satellites. Information from core spacecraft retrievals during intersection events will be transferred to the constellation radiometer instruments in the form of improved calibration and, with experience, improved radiometric algorithms. In preparation for the transfer standard technique, comparisons using the Tropical Rainfall Measuring Mission (TRMM) with sun-synchronous radiometers have been conducted. Ongoing research involves study of critical variables in the inter-comparison, such as correlation with spatial-temporal separation of intersection events, frequency of intersection events, variable azimuth look angles, and variable resolution cells for the various sensors.
Magnetic nanoparticle detection method employing non-linear magnetoimpedance effects
NASA Astrophysics Data System (ADS)
Beato-López, J. J.; Pérez-Landazábal, J. I.; Gómez-Polo, C.
2017-04-01
In this work, a sensitive tool to detect magnetic nanoparticles (Fe3O4) based on a non-linear giant magnetoimpedance (GMI) effect is presented. The GMI sensor, designed with four nearly zero-magnetostrictive ribbons connected in series, was analysed as a function of a constant external magnetic field and exciting frequency. The influence of the magnetic nanoparticles deposited on the ribbon surface was characterized using the first (fundamental) and second (non-linear) harmonics of the magnetoinductive voltage. The results show a clear enhancement of the sensor response in the high magnetic field region (H = 1.5 kA/m) as a consequence of the stray field generated by the magnetic nanoparticles on the GMI ribbons' surface. The highest sensitivity ratios are obtained for the non-linear component in comparison with the fundamental response. The results open a new research strategy in magnetic nanoparticle detection.
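The field dependence of a GMI sensor is conventionally quantified by the GMI ratio, ΔZ/Z(%) = 100 × [Z(H) − Z(Hmax)]/Z(Hmax), where Hmax is the maximum applied field. A minimal sketch of that standard definition (the sample field and impedance values are illustrative, not from this paper):

```python
# Hedged sketch: compute the conventional GMI ratio curve from
# impedance measurements Z(H). Sample values are illustrative only.

def gmi_ratio(z, z_max_field):
    """GMI ratio in percent: 100 * (Z(H) - Z(Hmax)) / Z(Hmax)."""
    return 100.0 * (z - z_max_field) / z_max_field

# Illustrative impedance magnitudes (ohm) at applied fields (A/m)
fields = [0, 500, 1000, 1500, 2000]    # H, with Hmax = 2000 A/m
impedance = [5.2, 4.1, 3.0, 2.4, 2.0]  # |Z(H)|, so Z(Hmax) = 2.0 ohm

ratios = [gmi_ratio(z, impedance[-1]) for z in impedance]
peak = max(ratios)  # largest ratio over the measured field range
```

The peak ratio and the slope of the curve near its steepest point are the usual figures of merit (maximum GMI ratio and field sensitivity) reported in GMI studies such as this one.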
Walsh, Thomas J.; Shoham, Shmuel; Petraitiene, Ruta; Sein, Tin; Schaufele, Robert; Kelaher, Amy; Murray, Heidi; Mya-San, Christine; Bacher, John; Petraitis, Vidmantas
2004-01-01
Recent case reports describe patients receiving piperacillin-tazobactam who were found to have circulating galactomannan detected by the double sandwich enzyme-linked immunosorbent assay (ELISA) system, leading to the false presumption of invasive aspergillosis. Since this property of piperacillin-tazobactam and galactomannan ELISA is not well understood, we investigated the in vitro, in vivo, and clinical properties of this interaction. Among the 12 reconstituted antibiotics representing four classes of antibacterial compounds that are commonly used in immunocompromised patients, piperacillin-tazobactam expressed a distinctively high level of galactomannan antigen in vitro (P = 0.001). After intravenous infusion of piperacillin-tazobactam into rabbits, the serum galactomannan index (GMI) in vivo rose significantly (P = 0.0007) from a preinfusion mean baseline value of 0.27 to a mean GMI of 0.83 by 30 min, then slowly declined to a mean GMI of 0.44 24 h later. Repeated administration of piperacillin-tazobactam over 7 days resulted in accumulation of circulating galactomannan to a mean peak GMI of 1.31 and a nadir of 0.53. Further studies revealed that the antigen reached a steady state by the third day of administration of piperacillin-tazobactam. Twenty-six hospitalized patients with no evidence of invasive aspergillosis who were receiving antibiotics and ten healthy blood bank donors were studied for expression of circulating galactomannan. Patients (n = 13) receiving piperacillin-tazobactam had significantly greater mean serum GMI values (0.74 ± 0.14) compared to patients (n = 13) receiving other antibiotics (0.14 ± 0.08) and compared to healthy blood bank donors (0.14 ± 0.06) (P < 0.001). Five (38.5%) of thirteen patients receiving piperacillin-tazobactam had serum GMI values > 0.5 compared to none of thirteen subjects receiving other antibiotics (P = 0.039) and to none of ten healthy blood bank donors (P = 0.046).
These data demonstrate that among antibiotics that are commonly used in immunocompromised patients, only piperacillin-tazobactam contains significant amounts of galactomannan antigen in vitro, that in animals receiving piperacillin-tazobactam circulating galactomannan antigen accumulates in vivo to significantly increased and sustained levels, and that some but not all patients receiving this antibiotic will demonstrate circulating galactomannan above the threshold considered positive for invasive aspergillosis by the recently licensed double sandwich ELISA. PMID:15472335
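The positivity cutoff used in the study (serum GMI > 0.5) amounts to a simple threshold screen over serial measurements; a minimal sketch, where the serial values are illustrative (drawn loosely from the means reported above, not an actual patient record):

```python
# Hedged sketch: flag serum samples above the galactomannan index (GMI)
# positivity cutoff of 0.5 used in the study. Values are illustrative.

CUTOFF = 0.5

def flag_positive(gmi_values, cutoff=CUTOFF):
    """Return indices of samples exceeding the GMI cutoff."""
    return [i for i, v in enumerate(gmi_values) if v > cutoff]

# Illustrative serial GMI values during piperacillin-tazobactam therapy
serial_gmi = [0.27, 0.83, 0.61, 0.44, 1.31, 0.53]
positives = flag_positive(serial_gmi)
```

Any sample flagged here would, under the double sandwich ELISA convention, be read as positive for invasive aspergillosis, which is exactly the false-positive mechanism the study documents.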
40 CFR 86.097-9 - Emission standards for 1997 and later model year light-duty trucks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Standards (g/mi) for Light Light-Duty Trucks Fuel LVW (lbs) THC NMHC THCE NMHCE CO NOX PM Gasoline 0-3750 0... LVW (lbs) THC 1 NMHC THCE 1 NMHCE CO NOX PM Gasoline 0-3750 0.80 0.31 4.2 0.6 0.10 Gasoline 3751-5750...—Intermediate Useful Life Standards (g/mi) for Heavy Light-Duty Trucks Fuel ALVW (lbs) THC NMHC THCE NMHCE CO...
NASA Technical Reports Server (NTRS)
Strahan, Susan E.; Douglass, Anne R.
2003-01-01
The Global Modeling Initiative has integrated two 35-year simulations of an ozone recovery scenario with an offline chemistry and transport model using two different meteorological inputs. Physically based diagnostics, derived from satellite and aircraft data sets, are described and then used to evaluate the realism of temperature and transport processes in the simulations. Processes evaluated include barrier formation in the subtropics and polar regions, and extratropical wave-driven transport. Some diagnostics are especially relevant to simulation of lower stratospheric ozone, but most are applicable to any stratospheric simulation. The temperature evaluation, which is relevant to gas phase chemical reactions, showed that both sets of meteorological fields have near climatological values at all latitudes and seasons at 30 hPa and below. Both simulations showed weakness in upper stratospheric wave driving. The simulation using input from a general circulation model (GMI(sub GCM)) showed a very good residual circulation in the tropics and northern hemisphere. The simulation with input from a data assimilation system (GMI(sub DAS)) performed better in the midlatitudes than at high latitudes. Neither simulation forms a realistic barrier at the vortex edge, leading to uncertainty in the fate of ozone-depleted vortex air. Overall, tracer transport in the offline GMI(sub GCM) has greater fidelity throughout the stratosphere than the GMI(sub DAS).
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen
2017-01-01
This presentation will summarize the changes in the products for the GPM V05 reprocessing cycle. It will concentrate on the gridded text product from the core satellite retrievals; however, all aspects of the GPROF GMI changes in this product apply equally to the other two gridded text products. The GPM mission reprocessed its products in May of 2017 as part of a continuing improvement of precipitation retrievals. This led to important improvements in the retrievals and therefore also necessitated reprocessing the gridded text products. The V05 GPROF changes not only improved the retrievals but substantially altered the format, and this compelled changes to the gridded text products. Especially important in this regard is the change in GPROF2017 (used in V05) away from reporting the fraction of the total precipitation rate occurring as convection or in liquid phase. Instead, GPROF2017, and therefore the V05 gridded text products, report the rate of convective precipitation in mm/hr. The GPROF2017 algorithm now reports the frozen precipitation rate in mm/hr rather than the fraction of total precipitation that is liquid. Because the aim of the gridded text product is to remain simple, the radar and combined results will also change in V05 to reflect this change in the GMI retrieval. The presentation provides an analysis of these changes as well as a comparison with the swath products from which the hourly text grids were derived.
40 CFR 86.709-99 - In-use emission standards for 1999 and later model year light-duty trucks.
Code of Federal Regulations, 2012 CFR
2012-07-01
...—Intermediate Useful Life 1 Standards (g/mi) for Light Light-Duty Trucks Fuel LVW (lbs) THC NMHC THCE NMHCE CO... Fuel LVW (lbs) THC 2 NMHC 1 THCE 2 NMHCE 1 CO 1 NOX 1 PM 1 Gasoline 0-3750 0.80 0.31 4.2 0.6 0.10... 1 Standards (g/mi) for Heavy Light-Duty Trucks Fuel ALVW (lbs) THC NMHC THCE NMHCE CO NOX PM...
40 CFR 86.709-99 - In-use emission standards for 1999 and later model year light-duty trucks.
Code of Federal Regulations, 2011 CFR
2011-07-01
...—Intermediate Useful Life 1 Standards (g/mi) for Light Light-Duty Trucks Fuel LVW (lbs) THC NMHC THCE NMHCE CO... Fuel LVW (lbs) THC 2 NMHC 1 THCE 2 NMHCE 1 CO 1 NOX 1 PM 1 Gasoline 0-3750 0.80 0.31 4.2 0.6 0.10... 1 Standards (g/mi) for Heavy Light-Duty Trucks Fuel ALVW (lbs) THC NMHC THCE NMHCE CO NOX PM...
40 CFR 86.709-99 - In-use emission standards for 1999 and later model year light-duty trucks.
Code of Federal Regulations, 2013 CFR
2013-07-01
...—Intermediate Useful Life 1 Standards (g/mi) for Light Light-Duty Trucks Fuel LVW (lbs) THC NMHC THCE NMHCE CO... Fuel LVW (lbs) THC 2 NMHC 1 THCE 2 NMHCE 1 CO 1 NOX 1 PM 1 Gasoline 0-3750 0.80 0.31 4.2 0.6 0.10... 1 Standards (g/mi) for Heavy Light-Duty Trucks Fuel ALVW (lbs) THC NMHC THCE NMHCE CO NOX PM...
40 CFR 86.709-99 - In-use emission standards for 1999 and later model year light-duty trucks.
Code of Federal Regulations, 2010 CFR
2010-07-01
...—Intermediate Useful Life 1 Standards (g/mi) for Light Light-Duty Trucks Fuel LVW (lbs) THC NMHC THCE NMHCE CO... Fuel LVW (lbs) THC 2 NMHC 1 THCE 2 NMHCE 1 CO 1 NOX 1 PM 1 Gasoline 0-3750 0.80 0.31 4.2 0.6 0.10... 1 Standards (g/mi) for Heavy Light-Duty Trucks Fuel ALVW (lbs) THC NMHC THCE NMHCE CO NOX PM...
NASA Technical Reports Server (NTRS)
Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.
2015-01-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently or soon to be available: -Level-1 GPM Microwave Imager (GMI) and partner radiometer products, DPR products -Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products, DPR products -Level-3 daily and monthly products, DPR products -Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services currently or soon to be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; and monitoring services (e.g., Current Conditions) for applications. The Unified User Interface (UUI) is the next step in the evolution of the GES DISC web site.
It attempts to provide seamless access to data, information and services through a single interface without sending the user to different applications or URLs (e.g., search, access, subset, Giovanni, documents).
NASA Technical Reports Server (NTRS)
Douglass, A. R.; Stolarski, R. S.; Strahan, S. E.; Polansky, B. C.
2006-01-01
The sensitivity of Arctic ozone loss to polar stratospheric cloud volume (V(sub PSC)) and chlorine and bromine loading is explored using chemistry and transport models (CTMs). A simulation using multi-decadal output from a general circulation model (GCM) in the Goddard Space Flight Center (GSFC) CTM complements one recycling a single year's GCM output in the Global Modeling Initiative (GMI) CTM. Winter polar ozone loss in the GSFC CTM depends on equivalent effective stratospheric chlorine (EESC) and polar vortex characteristics (temperatures, descent, isolation, polar stratospheric cloud amount). Polar ozone loss in the GMI CTM depends only on changes in EESC as the dynamics repeat annually. The GSFC CTM simulation reproduces a linear relationship between ozone loss and V(sub PSC) derived from observations for 1992-2003, which holds for EESC within approx. 85% of its maximum (approx. 1990-2020). The GMI simulation shows that ozone loss varies linearly with EESC for constant, high V(sub PSC).
NASA Technical Reports Server (NTRS)
Strahan, Susan E.; Douglass, Anne R.
2004-01-01
The Global Modeling Initiative (GMI) has integrated two 36-year simulations of an ozone recovery scenario with an offline chemistry and transport model using two different meteorological inputs. Physically based diagnostics, derived from satellite and aircraft data sets, are described and then used to evaluate the realism of temperature and transport processes in the simulations. Processes evaluated include barrier formation in the subtropics and polar regions, and extratropical wave-driven transport. Some diagnostics are especially relevant to simulation of lower stratospheric ozone, but most are applicable to any stratospheric simulation. The global temperature evaluation, which is relevant to gas phase chemical reactions, showed that both sets of meteorological fields have near climatological values at all latitudes and seasons at 30 hPa and below. Both simulations showed weakness in upper stratospheric wave driving. The simulation using input from a general circulation model (GMI(GCM)) showed a very good residual circulation in the tropics and Northern Hemisphere. The simulation with input from a data assimilation system (GMI(DAS)) performed better in the midlatitudes than it did at high latitudes. Neither simulation forms a realistic barrier at the vortex edge, leading to uncertainty in the fate of ozone-depleted vortex air. Overall, tracer transport in the offline GMI(GCM) has greater fidelity throughout the stratosphere than it does in the GMI(DAS).
Assimilation of all-weather GMI and ATMS observations into HWRF
NASA Astrophysics Data System (ADS)
Moradi, I.; Evans, F.; McCarty, W.; Marks, F.; Eriksson, P.
2017-12-01
We propose a novel Bayesian Monte Carlo Integration (BMCI) technique to retrieve the profiles of temperature, water vapor, and cloud liquid/ice water content from cloudy microwave measurements in the presence of tropical cyclones (TCs). These retrievals can then either be used directly by meteorologists to analyze the structure of TCs or be assimilated to provide accurate initial conditions for NWP models. The technique is applied to data from the Advanced Technology Microwave Sounder (ATMS) onboard the Suomi National Polar-orbiting Partnership (NPP) satellite and the Global Precipitation Measurement (GPM) Microwave Imager (GMI).
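In outline, BMCI estimates the atmospheric state as a weighted average over a retrieval database, with each database entry weighted by how well its simulated brightness temperatures match the observation under a Gaussian error model. A minimal sketch of that core step (the function name, two-channel example, and error values are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def bmci_retrieve(tb_obs, tb_db, x_db, sigma):
    """Core step of Bayesian Monte Carlo Integration (sketch).

    tb_obs : (n_chan,) observed brightness temperatures (K)
    tb_db  : (n_db, n_chan) simulated brightness temperatures (K)
    x_db   : (n_db,) database values of the retrieved quantity
    sigma  : (n_chan,) combined observation/model error (K)
    Returns the posterior-mean estimate of x.
    """
    # Gaussian measurement likelihood for each database entry
    chi2 = np.sum(((tb_db - tb_obs) / sigma) ** 2, axis=1)
    w = np.exp(-0.5 * chi2)
    return np.sum(w * x_db) / np.sum(w)

# Illustrative two-channel example
tb_obs = np.array([250.0, 230.0])
tb_db = np.array([[250.0, 230.0],   # close match to the observation
                  [260.0, 240.0]])  # poorer match
x_db = np.array([1.0, 5.0])         # e.g., cloud liquid water path
est = bmci_retrieve(tb_obs, tb_db, x_db, sigma=np.array([2.0, 2.0]))
```

Because the first database entry matches the observation exactly, its weight dominates and the estimate lands essentially at its value of 1.0; in practice the database holds many thousands of simulated profiles rather than two.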
NASA Astrophysics Data System (ADS)
Panegrossi, Giulia; Casella, Daniele; Cinzia Marra, Anna; Petracca, Marco; Sanò, Paolo; Dietrich, Stefano
2015-04-01
The ongoing NASA/JAXA Global Precipitation Measurement mission (GPM) requires the full exploitation of the complete constellation of passive microwave (PMW) radiometers orbiting around the globe for global precipitation monitoring. In this context, the coherence of precipitation estimates from different passive microwave radiometers is a crucial need. We have developed two different passive microwave precipitation retrieval algorithms: one is the Cloud Dynamics Radiation Database algorithm (CDRD), a physically based Bayesian algorithm for conically scanning radiometers (i.e., DMSP SSMIS); the other is the Passive microwave Neural network Precipitation Retrieval (PNPR) algorithm for cross-track scanning radiometers (i.e., NOAA and MetOp-A/B AMSU-A/MHS, and NPP Suomi ATMS). The algorithms, originally created for application over Europe and the Mediterranean basin, and used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF, http://hsaf.meteoam.it), have recently been modified and extended to Africa and the Southern Atlantic for application to the MSG full disk area. The two algorithms are based on the same physical foundation, i.e., the same cloud-radiation model simulations used as a priori information in the Bayesian solver and as the training dataset in the neural network approach, and they also use similar procedures for identification of frozen background surfaces, detection of snowfall, and determination of a pixel-based quality index of the surface precipitation retrievals. In addition, similar procedures for the screening of non-precipitating pixels are used. A novel algorithm for the detection of precipitation in tropical/sub-tropical areas has been developed.
The precipitation detection algorithm shows a small rate of false alarms (also over arid/desert regions) and a superior detection capability in comparison with other widely used screening algorithms, and it is applicable to all available PMW radiometers in the GPM constellation of satellites (including NPP Suomi ATMS and GMI). Three years of SSMIS and AMSU/MHS data have been considered to carry out a verification study over Africa of the retrievals from the CDRD and PNPR algorithms. The precipitation products from the TRMM Precipitation Radar (PR) (TRMM products 2A25 and 2A23) have been used as ground truth. The results of this study, aimed at assessing the accuracy of the precipitation retrievals in different climatic regions and precipitation regimes, will be presented. Particular emphasis will be given to the analysis of the level of coherence of the precipitation estimates and patterns between the two algorithms exploiting different radiometers. Recent developments aimed at the full exploitation of the GPM constellation of satellites for optimal precipitation/drought monitoring will also be presented.
Radicals and Reservoirs in the GMI Chemistry and Transport Model: Comparison to Measurements
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Stolarski, Richard S.; Strahan, Susan E.; Connell, Peter S.
2004-01-01
We have used a three-dimensional chemistry and transport model (CTM), developed under the Global Modeling Initiative (GMI), to carry out two simulations of the composition of the stratosphere under changing halogen loading for 1995 through 2030. The two simulations differ only in that one uses meteorological fields from a general circulation model while the other uses meteorological fields from a data assimilation system. A single year's winds and temperatures are repeated for each 36-year simulation. We compare results from these two simulations with an extensive collection of data from satellite and ground-based measurements for 1993-2000. Comparisons of simulated fields with observations of radical and reservoir species for some of the major ozone-destroying compounds are of similar quality for both simulations. Differences in the upper stratosphere, caused by transport of total reactive nitrogen and methane, impact the balance among the ozone loss processes and the sensitivity of the two simulations to the change in composition.
Modeling the Frozen-In Anticyclone in the 2005 Arctic Summer Stratosphere
NASA Technical Reports Server (NTRS)
Allen, D. R.; Douglass, A. R.; Manney, G. L.; Strahan, S. E.; Krosschell, J. C.; Trueblood, J.
2010-01-01
Immediately following the breakup of the 2005 Arctic spring stratospheric vortex, a tropical air mass, characterized by low potential vorticity (PV) and high nitrous oxide (N2O), was advected poleward and became trapped in the easterly summer polar vortex. This feature, known as a "Frozen-In Anticyclone" (FrIAC), was observed in Earth Observing System (EOS) Aura Microwave Limb Sounder (MLS) data to span the potential temperature range from approximately 580 to 1100 K (approximately 25 to 40 km altitude) and to persist from late March to late August 2005. This study compares MLS N2O observations with simulations from the Global Modeling Initiative (GMI) chemistry and transport model, the GEOS-5/MERRA Replay model, and the Van Leer Icosahedral Triangular Advection (VITA) isentropic transport model to elucidate the processes involved in the lifecycle of the FrIAC, which is here divided into three distinct phases. During the "spin-up phase" (March to early April), strong poleward flow resulted in a tight isolated anticyclonic vortex at approximately 70-90 deg N, marked with elevated N2O. GMI, Replay, and VITA all reliably simulated the spin-up of the FrIAC, although the GMI and Replay peak N2O values were too low. The FrIAC became trapped in the developing summer easterly flow and circulated around the polar region during the "anticyclonic phase" (early April to the end of May). During this phase, the FrIAC crossed directly over the pole between the 7th and 14th of April. The VITA and Replay simulations transported the N2O anomaly intact during this crossing, in agreement with MLS, but unrealistic dispersion of the anomaly occurred in the GMI simulation due to excessive numerical mixing at the polar cap. The vortex associated with the FrIAC was apparently resistant to the weak vertical shear during the anticyclonic phase, and it thereby protected the embedded N2O anomaly from stretching.
The vortex decayed in late May due to diabatic processes, leaving the N2O anomaly exposed to horizontal and vertical wind shears during the "shearing phase" (June to August). The observed lifetime of the FrIAC during this phase is consistent with time-scales calculated from the ambient horizontal and vertical wind shear. Replay maintained the horizontal structure of the N2O anomaly, similar to MLS, well into August. The VITA simulation also captured the horizontal structure of the FrIAC during this phase, but VITA eventually developed fine-scale N2O structure not observed in MLS data.
NASA Astrophysics Data System (ADS)
Oman, L.; Strahan, S. E.
2017-12-01
The Quasi-Biennial Oscillation (QBO) is the dominant mode of variability in the tropical stratosphere on interannual time scales. It has been shown to impact both stratospheric dynamics and important trace gas constituent distributions. The QBO timing with respect to the seasonal cycle in each hemisphere is significant in determining its impact on variability up to decadal scales. The composition response to the QBO is examined using the new MERRA-2 GMI "Replay" simulation, an atmospheric composition community resource, run at the native MERRA-2 approximately ½° horizontal resolution on the cubed sphere. MERRA-2 GMI is driven by the online use of key MERRA-2 meteorological quantities (i.e., U, V, T, and P), with all other variables calculated in response to those and to boundary condition forcings from 1980-2016. The simulation, combined with NASA's UARS and Aura satellite measurements, has allowed us to quantify the impact of the QBO on stratospheric composition in more detail than was ever possible before, revealing preferential pathways and transport timings necessary for understanding the QBO impact on composition throughout the stratosphere.
A Detailed Examination of the GPM Core Satellite Gridded Text Product
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen A.; Kummerow, C.; Huffman, George; Olson, William S.; Kwiatowski, John M.
2015-01-01
The Global Precipitation Measurement (GPM) mission quarter-degree gridded-text product has a similar file format and a similar purpose as the Tropical Rainfall Measuring Mission (TRMM) 3G68 quarter-degree product. The GPM text-grid format is an hourly summary of surface precipitation retrievals from various GPM instruments and combinations of GPM instruments. The GMI Goddard Profiling (GPROF) retrieval provides the widest swath (800 km) and performs the retrieval using the GPM Microwave Imager (GMI). The Ku radar provides the widest radar swath (250 km) and also provides continuity with the TRMM Ku Precipitation Radar. GPM's Ku+Ka band matched swath (125 km) provides a dual-frequency precipitation retrieval. The "combined" retrieval (125 km swath) provides a multi-instrument precipitation retrieval based on the GMI, the DPR Ku radar, and the DPR Ka radar. While the data are reported in hourly grids, all hours for a day are packaged into a single text file that is gzipped to reduce file size and to speed up downloading. The data are reported on a 0.25 deg x 0.25 deg grid.
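Working with a quarter-degree global grid means mapping latitude/longitude to row/column indices on a 720 x 1440 array. A minimal sketch under the common convention that row 0 starts at -90° latitude and column 0 at -180° longitude (the function name and this ordering are assumptions; the product's own documentation defines the actual layout):

```python
def quarter_degree_cell(lat, lon):
    """Map (lat, lon) to (row, col) on a 0.25-degree global grid.

    Assumes row 0 starts at -90 deg latitude and col 0 at -180 deg
    longitude; verify against the product documentation before use.
    """
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError("coordinates out of range")
    # Clamp the upper boundary into the last cell (720 rows, 1440 cols)
    row = min(int((lat + 90.0) / 0.25), 719)
    col = min(int((lon + 180.0) / 0.25), 1439)
    return row, col

# Example: locate the cell containing 40.42N, 104.71W
cell = quarter_degree_cell(40.42, -104.71)
```

Each hourly record in the text file can then be accumulated into the cell returned here, which is how a daily file of hourly grids is typically consumed.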
High-Resolution Autoradiography
1955-01-01
(Abstract garbled in digitization; recoverable fragments concern the tungsten concentration of a 1-micron cube in an alloy, application and development of the wet-process autoradiographic method, concentration gradients, and the range of beta particles in water.)
Enhanced giant magnetoimpedance in heterogeneous nanobrush
2012-01-01
A giant magnetoimpedance (GMI) effect with high sensitivity and a large working range is found in a novel nanostructure: the nanobrush. The nanostructure is composed of a soft magnetic nanofilm and a nanowire array, fabricated by RF magnetron sputtering and electrochemical deposition, respectively. The optimal GMI ratio of the nanobrush exceeds 250%, higher than that of pure FeNi film and some sandwich structures at low frequency. The design of this structure is based on the vortex distribution of magnetic moments in the thin film, which can be induced by the exchange coupling effect between the interfaces of the nanobrush. PMID:22963551
NASA Astrophysics Data System (ADS)
Kim, M. J.; Jin, J.; McCarty, W.; Todling, R.; Holdaway, D. R.; Gelaro, R.
2014-12-01
The NASA Global Modeling and Assimilation Office (GMAO) works to maximize the impact of satellite observations in the analysis and prediction of climate and weather through integrated Earth system modeling and data assimilation. To achieve this goal, the GMAO undertakes model and assimilation development and generates products to support NASA instrument teams and the NASA Earth science program. Currently, the Atmospheric Data Assimilation System (ADAS) in the Goddard Earth Observing System Model, Version 5 (GEOS-5) combines millions of observations and short-term forecasts to determine the best estimate, or analysis, of the instantaneous atmospheric state. However, ADAS has been geared toward utilization of observations in clear-sky conditions, and the majority of satellite channel data affected by clouds are discarded. Microwave imager data from satellites can be a significant source of information for clouds and precipitation, but the data are presently underutilized, as only surface rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) are assimilated, with small weight assigned in the analysis process. As clouds and precipitation often occur in regions with high forecast sensitivity, improvements in the temperature, moisture, wind, and cloud analysis of these regions are likely to contribute to significant gains in numerical weather prediction accuracy. This presentation is intended to give an overview of GMAO's recent progress in assimilating all-sky GPM Microwave Imager (GMI) radiance data in the GEOS-5 system. This includes development of various new components to assimilate cloud- and precipitation-affected data in addition to data in clear-sky conditions. New observation operators, quality controls, moisture control variables, observation and background error models, and a methodology to incorporate the linearized moisture physics in the assimilation system are described.
In addition, preliminary results showing the impacts of assimilating all-sky GMI data on GEOS-5 forecasts are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shekiro, Joe; Elander, Richard
2015-12-01
The purpose of this cooperative work agreement between General Mills Inc. (GMI) and NREL is to determine the feasibility of producing a valuable food ingredient (xylo-oligosaccharides or XOS), a highly soluble fiber material, from agricultural waste streams, at an advantaged cost level relative to similar existing ingredients. The scope of the project includes pilot-scale process development (Task 1), compositional analysis (Task 2), and techno-economic analysis (Task 3).
NASA Astrophysics Data System (ADS)
Chen, S.; Chen, H.; Hu, J.; Zhang, A.; Min, C.
2017-12-01
It has been more than 3 years since the launch of the Global Precipitation Measurement (GPM) core satellite on February 27, 2014. This satellite carries two core sensors, the dual-frequency precipitation radar (DPR) and the microwave imager (GMI), which are state-of-the-art sensors that observe precipitation over the globe. The DPR level-2 product provides both precipitation rates and phases. The precipitation phase information can help advance global hydrological cycle modeling, which is particularly crucial for high-altitude and high-latitude regions where solid precipitation is the dominant source of water. However, the reliability and accuracy of the DPR level-2 product remain insufficiently characterized. Assessing the performance and uncertainty of precipitation retrievals derived from the DPR on board the satellite is needed by precipitation algorithm developers and end users in the hydrology, weather, meteorology, and hydro-related communities. In this study, precipitation estimates derived from the DPR are compared with those derived from the CSU-CHILL National Weather Radar from March 2014 to October 2017. The CSU-CHILL radar, located in Greeley, CO, is an advanced, transportable, dual-polarized, dual-wavelength (S- and X-band) weather radar. The systematic and random errors of the DPR in measuring precipitation are analyzed as a function of precipitation rate and precipitation type (liquid and solid). This study is expected to offer insights into the performance of this most advanced sensor and thus provide useful feedback to the algorithm developers as well as the GPM data end users.
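The systematic and random error decomposition described above reduces to the per-bin mean and standard deviation of DPR-minus-reference differences, binned by the reference rain rate. A minimal sketch (array names, bin edges, and the matched sample values are illustrative assumptions, not the study's data):

```python
import numpy as np

def error_by_bin(dpr, ref, bin_edges):
    """Systematic (mean) and random (std) error of dpr - ref,
    binned by the reference precipitation rate."""
    diff = dpr - ref
    which = np.digitize(ref, bin_edges)  # bin index for each sample
    stats = {}
    for b in range(1, len(bin_edges)):
        sel = which == b
        if sel.any():
            stats[(bin_edges[b - 1], bin_edges[b])] = (
                float(diff[sel].mean()),  # systematic error
                float(diff[sel].std()),   # random error
            )
    return stats

# Illustrative matched rain rates (mm/h): DPR vs. ground radar reference
ref = np.array([0.5, 0.8, 2.0, 3.0, 8.0, 12.0])
dpr = np.array([0.6, 0.7, 2.4, 2.8, 9.0, 11.0])
stats = error_by_bin(dpr, ref, bin_edges=[0.0, 1.0, 5.0, 20.0])
```

The same decomposition can be repeated per precipitation type (liquid vs. solid) by masking the matched samples before binning.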
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-12-03
Under the amended Clean Air Act of 1978, the U.S. Environmental Protection Agency has granted a waiver of the 3.4 g/mi CO emission standard to Toyo Kogyo Co. Ltd.'s 91 and 120 CID (cubic inch displacement) 1981 model year light-duty motor vehicles and has established an interim standard of 7.0 g/mi, because these models will be unable to incorporate an effective CO control technology to meet the statutory standard by 1981 and because the public health will not be unduly threatened by non-attainment of the 3.4 g/mi standard. This decision should enable Toyo Kogyo to market two of its engines without catalyst changes. CO emission standard waivers were denied to Fuji Heavy Industries Ltd., Nissan Motor Co. Ltd., and Renault for their respective 1981 light-duty motor vehicles, and to Toyo Kogyo for two 1982 vehicles and a rotary engine, mainly because these vehicles are thought able to meet the statutory standard for 1981 and 1982 even if costs, drivability, and fuel economy are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jingshun, E-mail: jingshun-liu@163.com (School of Materials Science and Engineering, Inner Mongolia University of Technology, Hohhot 010051); Qin, Faxiang, E-mail: faxiang.qin@gmail.com
2014-05-07
We report on a combined current-modulation annealing (CCMA) method, which integrates optimized pulsed current (PC) and DC annealing techniques, for improving the giant magnetoimpedance (GMI) effect and field sensitivity of Co-rich amorphous microwires. Relative to an as-prepared Co68.2Fe4.3B15Si12.5 wire, CCMA is shown to remarkably improve the GMI response. At 10 MHz, the maximum GMI ratio and field sensitivity of the as-prepared wire were increased by 3.5 and 2.28 times, respectively, when subjected to CCMA. CCMA increased the atomic order orientation and circumferential permeability of the wire through the co-action of high-density pulsed magnetic field energy and thermal activation energy at the PC annealing stage, as well as the formation of uniform circular magnetic domains by a stable DC magnetic field at the DC annealing stage. The magnetic moment can overcome eddy-current damping or pinning in rotational magnetization, giving rise to a double-peak feature and a wider working field range (up to ±2 Oe) at relatively high frequencies (f ≥ 1 MHz).
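For context, the GMI ratio and field sensitivity reported in such studies are conventionally defined against the impedance at the maximum (saturating) applied field; a minimal sketch (the impedance curve below is illustrative, not the paper's data):

```python
def gmi_ratio(Z_H, Z_Hmax):
    """Conventional GMI ratio in percent: 100 * (Z(H) - Z(Hmax)) / Z(Hmax)."""
    return 100.0 * (Z_H - Z_Hmax) / Z_Hmax

def max_field_sensitivity(ratios, fields_oe):
    """Largest slope magnitude of the GMI-ratio curve, in % per Oe."""
    slopes = [(r2 - r1) / (h2 - h1)
              for (r1, r2, h1, h2) in zip(ratios, ratios[1:],
                                          fields_oe, fields_oe[1:])]
    return max(abs(s) for s in slopes)

# Illustrative impedance (ohms) versus applied field (Oe); Z at Hmax is 100 ohms
fields = [0.0, 1.0, 2.0, 4.0]
Z = [150.0, 180.0, 120.0, 100.0]
ratios = [gmi_ratio(z, Z[-1]) for z in Z]   # [50.0, 80.0, 20.0, 0.0]
```

The "2.28 times" sensitivity improvement quoted in the abstract refers to an increase in this slope after annealing.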
NASA Technical Reports Server (NTRS)
Ostrenga, D.; Liu, Z.; Vollmer, B.; Teng, W.; Kempler, S.
2014-01-01
On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following: 1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products. 2. Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products. 3. Level-3 daily and monthly products. 4. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm).
Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g., Current Conditions) for applications.
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, D.; Vollmer, B.; Deshong, B.; Greene, M.; Teng, W.; Kempler, S. J.
2015-01-01
On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following: 1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products. 2. Goddard Profiling Algorithm (GPROF) GMI and partner products. 3. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm).
Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g., Current Conditions) for applications. In this presentation, we will present GPM data products and services with examples.
NASA Astrophysics Data System (ADS)
Ostrenga, D.; Liu, Z.; Vollmer, B.; Teng, W. L.; Kempler, S. J.
2014-12-01
On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM "Core Observatory" satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following: 1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products. 2. Goddard Profiling Algorithm (GPROF) GMI and partner products. 3. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm).
Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g. Current Conditions) for applications. In this presentation, we will present GPM data products and services with examples.
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, D.; Vollmer, B.; Deshong, B.; MacRitchie, K.; Greene, M.; Kempler, S.
2016-01-01
Precipitation is an important dataset in hydrometeorological research and applications such as flood modeling, drought monitoring, etc. On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data. The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). GPM products currently available include the following: 1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products. 2. Goddard Profiling Algorithm (GPROF) GMI and partner products (Level-2 and Level-3). 3. GPM dual-frequency precipitation radar and combined products (Level-2 and Level-3). 4. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final run). GPM data can be accessed through a number of data services (e.g., Simple Subset Wizard, OPeNDAP, WMS, WCS, ftp, etc.). A newly released Unified User Interface, or UUI, is a single interface providing users seamless access to data, information and services.
For example, a search for precipitation products will not only return TRMM and GPM products, but also other global precipitation products such as MERRA (Modern Era Retrospective-Analysis for Research and Applications), GLDAS (Global Land Data Assimilation Systems), etc. New features and capabilities have recently been added in GIOVANNI to allow exploring and inter-comparing GPM IMERG (Integrated Multi-satellitE Retrievals for GPM) half-hourly and monthly precipitation products as well as other precipitation products such as TRMM, MERRA, NLDAS, GLDAS, etc. GIOVANNI is a web-based tool developed by the GES DISC to visualize and analyze Earth science data without having to download data and software. During the GPM era, the GES DISC will continue to develop and provide data services for supporting applications. We will update and enhance existing TRMM applications (Current Conditions, the USDA Crop Explorer, etc.) with higher-spatial-resolution IMERG products. In this presentation, we will present GPM data products and services with examples.
A multi-sensor data-driven methodology for all-sky passive microwave inundation retrieval
NASA Astrophysics Data System (ADS)
Takbiri, Zeinab; Ebtehaj, Ardeshir M.; Foufoula-Georgiou, Efi
2017-06-01
We present a multi-sensor Bayesian passive microwave retrieval algorithm for flood inundation mapping at high spatial and temporal resolutions. The algorithm takes advantage of observations from multiple sensors in optical, short-infrared, and microwave bands, thereby allowing for detection and mapping of the sub-pixel fraction of inundated areas under almost all-sky conditions. The method relies on a nearest-neighbor search and a modern sparsity-promoting inversion method that make use of an a priori dataset in the form of two joint dictionaries. These dictionaries contain almost overlapping observations by the Special Sensor Microwave Imager and Sounder (SSMIS) on board the Defense Meteorological Satellite Program (DMSP) F17 satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites. Evaluation of the retrieval algorithm over the Mekong Delta shows that it is capable of capturing to a good degree the inundation diurnal variability due to localized convective precipitation. At longer timescales, the results demonstrate consistency with the ground-based water level observations, denoting that the method is properly capturing inundation seasonal patterns in response to regional monsoonal rain. The calculated Euclidean distance, rank-correlation, and also copula quantile analysis demonstrate a good agreement between the outputs of the algorithm and the observed water levels at monthly and daily timescales. The current inundation products are at a resolution of 12.5 km and taken twice per day, but a higher resolution (order of 5 km and every 3 h) can be achieved using the same algorithm with the dictionary populated by the Global Precipitation Mission (GPM) Microwave Imager (GMI) products.
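The nearest-neighbor step of such a dictionary-based retrieval can be sketched as below (a toy illustration with made-up brightness-temperature vectors and flood fractions; the authors' actual method additionally uses a sparsity-promoting inversion):

```python
import numpy as np

def knn_inundation(tb_obs, dict_tb, dict_flood_frac, k=3):
    """Estimate the sub-pixel inundated fraction by averaging the flood
    fractions of the k dictionary entries whose brightness-temperature
    vectors are closest (Euclidean distance) to the observation."""
    dist = np.linalg.norm(dict_tb - tb_obs, axis=1)  # distance to each entry
    nearest = np.argsort(dist)[:k]
    return float(dict_flood_frac[nearest].mean())

# Toy dictionary: 4 entries, 2 microwave channels each (K), with known fractions
dict_tb = np.array([[250.0, 260.0],
                    [200.0, 210.0],
                    [252.0, 258.0],
                    [150.0, 160.0]])
dict_ff = np.array([0.1, 0.5, 0.2, 0.9])
frac = knn_inundation(np.array([251.0, 259.0]), dict_tb, dict_ff, k=2)
```

In the paper, the dictionary entries come from coincident SSMIS and MODIS observations rather than the synthetic values used here.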
Chen, Jiawen; Li, Jianhua; Li, Yiyuan; Chen, Yulong
2018-01-01
A miniaturized Co-based amorphous wire GMI (giant magnetoimpedance) magnetic sensor was designed and fabricated in this paper. The Co-based amorphous wire was used as the sensing element due to its high sensitivity to the magnetic field. A three-dimensional micro coil surrounding the Co-based amorphous wire, used to extract the electrical signal, was fabricated by MEMS (Micro-Electro-Mechanical System) technology. The three-dimensional micro pick-up coil was designed and simulated with HFSS (High Frequency Structure Simulator) software to determine the key parameters. Surface micro-machining MEMS technology was employed to fabricate the three-dimensional coil. The size of the developed amorphous wire magnetic sensor is 5.6 × 1.5 × 1.1 mm3. A Helmholtz coil was used to characterize the performance of the device. The test results of the sensor sample show that the voltage change is 130 mV/Oe and the linearity error is 4.83% in the range of 0~45,000 nT. The results indicate that the developed miniaturized magnetic sensor has high sensitivity. Testing the electrical resistance of the samples also showed high uniformity across devices. PMID:29494477
The Global Precipitation Measurement (GPM) Project
NASA Technical Reports Server (NTRS)
Azarbarzin, Ardeshir; Carlisle, Candace
2010-01-01
The Global Precipitation Measurement (GPM) mission is an international cooperative effort to advance the understanding of the physics of the Earth's water and energy cycle. Accurate and timely knowledge of global precipitation is essential for understanding the weather/climate/ecological system, for improving our ability to manage freshwater resources, and for predicting high-impact natural hazard events including floods, droughts, extreme weather events, and landslides. The GPM Core Observatory will be a reference standard to uniformly calibrate data from a constellation of spacecraft with passive microwave sensors. GPM is being developed under a partnership between the United States (US) National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). NASA's Goddard Space Flight Center (GSFC), in Greenbelt, MD is developing the Core Observatory, two GPM Microwave Imager (GMI) instruments, the Ground Validation System and the Precipitation Processing System for the GPM mission. JAXA will provide a Dual-frequency Precipitation Radar (DPR) for installation on the Core satellite and launch services for the Core Observatory. The second GMI instrument will be flown on a partner-provided spacecraft. Other US agencies and international partners contribute to the GPM mission by providing precipitation measurements obtained from their own spacecraft and/or providing ground-based precipitation measurements to support ground validation activities. The Precipitation Processing System will provide standard data products for the mission.
NASA Astrophysics Data System (ADS)
Buettel, G.; Joppich, J.; Hartmann, U.
2017-12-01
Giant magnetoimpedance (GMI) measurements in the high-frequency regime utilizing a coplanar waveguide with an integrated Permalloy multilayer and micromachined on a silicon cantilever are reported. The fabrication process is described in detail. The aspect ratio of the magnetic multilayer in the magnetoresistive and magnetostrictive device was varied. Tensile strain and compressive strain were applied. Vector network analyzer measurements in the range from the skin effect to ferromagnetic resonance confirm the technological potential of GMI-based micro-electro-mechanical devices for strain and magnetic field sensing applications. The strain-impedance gauge factor was quantified by finite element strain calculations and reaches a maximum value of almost 200.
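The strain-impedance gauge factor quoted above is commonly defined, by analogy with resistive strain gauges, as the relative impedance change per unit strain; a minimal sketch with illustrative numbers (not the paper's measurements):

```python
def gauge_factor(delta_z_over_z, strain):
    """GF = (dZ/Z) / epsilon: relative impedance change per unit applied strain."""
    return delta_z_over_z / strain

# A 2% impedance change at 100 microstrain corresponds to GF = 200,
# the order of magnitude reported for the GMI-based device in the abstract.
gf = gauge_factor(0.02, 100e-6)
```

Metallic strain gauges typically reach GF of about 2, which is why a value near 200 marks the technological potential the abstract describes.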
2009-07-31
CAPE CANAVERAL, Fla. – NASA Administrator Charles Bolden signs an agreement defining the terms of cooperation between NASA and JAXA on the Global Precipitation Measurement, or GPM, mission. The ceremony took place July 30 at the Kennedy Space Center Visitor Complex, Fla. Through the agreement, NASA is responsible for the GPM core observatory spacecraft bus, the GPM Microwave Imager, or GMI, carried by it, and a second GMI to be flown on a partner-provided Low-Inclination Observatory. JAXA will supply the Dual-frequency Precipitation Radar for the core observatory, an H-IIA rocket for the core observatory's launch in July 2013, and data from a conical-scanning microwave imager on the upcoming Global Change Observation Mission satellite. Photo credit: NASA/Jack Pfaller
NASA Astrophysics Data System (ADS)
Zhang, K.; Gasiewski, A. J.
2017-12-01
A horizontally inhomogeneous unified microwave radiative transfer (HI-UMRT) model based upon a nonspherical hydrometeor scattering model is being developed at the University of Colorado at Boulder to facilitate forward radiative simulations for 3-dimensionally inhomogeneous clouds in severe weather. The HI-UMRT 3-D analytical solution is based on incorporating a planar-stratified 1-D UMRT algorithm within a horizontally inhomogeneous iterative perturbation scheme. Single-scattering parameters are computed using the Discrete Dipole Scattering (DDSCAT v7.3) program for hundreds of carefully selected nonspherical complex frozen hydrometeors from the NASA/GSFC DDSCAT database. The required analytic factorization symmetry of the transition matrix in a normalized RT equation was analytically proved and validated numerically using the DDSCAT-based full Stokes matrix of randomly oriented hydrometeors. The HI-UMRT model thus inherits the properties of unconditional numerical stability, efficiency, and accuracy from the UMRT algorithm and provides a practical 3-D two-Stokes-parameter radiance solution with Jacobian to be used within microwave retrievals and data assimilation schemes. In addition, a fast forward radar reflectivity operator with Jacobian based on DDSCAT backscatter efficiency computed for large hydrometeors is incorporated into the HI-UMRT model to provide applicability to active radar sensors. The HI-UMRT will be validated strategically at two levels: 1) intercomparison of brightness temperature (Tb) results with those of several 1-D and 3-D RT models, including UMRT, CRTM and Monte Carlo models; 2) intercomparison of Tb with observed data from combined passive and active spaceborne sensors (e.g., GPM GMI and DPR). A precise expression for determining the required number of 3-D iterations to achieve an error bound on the perturbation solution will be developed to facilitate the numerical verification of the HI-UMRT code complexity and computation performance.
Consistent radiative transfer modeling of active and passive observations of precipitation
NASA Astrophysics Data System (ADS)
Adams, Ian
2016-04-01
Spaceborne platforms such as the Tropical Rainfall Measurement Mission (TRMM) and the Global Precipitation Measurement (GPM) mission exploit a combination of active and passive sensors to provide a greater understanding of the three-dimensional structure of precipitation. While "operationalized" retrieval algorithms require fast forward models, the ability to perform higher fidelity simulations is necessary in order to understand the physics of remote sensing problems by testing assumptions and developing parameterizations for the fast models. To ensure proper synergy between active and passive modeling, forward models must be consistent when modeling the responses of radars and radiometers. This work presents a self-consistent radiative transfer model for simulating radar reflectivities and millimeter wave brightness temperatures for precipitating scenes. To accomplish this, we extended the Atmospheric Radiative Transfer Simulator (ARTS) version 2.3 to solve the radiative transfer equation for active sensors and multiple scattering conditions. Early versions of ARTS (1.1) included a passive Monte Carlo solver, and ARTS is capable of handling atmospheres of up to three dimensions with ellipsoidal planetary geometries. The modular nature of ARTS facilitates extensibility, and the well-developed ray-tracing tools are suited for implementation of Monte Carlo algorithms. Finally, since ARTS handles the full Stokes vector, co- and cross-polarized reflectivity products are possible for scenarios that include nonspherical particles, with or without preferential alignment. The accuracy of the forward model will be demonstrated with precipitation events observed by TRMM and GPM, and the effects of multiple scattering will be detailed. The three-dimensional nature of the radiative transfer model will be useful for understanding the effects of nonuniform beamfill and multiple scattering for spatially heterogeneous precipitation events.
The targets of this forward model are GPM (the Dual-frequency Precipitation Radar (DPR) and GPM Microwave Imager (GMI)).
Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.
Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B
1994-10-01
A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). Its response time for completing a measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec vs. 53 sec; p < 0.0001), while reset times are almost identical (54 sec vs. 51 sec; n.s.). In a clinical setting, breath H2 concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement, with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2 concentrations the EC-60-H2 monitor required larger sample volumes to maintain sufficient precision, and sample volumes greater than 200 ml were required at H2 concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy-to-use and inexpensive desktop breath hydrogen analyzer, whereas in patients with difficulty cooperating (children, people with severe pulmonary insufficiency), special care has to be taken to obtain sufficiently large breath samples.
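The reported regression (Y = 1.12X + 1.022, r2 = 0.9617) implies a straightforward linear cross-calibration between the two instruments; a minimal sketch (the abstract does not state which monitor is X and which is Y, so the direction below is an assumption for illustration only):

```python
def cross_calibrate(x_ppm):
    """Map a reading on one monitor's scale (X, ppm H2) onto the other's
    (Y, ppm H2) using the regression from the study: Y = 1.12*X + 1.022.
    Which device plays X vs. Y here is an assumption, not stated in the abstract."""
    return 1.12 * x_ppm + 1.022

y = cross_calibrate(10.0)  # a 10 ppm reading maps to about 12.2 ppm
```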
NASA Astrophysics Data System (ADS)
Ahn, S. J.; Rheem, Y. W.; Yoon, S. S.; Lee, B. S.; Kim, C. G.; Kim, C. O.
2003-04-01
A commercial glass-covered, Co-based amorphous microwire (Co67Fe3.8Ni1.4B11.5Si14.6Mo1.7) is etched in a 60.51% hydrofluoric acid solution in order to remove its glass cover, and annealed in air by illumination with a pulsed Nd:YAG laser beam with an energy E of 48 mJ/pulse. Giant magnetoimpedance (GMI) profiles at a frequency f are measured as a function of the angle θ of the external field, H, with respect to the wire axis. The sign of the H values at the peak of the GMI profiles for f = 100 kHz and at the dip of the GMI profiles for f = 10 MHz, Hp and Hd, respectively, changes at θ = 85-90°, reflecting that the tilt angle of the helical domains is between 0° and 5° from the circular direction for the as-etched samples. The variations of Hp and Hd with θ for the sample with E = 48 mJ/pulse and Ha = 20 Oe show a change in sign of Hp and Hd at θ = 90-100°, reflecting that the tilt angle of the helical domains is decreased by about -10° compared to that of the as-etched sample.
Lundgren, Benjamin R; Connolly, Morgan P; Choudhary, Pratibha; Brookins-Little, Tiffany S; Chatterjee, Snigdha; Raina, Ramesh; Nomura, Christopher T
2015-01-01
The alternative sigma factor RpoN is a unique regulator found among bacteria. It controls numerous processes that range from basic metabolism to more complex functions such as motility and nitrogen fixation. Our current understanding of RpoN function is largely derived from studies on prototypical bacteria such as Escherichia coli, Bacillus subtilis and Pseudomonas putida. Although the extent and necessity of RpoN-dependent functions differ radically between these model organisms, each bacterium depends on a single chromosomal rpoN gene to meet the cellular demands of RpoN regulation. The bacterium Ralstonia solanacearum is often recognized for being the causative agent of wilt disease in crops, including banana, peanut and potato. However, this plant pathogen is also one of the few bacterial species whose genome possesses dual rpoN genes. To determine if the rpoN genes in this bacterium are genetically redundant and interchangeable, we constructed and characterized ΔrpoN1, ΔrpoN2 and ΔrpoN1 ΔrpoN2 mutants of R. solanacearum GMI1000. It was found that growth on a small range of metabolites, including dicarboxylates, ethanol, nitrate, ornithine, proline and xanthine, was dependent on only the rpoN1 gene. Furthermore, the rpoN1 gene was required for wilt disease on tomato whereas rpoN2 had no observable role in virulence or metabolism in R. solanacearum GMI1000. Interestingly, plasmid-based expression of rpoN2 did not fully rescue the metabolic deficiencies of the ΔrpoN1 mutants; full recovery was specific to rpoN1. In comparison, only rpoN2 was able to genetically complement a ΔrpoN E. coli mutant. These results demonstrate that the RpoN1 and RpoN2 proteins are not functionally equivalent or interchangeable in R. solanacearum GMI1000.
Meng, Fanhong; Babujee, Lavanya; Jacobs, Jonathan M; Allen, Caitilyn
2015-01-01
While most strains of the plant pathogenic bacterium Ralstonia solanacearum are tropical, the race 3 biovar 2 (R3bv2) subgroup attacks plants in cooler climates. To identify mechanisms underlying this trait, we compared the transcriptional profiles of R. solanacearum R3bv2 strain UW551 and tropical strain GMI1000 at 20°C and 28°C, both in culture and during tomato pathogenesis. 4.2% of the ORFs in the UW551 genome and 7.9% of the GMI1000 ORFs were differentially expressed by temperature in planta. The two strains had distinct transcriptional responses to temperature change. GMI1000 up-regulated several stress response genes at 20°C, apparently struggling to cope with plant defenses. At the cooler temperature, R3bv2 strain UW551 up-regulated a cluster encoding a mannose-fucose binding lectin, LecM; a quorum sensing-dependent protein, AidA; and a related hypothetical protein, AidC. The last two genes are absent from the GMI1000 genome. In UW551, all three genes were positively regulated by the adjacent SolI/R quorum sensing system. These temperature-responsive genes were required for full virulence in R3bv2. Mutants lacking lecM, aidA, or aidC were each significantly more reduced in virulence on tomato at 20°C than at 28°C in both a naturalistic soil soak inoculation assay and when they were inoculated directly into tomato stems. The lecM and aidC mutants also survived poorly in potato tubers at the seed tuber storage temperature of 4°C, and the lecM mutant was defective in biofilm formation in vitro. Together, these results suggest novel mechanisms, including a lectin, are involved in the unique temperate epidemiology of R3bv2.
Limakatso, Katleho; Corten, Lieselotte; Parker, Romy
2016-09-01
Phantom limb pain (PLP) is characterized by the anatomical shifting of neighbouring somatosensory and motor areas into a deafferented cortical area of the brain contralateral to the amputated limb. It has been shown that maladaptive neuroplasticity is positively correlated with the perception of PLP in amputees. Recent studies support the use of graded motor imagery (GMI) and its components to alleviate the severity of PLP and disability. However, there is insufficient collective empirical evidence exploring the effectiveness of these treatment modalities in amputees with PLP. This systematic review will therefore explore the effects of GMI and its individual components on PLP and disability in upper and lower limb amputees. We will utilize a customized search strategy to search PubMed, Cochrane Central Register of Controlled Trials, MEDLINE, Embase, PsycINFO, PEDro, Scopus, CINAHL, LILACS, DARE, Africa-Wide Information and Web of Science. We will also look at ClinicalTrials.gov (http://www.clinicaltrials.gov/), PACTR (http://www.pactr.org/) and the EU Clinical Trials Register (https://www.clinicaltrialsregister.eu/) for ongoing research. Two independent reviewers will screen articles for methodological validity. Thereafter, data from included studies will be extracted by two independent reviewers using a customized pre-set data extraction sheet. Studies with a comparable intervention and outcome measure will be pooled for meta-analysis. Studies with high heterogeneity will be analysed through a random-effects model. A narrative data analysis will be considered where there is insufficient data to perform a meta-analysis. Several studies investigating the effectiveness of GMI and its different components on PLP have drawn contrasting conclusions regarding the efficacy and applicability of GMI in clinical practice.
This systematic review will therefore gather and critically appraise all relevant data, to generate a substantial conclusion and recommendations for clinical practice and research on this subject. PROSPERO CRD42016036471.
Impact of GPM Rainrate Data Assimilation on Simulation of Hurricane Harvey (2017)
NASA Technical Reports Server (NTRS)
Li, Xuanli; Srikishen, Jayanthi; Zavodsky, Bradley; Mecikalski, John
2018-01-01
The GPM mission builds upon the Tropical Rainfall Measuring Mission (TRMM) legacy for next-generation global observation of rain and snow. GPM was launched in February 2014 with the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI) onboard. GPM has broad global coverage, approximately 70°S-70°N, with a swath of 245/125 km for the Ku (13.6 GHz)/Ka (35.5 GHz) band radar and 850 km for the 13-channel GMI. GPM also features better retrievals for heavy, moderate, and light rain and snowfall. The objectives of this study are to develop a methodology to assimilate GPM surface precipitation data with the Grid-point Statistical Interpolation (GSI) data assimilation system and the WRF-ARW model, and to investigate the potential and value of utilizing GPM observations in NWP for an operational environment. The GPM rain-rate data have been successfully assimilated using the GSI rain-data assimilation package. Impacts of the rain-rate data are found in the temperature and moisture fields of the initial conditions. Assimilation of either the GPM IMERG or GPROF rain product produces significant improvement in precipitation amount and structure for the Hurricane Harvey (2017) forecast. Since IMERG data are available half-hourly, further forecast improvement is expected with continuous assimilation of IMERG data.
2009-07-31
CAPE CANAVERAL, Fla. – Japan Aerospace Exploration Agency, or JAXA, President Keiji Tachikawa signs an agreement defining the terms of cooperation between NASA and JAXA on the Global Precipitation Measurement, or GPM, mission. The ceremony took place July 30 at the Kennedy Space Center Visitor Complex, Fla. Through the agreement, NASA is responsible for the GPM core observatory spacecraft bus, the GPM Microwave Imager, or GMI, carried by it, and a second GMI to be flown on a partner-provided Low-Inclination Observatory. JAXA will supply the Dual-frequency Precipitation Radar for the core observatory, an H-IIA rocket for the core observatory's launch in July 2013, and data from a conical-scanning microwave imager on the upcoming Global Change Observation Mission satellite. Photo credit: NASA/Jack Pfaller
NASA Astrophysics Data System (ADS)
Marra, A. C.; Porcù, F.; Baldini, L.; Petracca, M.; Casella, D.; Dietrich, S.; Mugnai, A.; Sanò, P.; Vulpiani, G.; Panegrossi, G.
2017-08-01
On 5 September 2015 a violent hailstorm hit the Gulf and the city of Naples in Italy. The storm originated over the Tyrrhenian Sea, dropping 7-10 cm diameter hailstones along its path. During its mature phase, at 08:47 UTC, the hailstorm was captured by an overpass of the Global Precipitation Measurement mission Core Observatory (GPM-CO), carrying the GPM Microwave Imager (GMI) and the Ka/Ku-band Dual-frequency Precipitation Radar (DPR). In this paper, observations by both GMI and DPR are thoroughly analyzed in conjunction with other spaceborne and ground-based measurements, to show how the GPM-CO complements established observational tools in monitoring, understanding, and characterizing severe weather. Rapid-scan MSG SEVIRI images show an extremely rapid development, with 10.8 μm cloud-top temperatures dropping by 65 K in 40 min, down to 198 K. The LIghtning NETwork registered over 37,000 strokes in 5 h, with the intracloud positive stroke fraction increasing during the regeneration phases, when ground-based polarimetric radar and DPR support the presence of large graupel/hail particles. DPR Ku-band 40 dBZ and 20 dBZ echo-top heights of 14 km and 16 km, respectively, indicate strong updraft and deep overshooting. Extremely low GMI brightness temperatures (TBs) over the convective core (158, 97, 67, and 87 K at 18.7, 36.5, 89 and 166 GHz) are compatible with the presence of massive ice particles. In two years of global GPM observations the storm ranks fourth and first in terms of minimum 36.5 and 18.7 GHz (V-pol) TBs, respectively. This study illustrates GPM-CO sensing capabilities for characterizing the structure of such a severe hailstorm, while providing observational evidence of its intensity and rarity, both globally and over the Mediterranean area.
The Global Precipitation Measurement (GPM) Mission: Overview and Status
NASA Technical Reports Server (NTRS)
Hou, Arthur
2008-01-01
The Global Precipitation Measurement (GPM) Mission is an international satellite mission to unify and advance global precipitation measurements from a constellation of dedicated and operational microwave sensors. The GPM concept centers on the deployment of a Core Spacecraft in a non-Sun-synchronous orbit at 65 degrees inclination carrying a dual-frequency precipitation radar (DPR) and a multi-frequency passive microwave radiometer (GMI) with high-frequency capabilities to serve as a precipitation physics observatory and calibration standard for the constellation radiometers. The baseline GPM constellation is envisioned to comprise conical-scanning microwave imagers (e.g., GMI, SSMIS, AMSR, MIS, MADRAS, GPM-Brazil) augmented with cross-track microwave temperature/humidity sounders (e.g., MHS, ATMS) over land. In addition to the Core Satellite, the GPM Mission will contribute a second GMI to be flown in a low-inclination (approximately 40 deg.) non-Sun-synchronous orbit to improve near real-time monitoring of hurricanes. GPM is a science mission with integrated applications goals aimed at (1) advancing the knowledge of the global water/energy cycle variability and freshwater availability and (2) improving weather, climate, and hydrological prediction capabilities through more accurate and frequent measurements of global precipitation. The GPM Mission is currently a partnership between NASA and the Japan Aerospace Exploration Agency (JAXA), with opportunities for additional partners in satellite constellation and ground validation activities. Within the framework of the inter-governmental Group on Earth Observations (GEO) and Global Earth Observation System of Systems (GEOSS), GPM has been identified as a cornerstone for the Precipitation Constellation (PC) being developed under the auspices of the Committee on Earth Observation Satellites (CEOS).
The GPM Core Observatory is scheduled for launch in 2013, followed by the launch of the GPM Low-Inclination Observatory in 2014. An overview of the GPM mission status, instrument capabilities, ground validation plans, and anticipated scientific and societal benefits will be presented.
NASA Astrophysics Data System (ADS)
Chen, S.; Qi, Y.; Hu, B.; Hu, J.; Hong, Y.
2015-12-01
The Global Precipitation Measurement (GPM) mission is composed of an international network of satellites that provide the next-generation global observations of rain and snow. Integrated Multi-satellitE Retrievals for GPM (IMERG) is the state-of-the-art precipitation product, with a high spatio-temporal resolution of 0.1°/30 min. IMERG unifies precipitation measurements from a constellation of research and operational satellites with the core sensors, the dual-frequency precipitation radar (DPR) and microwave imager (GMI), on board a "Core" satellite. Additionally, IMERG blends the advantages of the currently most popular satellite-based quantitative precipitation estimation (QPE) algorithms, i.e., the TRMM Multi-satellite Precipitation Analysis (TMPA), the Climate Prediction Center morphing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). The real-time and post-real-time IMERG products are now available online at https://stormpps.gsfc.nasa.gov/storm. In this study, the final-run post-real-time IMERG is evaluated against all-weather manual gauge observations over CONUS from June 2014 through May 2015. Relative Bias (RB), Root-Mean-Squared Error (RMSE), Correlation Coefficient (CC), Probability Of Detection (POD), False Alarm Ratio (FAR), and Critical Success Index (CSI) are used to quantify the performance of IMERG. The performance of IMERG in estimating snowfall is highlighted in the study. This timely evaluation with all-weather gauge observations is expected to offer insights into the performance of IMERG and thus provide useful feedback to the algorithm developers as well as the GPM data users.
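The six statistics listed above are standard verification measures; as a hedged illustration (the variable names, the 0.1 mm/h rain/no-rain threshold, and the sample data are assumptions for the sketch, not values from the study), they can be computed from paired satellite/gauge samples as follows:

```python
# Sketch of the six verification statistics named in the abstract, computed
# from paired satellite (sat) and gauge (obs) rainfall samples.
import math

def verification_stats(sat, obs, thresh=0.1):
    """thresh is an assumed rain/no-rain cutoff (e.g., mm/h)."""
    n = len(sat)
    mean_sat, mean_obs = sum(sat) / n, sum(obs) / n
    rb = (sum(sat) - sum(obs)) / sum(obs)                    # Relative Bias
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sat, obs)) / n)
    cov = sum((s - mean_sat) * (o - mean_obs) for s, o in zip(sat, obs))
    sd_s = math.sqrt(sum((s - mean_sat) ** 2 for s in sat))
    sd_o = math.sqrt(sum((o - mean_obs) ** 2 for o in obs))
    cc = cov / (sd_s * sd_o)                                 # Correlation Coefficient
    hits = sum(1 for s, o in zip(sat, obs) if s >= thresh and o >= thresh)
    misses = sum(1 for s, o in zip(sat, obs) if s < thresh and o >= thresh)
    false_alarms = sum(1 for s, o in zip(sat, obs) if s >= thresh and o < thresh)
    pod = hits / (hits + misses)                             # Probability Of Detection
    far = false_alarms / (hits + false_alarms)               # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)              # Critical Success Index
    return {"RB": rb, "RMSE": rmse, "CC": cc, "POD": pod, "FAR": far, "CSI": csi}
```

RB, RMSE, and CC measure quantitative agreement of the rain amounts, while POD, FAR, and CSI score detection of rain occurrence against the threshold.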
Impact of Active Climate Control Seats on Energy Use, Fuel Use, and CO2 Emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreutzer, Cory J; Rugh, John P; Titov, Eugene V
A project was developed through collaboration between Gentherm and NREL to determine the impact of climate control seats for light-duty vehicles in the United States. The project used a combination of experimentation and analysis, with experimental results providing critical input to the analysis process. First, outdoor stationary vehicle testing was performed at NREL's facility in Golden, CO, using multiple occupants. Two pre-production Ford Focus electric vehicles were used for testing: one containing a standard inactive seat and the second containing a Gentherm climate control seat. Multiple maximum cool-down and steady-state cooling tests were performed in late-summer conditions. The two vehicles were used to determine the increase in cabin temperature when using the climate control seat in comparison to the baseline vehicle cabin temperature with a standard seat at the equivalent occupant whole-body sensation. The experiments estimated that, on average, the climate control seats allowed for a 2.61 degrees Celsius increase in vehicle cabin temperature at equivalent occupant body sensation compared to the baseline vehicle. The increased cabin air temperature along with the measured seat energy usage were then used as inputs to the national analysis process. The national analysis process was constructed from full vehicle cabin, HVAC, and propulsion models previously developed by NREL. In addition, three representative vehicle platforms, vehicle usage patterns, and vehicle-registration-weighted environmental data were integrated into the analysis process. Both the baseline vehicle and the vehicle with climate control seats were simulated, using the experimentally determined cabin temperature offset of 2.61 degrees Celsius and added seat energy as inputs to the climate control seat vehicle model. The U.S. composite annual fuel use savings for the climate control seats over the baseline A/C system was determined to be 5.1 gallons of gasoline per year per vehicle, corresponding to 4.0 grams of CO2/mile savings. Finally, the potential impact of 100 percent adoption of climate control seats on U.S. light-duty fleet A/C fuel use was calculated to be 1.3 billion gallons of gasoline annually, with a corresponding CO2 emissions reduction of 12.7 million tons. Direct comparison of the impact of the CCS to the ventilated seat off-cycle credit was not possible because the NREL analysis calculated a combined car/truck savings and the baseline A/C CO2 emissions were higher than EPA's. To enable comparison, the CCS national A/C CO2 emissions were split into car/truck components and the ventilated seat credit was scaled up. The split CO2 emissions savings due to the CCS were 3.5 g/mi for a car and 4.4 g/mi for a truck. The CCS saved an additional 2.0 g/mi and 2.5 g/mi over the adjusted ventilated seat credit for a car and truck, respectively.
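As a rough sanity check on the fleet-level figures above (a sketch assuming EPA's nominal 8,887 g CO2 per gallon of gasoline and US short tons; only the 1.3-billion-gallon figure comes from the text):

```python
# Back-of-envelope check: annual fleet fuel savings -> CO2 savings.
# Assumption: EPA's nominal emission factor of 8,887 g CO2 per gallon of gasoline.
G_CO2_PER_GALLON = 8887.0         # g CO2 per gallon gasoline (assumed EPA factor)
GRAMS_PER_SHORT_TON = 907_184.74  # grams per US short ton

fleet_gallons_saved = 1.3e9       # annual fleet A/C fuel savings, from the text
co2_short_tons = fleet_gallons_saved * G_CO2_PER_GALLON / GRAMS_PER_SHORT_TON
# yields roughly 12.7 million short tons, consistent with the reduction quoted above
```

The arithmetic closes to within a percent of the 12.7-million-ton reduction reported, suggesting the study's conversion uses a factor close to the EPA value.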
Low Ozone over Europe Doesn't Mean the Sky Is Falling, It's Actually Rising
NASA Technical Reports Server (NTRS)
Strahan, Susan; Newman, Paul; Steenrod, Stephen
2016-01-01
Data Sources: NASA Aura Microwave Limb Sounder (MLS) (O3 profiles and columns), NASA Global Modeling Initiative (GMI) Chemistry and Transport Model (calculated O3 depletion), and MERRA Tropopause Heights. Technical Description of Figures: The left graphics show MLS northern hemisphere stratospheric column ozone on Feb. 1, 2016. Very low columns are seen over the UK and Europe (<225 DU, inside dashed circle). The lower graphic shows the GMI-calculated O3 depletion. It is very small, suggesting the low O3 does not indicate significant depletion. The right graphics show how the high tropopause height in this region explains the observed low ozone. The lower panel shows that the high tropopause on Feb. 1 lifts the O3 profile compared to a typical profile found earlier in winter. This motion lifts the profile to lower pressures, thus reducing the total column. The GMI model shows only 4 Dobson Units (DU) of O3 depletion even though the column is more than 100 DU lower than one month earlier. Scientific significance and societal relevance: To quantitatively understand anthropogenic impacts on the stratospheric ozone layer, we must be able to distinguish between low ozone caused by ozone-depleting substances and that caused by natural dynamical variability in the atmosphere. Observations and realistic simulations of atmospheric composition are both required in order to separate natural and anthropogenic ozone variability.
2009-07-31
CAPE CANAVERAL, Fla. – NASA Administrator Charles Bolden (left) and Japan Aerospace Exploration Agency, or JAXA, President Keiji Tachikawa sign an agreement defining the terms of cooperation between the agencies on the Global Precipitation Measurement, or GPM, mission. The ceremony took place July 30 at the Kennedy Space Center Visitor Complex, Fla. Through the agreement, NASA is responsible for the GPM core observatory spacecraft bus, the GPM Microwave Imager, or GMI, carried by it, and a second GMI to be flown on a partner-provided Low-Inclination Observatory. JAXA will supply the Dual-frequency Precipitation Radar for the core observatory, an H-IIA rocket for the core observatory's launch in July 2013, and data from a conical-scanning microwave imager on the upcoming Global Change Observation Mission satellite. Photo credit: NASA/Jack Pfaller
2009-07-31
CAPE CANAVERAL, Fla. – NASA Administrator Charles Bolden (left) and Japan Aerospace Exploration Agency, or JAXA, President Keiji Tachikawa pose for photographers after signing an agreement defining the terms of cooperation between NASA and JAXA on the Global Precipitation Measurement, or GPM, mission. The ceremony took place July 30 at the Kennedy Space Center Visitor Complex, Fla. Through the agreement, NASA is responsible for the GPM core observatory spacecraft bus, the GPM Microwave Imager, or GMI, carried by it, and a second GMI to be flown on a partner-provided Low-Inclination Observatory. JAXA will supply the Dual-frequency Precipitation Radar for the core observatory, an H-IIA rocket for the core observatory's launch in July 2013, and data from a conical-scanning microwave imager on the upcoming Global Change Observation Mission satellite. Photo credit: NASA/Jack Pfaller
The 20-22 January 2007 Snow Events over Canada: Microphysical Properties
NASA Technical Reports Server (NTRS)
Tao, W.K.; Shi, J.J.; Matsui, T.; Hao, A.; Lang, S.; Peters-Lidard, C.; Skofronick-Jackson, G.; Petersen, W.; Cifelli, R.; Rutledge, S.
2009-01-01
One of the grand challenges of the Global Precipitation Measurement (GPM) mission is to improve precipitation measurements in mid- and high-latitudes during cold seasons through the use of high-frequency passive microwave radiometry. Toward this end, the Weather Research and Forecasting (WRF) model with the Goddard microphysics scheme is coupled with a Satellite Data Simulation Unit (WRF-SDSU) that has been developed to facilitate over-land snowfall retrieval algorithms by providing a virtual cloud library and microwave brightness temperature (Tb) measurements consistent with the GPM Microwave Imager (GMI). This study tested the Goddard cloud microphysics scheme in WRF for snowstorm events (January 20-22, 2007) that took place over the Canadian CloudSat/CALIPSO Validation Project (C3VP) ground site (Centre for Atmospheric Research Experiments - CARE) in Ontario, Canada. In this paper, the performance of the Goddard cloud microphysics scheme with both the 2ice (ice and snow) and 3ice (ice, snow and graupel) configurations, as well as other WRF microphysics schemes, is presented. The results are compared with data from the Environment Canada (EC) King Radar, an operational C-band radar located near the CARE site. In addition, the WRF model output is used to drive the Goddard SDSU to calculate radiances and backscattering signals consistent with direct satellite observations for evaluating the model results.
NASA Astrophysics Data System (ADS)
Gong, Jie; Wu, Dong L.
2017-02-01
Scattering differences induced by frozen particle microphysical properties are investigated, using the vertically (V) and horizontally (H) polarized radiances from the Global Precipitation Measurement (GPM) Microwave Imager (GMI) 89 and 166 GHz channels. It is the first study of frozen particle microphysical properties on a global scale that uses the dual-frequency microwave polarimetric signals. From the ice cloud scenes identified by the 183.3 ± 3 GHz channel brightness temperature (TB), we find that the scattering by frozen particles is highly polarized, with V-H polarimetric differences (PDs) being positive throughout the tropics and the winter-hemisphere mid-latitude jet regions, including PDs from the GMI 89 and 166 GHz TBs, as well as the PD at 640 GHz from the ER-2 Compact Scanning Submillimeter-wave Imaging Radiometer (CoSSIR) during the TC4 campaign. Large polarization occurs predominantly near convective outflow regions (i.e., anvils or stratiform precipitation), while the polarization signal is small inside deep convective cores as well as in remote cirrus regions. Neglecting the polarimetric signal could easily result in errors as large as 30 % in ice water path retrievals. There is a universal bell curve in the PD-TB(V) relationship, where the PD amplitude peaks at ~10 K for all three channels in the tropics and increases slightly with latitude (2-4 K). Moreover, the 166 GHz PD tends to increase in the case where a melting layer lies beneath the frozen particles aloft, while the 89 GHz PD is less sensitive than the 166 GHz PD to the melting layer. This property creates a unique PD feature for the identification of the melting layer and stratiform rain with passive sensors. Horizontally oriented non-spherical frozen particles are thought to produce the observed PD because of different ice scattering properties in the V and H polarizations. On the other hand, turbulent mixing within deep convective cores inevitably promotes the random orientation of these particles, a mechanism that works effectively in reducing the PD. The current GMI polarimetric measurements themselves cannot fully disentangle the possible mechanisms.
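The V-H polarimetric difference at the heart of this analysis is simple to form; a minimal sketch (the function names, per-pixel list layout, and the 10 K flagging threshold are illustrative assumptions, not the authors' code) might look like:

```python
# Hedged sketch of the V-H polarimetric difference (PD) described above.
def polarimetric_difference(tb_v, tb_h):
    """PD = TB(V) - TB(H), in kelvin; positive PD suggests horizontally
    oriented non-spherical frozen particles."""
    return [v - h for v, h in zip(tb_v, tb_h)]

def strongly_polarized(pd, threshold=10.0):
    """Flag pixels with PD at or above the ~10 K bell-curve peak reported
    for the tropics (the threshold here is an illustrative choice)."""
    return [p >= threshold for p in pd]
```

In this picture, large PD marks anvil/stratiform outflow, while near-zero PD inside deep convective cores reflects randomly oriented particles.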
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dongliang
2017-01-01
Scattering differences induced by frozen particle microphysical properties are investigated, using the vertically (V) and horizontally (H) polarized radiances from the Global Precipitation Measurement (GPM) Microwave Imager (GMI) 89 and 166 GHz channels. It is the first study of global frozen particle microphysical properties that uses the dual-frequency microwave polarimetric signals. From the ice cloud scenes identified by the 183.3 ± 3 GHz channel brightness temperature (TB), we find that the scattering by frozen particles is highly polarized, with V-H polarimetric differences (PD) being positive throughout the tropics and the winter-hemisphere mid-latitude jet regions, including PDs from the GMI 89 and 166 GHz TBs, as well as the PD at 640 GHz from the ER-2 Compact Scanning Submillimeter-wave Imaging Radiometer (CoSSIR) during the TC4 campaign. Large polarization occurs predominantly near convective outflow regions (i.e., anvils or stratiform precipitation), while the polarization signal is small inside deep convective cores as well as in remote cirrus regions. Neglecting the polarimetric signal could result in errors as large as 30% in ice water path retrievals. There is a universal bell curve in the PD-TB relationship, where the PD amplitude peaks at approximately 10 K for all three channels in the tropics and increases slightly with latitude. Moreover, the 166 GHz PD tends to increase in the case where a melting layer lies beneath the frozen particles aloft, while the 89 GHz PD is less sensitive than the 166 GHz PD to the melting layer. This property creates a unique PD feature for the identification of the melting layer and stratiform rain with passive sensors. Horizontally oriented non-spherical frozen particles are thought to produce the observed PD because of different ice scattering properties in the V and H polarizations. On the other hand, changes in ice microphysical habits or orientation due to turbulent mixing can also lead to a reduced PD in deep convective cores. The current GMI polarimetric measurements themselves cannot fully disentangle the possible mechanisms.
Leelahavanichkul, Asada; Pongpirul, Krit; Thongbor, Nisa; Worasilchai, Navaporn; Petphuak, Kwanta; Thongsawang, Bussakorn; Towannang, Piyaporn; Lorvinitnun, Pichet; Sukhontasing, Kanya; Katavetin, Pisut; Praditpornsilpa, Kearkiat; Eiam-Ong, Somchai; Chindamporn, Ariya; Kanjanabuch, Talerngsak
2016-01-01
♦ Aseptic, sheet-like foreign bodies observed inside Tenckhoff (TK) catheter lumens (referred to as "black particles") are, on gross morphology, hardly distinguishable from fungal colonization because these contaminants adhere tightly to the catheter. Detection of fungal cell wall components using (1→3)-β-d-glucan (BG) and galactomannan index (GMI) might be an alternative method for differentiating the particles. ♦ Foreign particles retrieved from TK catheters in 19 peritoneal dialysis patients were examined microscopically and cultured for fungi and bacteria. Simultaneously, a Fungitell test (Associates of Cape Cod, Falmouth, MA, USA) and a Platelia Aspergillus ELISA assay (Bio-Rad Laboratories, Marnes-La-Coquette, France) were used to test the spent dialysate for BG and GMI respectively. ♦ Of the 19 patients, 9 had aseptic black particles and 10 had fungal particles in their tubing. The fungal particles looked grainy, were tightly bound to the catheter, and appeared more "colorful" than the black particles, which looked sheet-like and could easily be removed by milking the tubing. Compared with effluent from patients having aseptic particles, effluent from patients with fungal particles had significantly higher levels of BG (501 ± 70 pg/mL vs. 46 ± 10 pg/mL) and GMI (10.98 ± 2.17 vs. 0.25 ± 0.05). Most of the fungi that formed colonies inside the catheter lumen were molds not usually found in clinical practice, but likely from water or soil, suggesting environmental contamination. Interestingly, in all 10 patients with fungal colonization, visualization of black particles preceded a peritonitis episode and TK catheter removal by approximately 1-3 weeks; in patients with aseptic particles, a 17-week onset to peritonitis was observed. ♦ In all patients with particle-coated peritoneal dialysis tubing, spent dialysate should be screened for BG and GMI. 
Manipulation of the TK catheter by squeezing, hard flushing, or even brushing to dislodge black particles should be avoided. Replacement of the TK catheter should be suspended until a cause for the particles is determined. Copyright © 2016 International Society for Peritoneal Dialysis.
Tsai, Yu-Shuen; Aguan, Kripamoy; Pal, Nikhil R.; Chung, I-Fang
2011-01-01
Informative genes from microarray data can be used to construct prediction models and investigate biological mechanisms. Differentially expressed genes, the main targets of most gene selection methods, can be classified as single- and multiple-class specific signature genes. Here, we present a novel gene selection algorithm based on a Group Marker Index (GMI), which is intuitive, of low computational complexity, and efficient in identification of both types of genes. Most gene selection methods identify only single-class specific signature genes and cannot easily identify multiple-class specific signature genes. Our algorithm can detect de novo certain conditions of multiple-class specificity of a gene and makes use of a novel non-parametric indicator to assess the discrimination ability between classes. Our method is effective even when the sample size is small, as well as when the class sizes are significantly different. To compare effectiveness and robustness, we formulate an intuitive template-based method and use four well-known datasets. We demonstrate that our algorithm outperforms the template-based method in difficult cases with unbalanced distributions. Moreover, the multiple-class specific genes are good biomarkers and play important roles in biological pathways. Our literature survey supports that the proposed method identifies unique multiple-class specific marker genes (not reported earlier to be related to cancer) in the Central Nervous System data. It also discovers unique biomarkers indicating the intrinsic difference between subtypes of lung cancer. We also associate the pathway information with the multiple-class specific signature genes and cross-reference it to published studies. We find that the identified genes participate in pathways directly involved in cancer development in the leukemia data. Our method gives a promising way to find genes that can be involved in pathways of multiple diseases and hence opens up the possibility of using an existing drug on other diseases, as well as designing a single drug for multiple diseases. PMID:21909426
NASA Astrophysics Data System (ADS)
Zhenqing, L.; Sheng, C.; Chaoying, H.
2017-12-01
The core satellite of the Global Precipitation Measurement (GPM) mission was launched on 27 February 2014 with two core sensors, the dual-frequency precipitation radar (DPR) and microwave imager (GMI). The algorithm of Integrated Multi-satellitE Retrievals for GPM (IMERG) blends the advantages of the currently most popular satellite-based quantitative precipitation estimation (QPE) algorithms, i.e., the TRMM Multi-satellite Precipitation Analysis (TMPA), the Climate Prediction Center morphing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). IMERG is therefore deemed the state-of-the-art precipitation product, with a high spatio-temporal resolution of 0.1°/30 min. The real-time and post-real-time IMERG products are now available online at https://stormpps.gsfc.nasa.gov/storm. Early studies assessing IMERG against gauge observations or analysis products show that the current-version GPM Day-1 product IMERG demonstrates promising performance over China [1], Europe [2], and the United States [3]. However, few studies have examined IMERG's potential for hydrologic utility. In this study, the real-time and final-run post-real-time IMERG products are hydrologically evaluated against a gauge analysis product as reference over the Nanliu River basin (Fig. 1) in Southern China from March 2014 to February 2017 using the Xinanjiang model. The statistical metrics Relative Bias (RB), Root-Mean-Squared Error (RMSE), Correlation Coefficient (CC), Probability Of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), and Nash-Sutcliffe efficiency (NSCE) will be used to compare the stream flow simulated with IMERG to the observed stream flow. This timely hydrologic evaluation is expected to offer insights into IMERG's potential for hydrologic utility and thus provide useful feedback to the IMERG algorithm developers and the hydrologic users.
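Among the metrics listed for comparing simulated and observed streamflow, the Nash-Sutcliffe efficiency is the one specific to hydrologic skill; a minimal sketch (function and argument names are illustrative, not from the study):

```python
# Hedged sketch of the Nash-Sutcliffe efficiency (NSCE) for comparing
# simulated and observed streamflow series: 1.0 is a perfect fit, and
# values <= 0 mean the simulation is no better than the observed mean.
def nash_sutcliffe(sim, obs):
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))  # model error energy
    var = sum((o - mean_obs) ** 2 for o in obs)        # observed variance energy
    return 1.0 - err / var
```

Because NSCE normalizes model error by the variance of the observations, it allows streamflow skill to be compared across basins with very different flow magnitudes.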
NASA Astrophysics Data System (ADS)
Derin, Y.; Anagnostou, E. N.; Anagnostou, M.; Kalogiros, J. A.; Casella, D.; Marra, A. C.; Panegrossi, G.; Sanò, P.
2017-12-01
Difficulties in representing high rainfall variability over mountainous areas using ground-based sensors make satellite remote sensing techniques attractive for hydrologic studies over these regions. Even though satellite-based rainfall measurements are quasi-global and available at high spatial resolution, these products have uncertainties that necessitate the use of error characterization and correction procedures based upon more accurate in situ rainfall measurements. Such measurements can be obtained from field campaigns facilitated by research-quality sensors such as locally deployed weather radar and in situ weather stations. This study uses such high-quality, high-resolution rainfall estimates derived from dual-polarization X-band radar (XPOL) observations from three field experiments over the Mid-Atlantic US East Coast (NASA IPHEX experiment), the Olympic Peninsula of Washington State (NASA OLYMPEX experiment), and the Mediterranean, to characterize the error characteristics of multiple passive microwave (PMW) sensor retrievals. The study first conducts an independent error analysis of the XPOL radar reference rainfall fields against the in situ rain gauge and disdrometer observations available from the field experiments. The study then evaluates different PMW precipitation products against the XPOL datasets as ground reference (GR) over the three aforementioned complex-terrain study areas. We extracted matchups of PMW/GR rainfall based on a matching methodology that identifies GR volume scans coincident with PMW field-of-view sampling volumes, and scaled GR parameters to the satellite products' nominal spatial resolution.
The following PMW precipitation retrieval algorithms are evaluated: the NASA Goddard PROFiling algorithm (GPROF), standard and climatology-based products (V 3, 4 and 5) from four PMW sensors (SSMIS, MHS, GMI, and AMSR2), and the precipitation products based on the algorithms Cloud Dynamics and Radiation Database (CDRD) for SSMIS and Passive microwave Neural network Precipitation Retrieval (PNPR) for AMSU/MHS, developed at ISAC-CNR within the EUMETSAT H-SAF. We will present error analysis results for the different PMW rainfall retrievals and discuss dependences on precipitation type, elevation and precipitation microphysics (derived from XPOL).
Borrow, Ray; Lee, Jin-Soo; Vázquez, Julio A; Enwere, Godwin; Taha, Muhamed-Kheir; Kamiya, Hajime; Kim, Hwang Min; Jo, Dae Sun
2016-11-21
The Global Meningococcal Initiative (GMI) is a global expert group that includes scientists, clinicians, and public health officials with a wide range of specialties. The purpose of the Initiative is to promote the global prevention of meningococcal disease (MD) through education, research, and cooperation. The first Asia-Pacific regional meeting was held in November 2014. The GMI reviewed MD epidemiology, surveillance, prevention strategies, and outbreak control practices from participating countries in the Asia-Pacific region. Although, in general, MD is underreported in this region, serogroup A disease is most prominent in low-income countries such as India and the Philippines, while Taiwan, Japan, and Korea reported disease from serogroups C, W, and Y. China has a mixed epidemiology of serogroups A, B, C, and W. Perspectives from countries outside the region were also presented to provide insight into lessons learnt. Based on the available data and meeting discussions, a number of challenges and data gaps were identified and, as a consequence, several recommendations were formulated: strengthen surveillance; improve diagnosis, typing and case reporting; standardize case definitions; develop guidelines for outbreak management; and promote awareness of MD among healthcare professionals, public health officials, and the general public. Copyright © 2016. Published by Elsevier Ltd.
Borrow, Ray; Caugant, Dominique A; Ceyhan, Mehmet; Christensen, Hannah; Dinleyici, Ener Cagri; Findlow, Jamie; Glennie, Linda; Von Gottberg, Anne; Kechrid, Amel; Vázquez Moreno, Julio; Razki, Aziza; Smith, Vincent; Taha, Muhamed-Kheir; Tali-Maamar, Hassiba; Zerouali, Khalid
2017-07-01
The Global Meningococcal Initiative (GMI) has recently considered current issues in Middle Eastern and African countries, and produced two recommendations: (i) that vaccination of attendees should be considered for some types of mass-gathering events, as some countries mandate for the Hajj, and (ii) that vaccination of people with human immunodeficiency virus should be used routinely, because of increased meningococcal disease (MD) risk. Differences exist between Middle Eastern and African countries regarding case and syndrome definitions, surveillance, and epidemiologic data gaps. Sentinel surveillance provides an overview of trends and of the prevalence of different capsular groups, supporting vaccine selection and planning, whereas cost-effectiveness decisions require comprehensive disease burden data, ideally counting every case. Surveillance data showed the importance of serogroup B MD in North Africa and serogroup W expansion in Turkey and South Africa. The success of MenAfriVac® in the African "meningitis belt" was reviewed; the GMI believes similar benefits may follow the development, by 2022, of a low-cost meningococcal pentavalent vaccine, currently in a phase 1 clinical trial. The importance of carriage and herd protection for controlling invasive MD and the importance of advocacy and awareness campaigns were also highlighted. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Global Precipitation Measurement (GPM) Mission Development Status
NASA Technical Reports Server (NTRS)
Azarbarzin, Ardeshir Art
2011-01-01
Mission Objective: (1) Improve scientific understanding of the global water cycle and fresh water availability. (2) Improve the accuracy of precipitation forecasts. (3) Provide frequent and complete sampling of the Earth's precipitation. Mission Description (Class B, Category I): (1) Constellation of spacecraft provides global precipitation measurement coverage. (2) NASA/JAXA Core spacecraft provides a microwave radiometer (GMI) and dual-frequency precipitation radar (DPR) to cross-calibrate the entire constellation. (3) 65 deg inclination, 400 km altitude. (4) Launch July 2013 on HII-A. (5) 3-year mission (5-year propellant). (6) Partner constellation spacecraft.
Production of NOx by Lightning and its Effects on Atmospheric Chemistry
NASA Technical Reports Server (NTRS)
Pickering, Kenneth E.
2009-01-01
Production of NO(x) by lightning remains the NO(x) source with the greatest uncertainty. Current estimates of the global source strength range over a factor of four (from 2 to 8 TgN/year). Ongoing efforts to reduce this uncertainty through field programs, cloud-resolved modeling, global modeling, and satellite data analysis will be described in this seminar. Representation of the lightning source in global or regional chemical transport models requires three types of information: the distribution of lightning flashes as a function of time and space, the production of NO(x) per flash, and the effective vertical distribution of the lightning-injected NO(x). Methods of specifying these items in a model will be discussed. For example, the current method of specifying flash rates in NASA's Global Modeling Initiative (GMI) chemical transport model will be discussed, as well as work underway in developing algorithms for use in the regional models CMAQ and WRF-Chem. A number of methods have been employed to estimate either production per lightning flash or the production per unit flash length. Such estimates derived from cloud-resolved chemistry simulations and from satellite NO2 retrievals will be presented as well as the methodologies employed. Cloud-resolved model output has also been used in developing vertical profiles of lightning NO(x) for use in global models. Effects of lightning NO(x) on O3 and HO(x) distributions will be illustrated regionally and globally.
On-road heavy-duty diesel particulate matter emissions modeled using chassis dynamometer data.
Kear, Tom; Niemeier, D A
2006-12-15
This study presents a model, derived from chassis dynamometer test data, for factors (operational correction factors, or OCFs) that correct (g/mi) heavy-duty diesel particle emission rates measured on standard test cycles for real-world conditions. Using a random effects mixed regression model with data from 531 tests of 34 heavy-duty vehicles from the Coordinating Research Council's E55/E59 research project, we specify a model with covariates that characterize high power transient driving, time spent idling, and average speed. Gram per mile particle emissions rates were negatively correlated with high power transient driving, average speed, and time idling. The new model is capable of predicting relative changes in g/mi on-road heavy-duty diesel particle emission rates for real-world driving conditions that are not reflected in the driving cycles used to test heavy-duty vehicles.
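The regression structure described in this abstract can be illustrated on synthetic data. The covariate names and coefficient values below are invented for illustration, and ordinary least squares stands in for the authors' random-effects mixed model; the only property carried over from the abstract is the sign of the three correlations (all negative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical covariates mirroring those named in the abstract:
high_power = rng.uniform(0, 1, n)    # fraction of high-power transient driving
idle_frac  = rng.uniform(0, 0.5, n)  # fraction of time spent idling
avg_speed  = rng.uniform(10, 60, n)  # average speed (mph)

# Synthetic g/mi operational correction factor: all three covariates enter
# negatively, matching the sign of the correlations reported in the abstract.
ocf = (1.0 - 0.4 * high_power - 0.3 * idle_frac - 0.005 * avg_speed
       + rng.normal(0, 0.01, n))

# OLS fit (a stand-in for the paper's random-effects mixed model):
X = np.column_stack([np.ones(n), high_power, idle_frac, avg_speed])
beta, *_ = np.linalg.lstsq(X, ocf, rcond=None)
print(beta)  # intercept, then three negative slopes
```

With a strong signal and small noise, the fit recovers the assumed negative coefficients; the point is only to show the model shape, not the E55/E59 results.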
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rugh, John P; Kekelia, Bidzina; Kreutzer, Cory J
The U.S. uses 7.6 billion gallons of fuel per year for vehicle air conditioning (A/C), equivalent to 5.7 percent of the total national light-duty vehicle (LDV) fuel use. This equates to 30 gallons/year per vehicle, or 23.5 grams (g) of carbon dioxide (CO2) per mile, for an average U.S. vehicle. A/C is a significant contribution to national fuel use; therefore, technologies that reduce A/C loads may reduce operational costs, A/C fuel use, and CO2 emissions. Since A/C is not operated during standard EPA fuel economy testing protocols, EPA provides off-cycle credits to encourage OEMs to implement advanced A/C technologies that reduce fuel use in the real world. NREL researchers assessed thermal/solar off-cycle credits available in the U.S. Environmental Protection Agency's (EPA's) Final Rule for Model Year 2017 and Later Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel Economy. Credits include glazings, solar reflective paint, and passive and active cabin ventilation. Implementing solar control glass reduced CO2 emissions by 2.0 g/mi, and solar reflective paint resulted in a reduction of 0.8 g/mi. Active and passive ventilation strategies only reduced emissions by 0.1 and 0.2 g/mi, respectively. The national-level analysis process is powerful and general; it can be used to determine the impact of a wide range of new vehicle thermal technologies on fuel use, EV range, and CO2 emissions.
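The per-vehicle figures quoted above can be cross-checked with simple arithmetic. The two constants below are assumptions not stated in the abstract: EPA's commonly cited 8,887 g of CO2 per gallon of gasoline, and roughly 11,400 miles driven per vehicle per year:

```python
# Assumed constants (not from the abstract):
G_CO2_PER_GALLON = 8887.0     # EPA figure for gasoline, g CO2/gallon
MILES_PER_YEAR = 11400.0      # rough U.S. average annual mileage

AC_GALLONS_PER_YEAR = 30.0    # per-vehicle A/C fuel use, from the abstract

g_per_mile = AC_GALLONS_PER_YEAR * G_CO2_PER_GALLON / MILES_PER_YEAR
print(round(g_per_mile, 1))   # close to the 23.5 g/mi quoted in the abstract
```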
Moran-Gilad, Jacob; Sintchenko, Vitali; Pedersen, Susanne Karlsmose; Wolfgang, William J; Pettengill, James; Strain, Errol; Hendriksen, Rene S
2015-04-03
The advent of next-generation sequencing (NGS) has revolutionised public health microbiology. Given the potential impact of NGS, it is paramount to ensure standardisation of 'wet' laboratory and bioinformatic protocols and promote comparability of methods employed by different laboratories and their outputs. Therefore, one of the ambitious goals of the Global Microbial Identifier (GMI) initiative (http://www.globalmicrobialidentifier.org/) has been to establish a mechanism for inter-laboratory NGS proficiency testing (PT). This report presents findings from the survey recently conducted by Working Group 4 among GMI members in order to ascertain NGS end-use requirements and attitudes towards NGS PT. The survey identified the high professional diversity of laboratories engaged in NGS-based public health projects and the wide range of capabilities within institutions, at a notable range of costs. The priority pathogens reported by respondents reflected the key drivers for NGS use (high burden disease and 'high profile' pathogens). The performance of and participation in PT was perceived as important by most respondents. The wide range of sequencing and bioinformatics practices reported by end-users highlights the importance of standardisation and harmonisation of NGS in public health and underpins the use of PT as a means to assuring quality. The findings of this survey will guide the design of the GMI PT program in relation to the spectrum of pathogens included, testing frequency and volume as well as technical requirements. The PT program for external quality assurance will evolve and inform the introduction of NGS into clinical and public health microbiology practice in the post-genomic era.
ENSO Precipitation Variations as Seen by GPM and TRMM Radar and Passive Microwave Observations
NASA Astrophysics Data System (ADS)
Adler, R. F.; Wang, J. J.
2017-12-01
Tropical precipitation variations related to ENSO are the largest-scale such variations, both spatially and in magnitude, and are also the main driver of surface temperature-surface rainfall relationships on the inter-annual scale. GPM (and TRMM before it) provides a unique capability to examine these relations with both passive and active microwave approaches. Documenting the phase and magnitude of these relationships is important for understanding these large-scale processes and for validating climate models. However, as past research by the authors has shown, the results have differed between passive and radar retrievals. In this study we re-examine these relations with the new GPM Version 5 products, focusing on the 2015-2016 El Nino event. The recent El Nino peaked in Dec. 2015 through Feb. 2016 with the usual patterns of precipitation anomalies across the Tropics, as evident in both the GPM GMI and the Near Surface (NS) DPR (single frequency) retrievals. Integrating both the rainfall anomalies and the SST anomalies over the entire tropical ocean area (25N-25S) and comparing how they vary as a function of time on a monthly scale during the GPM era (2014-2017), the radar-based results contrast with the GMI-based (and GPCP) results. The passive microwave data (GMI and GPCP) indicate a slope of 17%/C for the precipitation variations, while the radar NS indicates about half that (8%/C). This NS slope is somewhat less than calculated before with GPM's V4 data, but is larger than that obtained with TRMM PR data (0%/C) for an earlier period during the TRMM era. Very similar results to the DPR NS calculations are also obtained for rainfall at 2 km and 4 km altitude and for the Combined (DPR + GMI) product. However, at 6 km altitude, although the reflectivity and rainfall magnitudes are much less than at lower altitudes, the slope of the rainfall/SST relation is 17%/C, the same as calculated with the passive microwave data.
The reasons for these differences are explored and lead to conclusions that the radar-based estimates of surface rainfall with GPM have limitations (and are negatively biased) in relatively intense rainfall and this leads to an underestimation of large-scale rainfall under El Nino conditions, where more oceanic rainfall, and more intense rainfall are prevalent.
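The %/C slopes quoted in this abstract are regression slopes of tropical-mean rainfall anomaly (in percent) against tropical-mean SST anomaly. A minimal sketch of that calculation on synthetic monthly data; the 17 %/C value is taken from the abstract, while the anomaly time series themselves are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
months = 48  # GPM-era monthly means, 2014-2017

# Synthetic tropical-mean (25N-25S) SST anomaly in degrees C:
sst_anom = rng.normal(0, 0.3, months)
# Synthetic precipitation anomaly in percent, built with the 17 %/C
# passive-microwave slope from the abstract, plus noise:
precip_anom_pct = 17.0 * sst_anom + rng.normal(0, 1.0, months)

# The reported slope is the linear-fit coefficient of precip vs. SST:
slope = np.polyfit(sst_anom, precip_anom_pct, 1)[0]
print(round(slope, 1))  # recovers roughly 17 %/C
```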
Global Precipitation Measurement (GPM) Safety Inhibit Timeline Tool
NASA Technical Reports Server (NTRS)
Dion, Shirley
2012-01-01
The Global Precipitation Measurement (GPM) Observatory is a joint mission under a partnership between the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). The NASA Goddard Space Flight Center (GSFC) has the lead management responsibility for NASA on GPM. The GPM program will measure precipitation on a global basis with sufficient quality, Earth coverage, and sampling to improve prediction of the Earth's climate, weather, and specific components of the global water cycle. As part of the development process, NASA built the spacecraft in-house at GSFC and provided one instrument (the GPM Microwave Imager (GMI), developed by Ball Aerospace); JAXA provided the launch vehicle (H2-A, by MHI) and one instrument (the Dual-Frequency Precipitation Radar (DPR), developed by NTSpace). Each instrument developer provided a safety assessment, which was incorporated into the NASA GPM Safety Hazard Assessment. The inhibit design was reviewed for hazardous subsystems, which included High Gain Antenna System (HGAS) deployment, solar array deployment, transmitter turn-on, propulsion system release, GMI deployment, and DPR radar turn-on. The safety inhibits for these hazards are controlled by software. GPM developed a "pathfinder" approach for reviewing software that controls the electrical inhibits. This is one of the first GSFC in-house programs that extensively used software controls. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As part of this process a new tool, the "safety inhibit timeline," was created for management of inhibits and their controls during spacecraft buildup and testing during I&T at GSFC and at the range in Japan. In addition to aiding understanding of inhibits and controls during I&T, the tool allows the safety analyst to better communicate to others the changes in inhibit states with each phase of hardware and software testing.
The tool was very useful for communicating compliance with safety requirements especially when working with a foreign partner.
TRMM Data Improvement as Part of the GPM Data Processing
NASA Technical Reports Server (NTRS)
Stocker, Erich F.; Ji, Y.; Kwiatkowski, J.; Kelley, O.; Stout, J.; Woltz, L.
2016-01-01
NASA has a long-standing commitment to the improvement of its mission datasets. Indeed, data reprocessing is always built into the plans, schedule, and budget for a mission's data processing system. However, in addition to this ongoing mission reprocessing, NASA also supports a final reprocessing of all the data for a mission upon its completion (known as Phase F). TRMM Phase F started with the end of the TRMM mission in June of 2015. This last reprocessing has two overall goals: improvement of the TRMM mission data products, and incorporation of the 17+ years of TRMM data into the ongoing NASA/JAXA GPM data processing. The first goal guarantees that the latest algorithms used for precipitation retrievals will also be used in reprocessing the TRMM data. The second goal ensures that as GPM algorithms are improved, the entire TRMM record will always be reprocessed with each GPM reprocessing. In essence, TRMM becomes another of the GPM constellation satellites. This paper will concentrate on presenting the improvements to TMI Level 1 data, including calibration, geolocation, and emissive antenna corrections. It will describe the format changes that will occur and how the TMI Level 1C product will be intercalibrated using GMI as the reference calibration. It will also provide an overview of changes in the precipitation radar products as well as the combined TMI/PR product.
NASA Technical Reports Server (NTRS)
Shi, J. J.; Tao, W.-K.; Matsui, T.; Cifelli, R.; Huo, A.; Lang, S.; Tokay, A.; Peters-Lidard, C.; Jackson, G.; Rutledge, S.;
2009-01-01
One of the grand challenges of the Global Precipitation Measurement (GPM) mission is to improve cold season precipitation measurements in middle and high latitudes through the use of high-frequency passive microwave radiometry. For this, the Weather Research and Forecasting (WRF) model with the Goddard microphysics scheme is coupled with a satellite data simulation unit (WRF-SDSU) that has been developed to facilitate over-land snowfall retrieval algorithms by providing a virtual cloud library and microwave brightness temperature (Tb) measurements consistent with the GPM Microwave Imager (GMI). This study tested the Goddard cloud microphysics scheme in WRF for two snowstorm events, a lake effect and a synoptic event, that occurred between 20 and 22 January 2007 over the Canadian CloudSAT/CALIPSO Validation Project (C3VP) site in Ontario, Canada. The 24h-accumulated snowfall predicted by the WRF model with the Goddard microphysics was comparable to the observed accumulated snowfall by the ground-based radar for both events. The model correctly predicted the onset and ending of both snow events at the CARE site. WRF simulations capture the basic cloud properties as seen by the ground-based radar and satellite (i.e., CloudSAT, AMSU-B) observations as well as the observed cloud streak organization in the lake event. This latter result reveals that WRF was able to capture the cloud macro-structure reasonably well.
NASA Astrophysics Data System (ADS)
Santos, Sílvia; Cardoso, Joana F. M. F.; Carvalho, Célia; Luttikhuizen, Pieternella C.; van der Veer, Henk W.
2011-03-01
Monthly investment in soma and gonads in the bivalve Scrobicularia plana is described for three populations along its distributional range: Minho estuary, Portugal; Westerschelde estuary, The Netherlands and Buvika estuary, Norway. Seasonal cycles in body mass (BMI), somatic mass (SMI) and gonadal mass (GMI) indices were observed for all populations. In Portugal, BMI and SMI peaked in mid-autumn, while in The Netherlands both indices were at their highest in mid-spring. Norway showed a different pattern with two distinct peaks: one in mid-autumn and a second peak in spring. GMI reached maximum values in July in Portugal and Netherlands and in June in Norway. Overall, mean BMI and SMI were lower in Portugal while mean GMI was lower in Norway. The spawning period lasted the whole summer in Portugal, but was shorter (only two months) in The Netherlands and Norway. The reproductive investment in The Netherlands was significantly higher than in Portugal and Norway, with the lowest values being observed in Norway. Differences in annual cycles between populations were attributed to environmental factors, namely temperature and food availability. Temperature seems important in shaping the reproductive pattern with more northern populations showing shorter reproductive periods starting later in the year, and a lower reproductive output. In addition, winter water temperatures can explain the lower mean body and somatic mass values observed in Portugal. Food availability influenced the physiological performance of the species with peaks in somatic mass coinciding with phytoplankton blooms. This relation between physiological performance and environmental factors influences S. plana distribution, densities and even survival, with natural consequences on its commercial importance.
NASA Technical Reports Server (NTRS)
Huang, Lei; Jiang, Jonathan H.; Murray, Lee T.; Damon, Megan R.; Su, Hui; Livesey, Nathaniel J.
2016-01-01
This study evaluates the distribution and variation of carbon monoxide (CO) in the upper troposphere and lower stratosphere (UTLS) during 2004-2012 as simulated by two chemical transport models, using the latest version of Aura Microwave Limb Sounder (MLS) observations. The simulated spatial distributions, temporal variations and vertical transport of CO in the UTLS region are compared with those observed by MLS. We also investigate the impact of surface emissions and deep convection on CO concentrations in the UTLS over different regions, using both model simulations and MLS observations. Global Modeling Initiative (GMI) and GEOS-Chem simulations of UTLS CO both show similar spatial distributions to observations. The global mean CO values simulated by both models agree with MLS observations at 215 and 147 hPa, but are significantly underestimated by more than 40% at 100 hPa. In addition, the models underestimate the peak CO values by up to 70% at 100 hPa, 60% at 147 hPa and 40% at 215 hPa, with GEOS-Chem generally simulating more CO at 100 hPa and less CO at 215 hPa than GMI. The seasonal distributions of CO simulated by both models are in better agreement with MLS in the Southern Hemisphere (SH) than in the Northern Hemisphere (NH), with disagreements between model and observations over enhanced CO regions such as southern Africa. The simulated vertical transport of CO shows better agreement with MLS in the tropics and the SH subtropics than the NH subtropics. We also examine regional variations in the relationships among surface CO emission, convection and UTLS CO concentrations. The two models exhibit emission-convection- CO relationships similar to those observed by MLS over the tropics and some regions with enhanced UTLS CO.
Liu, Ying; Tang, Yuanman; Qin, Xiyun; Yang, Liang; Jiang, Gaofei; Li, Shili; Ding, Wei
2017-01-01
Ralstonia solanacearum, an agent of bacterial wilt, is a highly variable species with a broad host range and wide geographic distribution. As a species complex it has extensive genetic diversity, and it inhabits diverse environments ranging from lowland to highland areas, so more genomes are needed for studying population evolution and environmental adaptation. In this paper, we report the genome sequencing of R. solanacearum strain CQPS-1, isolated from wilted tobacco in Pengshui, Chongqing, China, a highland area with severely acidified soil and continuous cropping of tobacco for more than 20 years. A comparative genomic analysis among different R. solanacearum strains was also performed. The completed genome size of CQPS-1 was 5.89 Mb, comprising the chromosome (3.83 Mb) and the megaplasmid (2.06 Mb). A total of 5229 coding sequences were predicted (the chromosome and megaplasmid encoded 3573 and 1656 genes, respectively). A comparative analysis with eight strains from four phylotypes showed some variation among the strains, e.g., a large set of genes specific to CQPS-1. The type III secretion system gene cluster (hrp gene cluster) was conserved in CQPS-1 compared with the reference strain GMI1000. In addition, most genes encoding core type III effectors were also conserved with GMI1000, but significant variation was found in the gene ripAA: its identity with the GMI1000 allele was 75%, and the hrpII box promoter upstream had significantly mutated. This study provides a potential resource for further understanding of the relationship between variation of pathogenicity factors and adaptation to the host environment. PMID:28620361
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph; Skofronick-Jackson, Gail
2012-01-01
During the middle part of this decade, a wide variety of passive microwave imagers and sounders will be unified in the Global Precipitation Measurement (GPM) mission to provide a common basis for frequent (3 hr), global precipitation monitoring. The ability of these sensors to detect precipitation by discerning it from the non-precipitating background depends upon the channels available and the characteristics of the surface and atmosphere. This study quantifies the minimum detectable precipitation rate and the fraction of precipitation detected for four representative instruments (TMI, GMI, AMSU-A, and AMSU-B) that will be part of the GPM constellation. Observations for these instruments were constructed from equivalent channels on the SSMIS instrument on DMSP satellites F16 and F17 and matched to precipitation data from NOAA's National Mosaic and QPE (NMQ) during 2009 over the contiguous United States. A variational optimal estimation retrieval of non-precipitation surface and atmosphere parameters was used to determine the consistency between the observed brightness temperatures and these parameters, with high cost function values shown to be related to precipitation. The minimum detectable precipitation rate, defined as the lowest rate for which the probability of detection exceeds 50%, and the detected fraction of precipitation are reported for each sensor, surface type (ocean, coast, bare land, snow cover), and precipitation type (rain, mix, snow). The best sensors over ocean and bare land were GMI (0.22 mm/hr minimum threshold and 90% of precipitation detected) and AMSU (0.26 mm/hr minimum threshold and 81% of precipitation detected), respectively. Over coasts (0.74 mm/hr threshold and 12% detected) and snow-covered surfaces (0.44 mm/hr threshold and 23% detected), AMSU again performed best but with much lower detection skill, whereas TMI had no skill over these surfaces. The sounders (particularly over water) benefited from the use of re-analysis data (vs.
climatology) to set the a-priori atmospheric state and all instruments benefit from the use of a conditional snow cover emissivity database over land. It is recommended that real-time sources of these data be used in the operational GPM precipitation algorithms.
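The detection metric defined in this abstract (minimum detectable rate = the lowest rate at which the probability of detection exceeds 50%) can be sketched on synthetic data. The detection model and numbers below are invented, loosely echoing the GMI-over-ocean threshold; only the definition of the metric comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
rate = rng.exponential(1.0, n)  # synthetic true precipitation rates (mm/hr)

# Synthetic detector: probability of detection rises with rate, crossing
# 50% near 0.22 mm/hr (an invented stand-in for a real sensor's behavior).
p_detect = 1.0 / (1.0 + np.exp(-(rate - 0.22) / 0.1))
detected = rng.uniform(size=n) < p_detect

# Probability of detection per rate bin; the minimum detectable rate is
# the lowest bin whose POD exceeds 50%, per the abstract's definition.
bins = np.arange(0.0, 3.0, 0.1)
idx = np.digitize(rate, bins) - 1
pod = np.array([detected[idx == i].mean() if (idx == i).any() else 0.0
                for i in range(len(bins))])
min_detectable = bins[np.argmax(pod > 0.5)]
print(min_detectable)
```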
Borrow, Ray; Alarcón, Pedro; Carlos, Josefina; Caugant, Dominique A; Christensen, Hannah; Debbag, Roberto; De Wals, Philippe; Echániz-Aviles, Gabriela; Findlow, Jamie; Head, Chris; Holt, Daphne; Kamiya, Hajime; Saha, Samir K; Sidorenko, Sergey; Taha, Muhamed-Kheir; Trotter, Caroline; Vázquez Moreno, Julio A; von Gottberg, Anne; Sáfadi, Marco A P
2017-04-01
The 2015 Global Meningococcal Initiative (GMI) meeting discussed the global importance of meningococcal disease (MD) and its continually changing epidemiology. Areas covered: Although recent vaccination programs have been successful in reducing incidence in many countries (e.g. Neisseria meningitidis serogroup [Men]C in Brazil, MenA in the African meningitis belt), new clones have emerged, causing outbreaks (e.g. MenW in South America, MenC in Nigeria and Niger). The importance of herd protection was highlighted, emphasizing the need for high vaccination uptake among those with the highest carriage rates, as was the need for boosters to maintain individual and herd protection following the decline of the immune response after primary immunization. Expert commentary: The GMI Global Recommendations for Meningococcal Disease were updated to include a recommendation to enable access to whole-genome sequencing for surveillance, guidance on strain typing to guide use of subcapsular vaccines, and recognition of the importance of advocacy and awareness campaigns.
NASA Astrophysics Data System (ADS)
Galantowicz, J. F.; Picton, J.; Root, B.
2017-12-01
Passive microwave remote sensing can provide a distinct perspective on flood events by virtue of wide sensor fields of view, frequent observations from multiple satellites, and sensitivity through clouds and vegetation. During Hurricanes Harvey and Irma, we used AMSR2 (Advanced Microwave Scanning Radiometer 2, JAXA) data to map flood extents starting from the first post-storm rain-free sensor passes. Our standard flood mapping algorithm (FloodScan) derives flooded fraction from 22-km microwave data (AMSR2 or NASA's GMI) in near real time and downscales it to 90-m resolution using a database built from topography, hydrology, and Global Surface Water Explorer data and normalized to microwave data footprint shapes. During Harvey and Irma we tested experimental versions of the algorithm designed to map the maximum post-storm flood extent rapidly and made a variety of map products available immediately for use in storm monitoring and response. The maps have several unique features, including spanning the entire storm-affected area and providing multiple post-storm updates as flood water shifted and receded. From the daily maps we derived secondary products such as flood duration, maximum flood extent (Figure 1), and flood depth. In this presentation, we describe flood extent evolution, maximum extent, and local details as detected by the FloodScan algorithm in the wake of Harvey and Irma. We compare FloodScan results to other available flood mapping resources, note observed shortcomings, and describe improvements made in response. We also discuss how best-estimate maps could be updated in near real time by merging FloodScan products with data from other remote sensing systems and hydrological models.
The GPM Common Calibrated Brightness Temperature Product
NASA Technical Reports Server (NTRS)
Stout, John; Berg, Wesley; Huffman, George; Kummerow, Chris; Stocker, Erich
2005-01-01
The Global Precipitation Measurement (GPM) project will provide a core satellite carrying the GPM Microwave Imager (GMI) and will use microwave observations from a constellation of other satellites. Each partner with a satellite in the constellation will have a calibration that meets their own requirements and will decide on the format to archive their brightness temperature (Tb) record in GPM. However, GPM multi-sensor precipitation algorithms need to input intercalibrated Tb's in order to avoid differences among sensors introducing artifacts into the longer term climate record of precipitation. The GPM Common Calibrated Brightness Temperature Product is intended to address this problem by providing intercalibrated Tb data, called "Tc" data, where the "c" stands for common. The precipitation algorithms require a Tc file format that is both generic and flexible enough to accommodate the different passive microwave instruments. The format will provide detailed information on the processing history in order to allow future researchers to have a record of what was done. The format will be simple, including the main items of scan time, latitude, longitude, and Tc. It will also provide spacecraft orientation, spacecraft location, orbit, and instrument scan type (cross-track or conical). Another simplification is to store data in real numbers, avoiding the ambiguity of scaled data. Finally, units and descriptions will be provided in the product. The format is built on the concept of a swath, which is a series of scans that have common geolocation and common scan geometry. Scan geometry includes pixels per scan, sensor orientation, scan type, and incidence angles. The Tc algorithm and data format are being tested using the pre-GPM Precipitation Processing System (PPS) software to generate formats and I/O routines. In the test, data from SSM/I, TMI, AMSR-E, and WindSat are being processed and written as Tc products.
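The swath concept described in this abstract (scans sharing geolocation and scan geometry, Tc stored as real numbers, with processing history carried along) might be represented as a simple container. All field names and shapes here are assumptions for illustration, not the actual Tc product format:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Swath:
    """Hypothetical sketch of one Tc swath: a series of scans with common
    geolocation and scan geometry, Tc stored as real numbers (kelvin)."""
    scan_type: str                 # "conical" or "cross-track"
    pixels_per_scan: int
    scan_time: np.ndarray          # (n_scans,)
    latitude: np.ndarray           # (n_scans, pixels_per_scan)
    longitude: np.ndarray          # (n_scans, pixels_per_scan)
    tc: np.ndarray                 # (n_scans, pixels_per_scan, n_channels)
    history: list = field(default_factory=list)  # processing provenance

# Toy instance: 3 scans, 4 pixels per scan, 2 channels.
n_scans, npix, nchan = 3, 4, 2
swath = Swath("conical", npix,
              scan_time=np.zeros(n_scans),
              latitude=np.zeros((n_scans, npix)),
              longitude=np.zeros((n_scans, npix)),
              tc=np.full((n_scans, npix, nchan), 270.0))
print(swath.tc.shape)
```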
NASA Astrophysics Data System (ADS)
Matsuoka, Satoshi; Tsutsumi, Jun'ya; Kamata, Toshihide; Hasegawa, Tatsuo
2018-04-01
In this work, a high-resolution microscopic gate-modulation imaging (μ-GMI) technique is successfully developed to visualize inhomogeneous charge and electric field distributions in operating organic thin-film transistors (TFTs). We conduct highly sensitive and diffraction-limited gate-modulation sensing for acquiring difference images of semiconducting channels between gate-on and gate-off states that are biased at an alternate frequency of 15 Hz. As a result, we observe unexpectedly inhomogeneous distribution of positive and negative local gate-modulation (GM) signals at a probe photon energy of 1.85 eV in polycrystalline pentacene TFTs. Spectroscopic analyses based on a series of μ-GMI at various photon energies reveal that two distinct effects appear simultaneously within the polycrystalline pentacene channel layers: Negative GM signals at 1.85 eV originate from the second-derivative-like GM spectrum, which is caused by the effect of charge accumulation, whereas positive GM signals originate from the first-derivative-like GM spectrum, caused by the effect of leaked gate fields. Comparisons with polycrystalline morphologies indicate that grain centers are dominated by areas with high leaked gate fields due to the low charge density, whereas grain edges are predominantly high-charge-density areas with a certain spatial extension, associated with the concentrated carrier traps. Consequently, it is reasonably understood that larger grains lead to higher device mobility, but with greater inhomogeneity in charge distribution. These findings provide a clue to understanding and improving the device characteristics of polycrystalline TFTs.
The geospatial modeling interface (GMI) framework for deploying and assessing environmental models
USDA-ARS?s Scientific Manuscript database
Geographical information systems (GIS) software packages have been used for close to three decades as analytical tools in environmental management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of ful...
USDA-ARS?s Scientific Manuscript database
Geographical information systems (GIS) software packages have been used for nearly three decades as analytical tools in natural resource management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of fu...
40 CFR 86.1868-12 - CO2 credits for improving the efficiency of air conditioning systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Creditvalue (g/mi) Reduced reheat, with externally-controlled, variable-displacement compressor (e.g. a compressor that controls displacement based on temperature setpoint and/or cooling demand of the air...-controlled, fixed-displacement or pneumatic variable displacement compressor (e.g. a compressor that controls...
40 CFR 86.1868-12 - CO2 credits for improving the efficiency of air conditioning systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., engine displacement, transmission class and configuration, interior volume, climate control system type... Creditvalue (g/mi) Reduced reheat, with externally-controlled, variable-displacement compressor (e.g. a compressor that controls displacement based on temperature setpoint and/or cooling demand of the air...
Update on GOCART Model Development and Applications
NASA Technical Reports Server (NTRS)
Kim, Dongchul
2013-01-01
Recent results from the GOCART and GMI models are reported. They include: updated emission inventories for anthropogenic and volcano sources, a satellite-derived vegetation index for seasonal variations of dust emission, MODIS-derived smoke AOT for assessing uncertainties of biomass-burning emissions, long-range transport of aerosol across the Pacific Ocean, and model studies on the multi-decadal trend of regional and global aerosol distributions from 1980 to 2010, volcanic aerosols, and nitrate aerosols. The document was presented at the 2013 AeroCenter Annual Meeting held at the GSFC Visitors Center, May 31, 2013. The organizers of the meeting are posting the talks to the public AeroCenter website after the meeting.
Assimilation of GPM GMI Rainfall Product with WRF GSI
NASA Technical Reports Server (NTRS)
Li, Xuanli; Mecikalski, John; Zavodsky, Bradley
2015-01-01
The Global Precipitation Measurement (GPM) mission is an international mission to provide next-generation observations of rain and snow worldwide. GPM builds on the Tropical Rainfall Measuring Mission (TRMM) legacy, while its core observatory extends the observations to higher latitudes. GPM observations can help advance our understanding of precipitation microphysics and storm structures. Launched on February 27, 2014, the GPM core observatory carries advanced instruments that can be used to quantify when, where, and how much it rains or snows around the world. The use of GPM data in numerical modeling is therefore a new area and will have a broad impact on both the research and operational communities. The goal of this research is to examine the methodology for assimilating GPM retrieved products. The data assimilation system used in this study is the community Gridpoint Statistical Interpolation (GSI) system for the Weather Research and Forecasting (WRF) model, developed by the Developmental Testbed Center (DTC). The community GSI system runs in an independent environment, yet is functionally equivalent to the systems run at operational centers. In collaboration with the NASA Short-term Prediction Research and Transition (SPoRT) Center, this research explores regional assimilation of GPM products through case studies. Our presentation will highlight our recent effort on the assimilation of the GPM product 2AGPROFGMI, the GPM Microwave Imager (GMI) retrieved rainfall-rate data, to initialize a real convective storm. WRF model simulations and storm-scale data assimilation experiments will be examined, emphasizing both model initialization and short-term forecasts of precipitation fields and processes. In addition, discussion will be provided on the development of enhanced assimilation procedures in the GSI system for other GPM products.
Further details of the data assimilation methodology, preliminary results, and tests of the impact of GPM data on precipitation forecasts will be presented at the conference.
Investigations on structural and giant magneto impedance properties of Zn3(VO4)2 nanorods
NASA Astrophysics Data System (ADS)
Malaidurai, M.; Bulusu, Venkat; De, Sourodeep; Thangavel, R.
2018-05-01
In this paper, we successfully synthesized novel Zn3(VO4)2 nanorods by a hydrothermal method. Structural and phase transformations in the mixed-phase Zn3(VO4)2 crystal lattice were monitored at different ionic strengths by X-ray diffraction (XRD). Zn3(VO4)2 thin-film formation was validated through qualitative and quantitative FESEM analysis, which clearly depicts nanorods with lengths of ~100 nm and widths of ~30 nm. The anions of the Zn precursor directly influence the composition and shape of the resultant hydrated Zn3(VO4)2. Impedance was studied through impedance-frequency characterization, followed by dielectric measurements. The GMI effect at low frequencies was analyzed with the help of a model equivalent circuit containing a constant phase element (CPE). The GMI effect and sensitivity were calculated for the sample by applying a magnetic field and driving frequency, in order to analyze the giant magnetoimpedance resistance of grain boundaries for spintronics applications.
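The GMI ratio itself is not defined in the abstract; the conventional definition in the magnetoimpedance literature is GMI(%) = 100 × [Z(H) − Z(Hmax)] / Z(Hmax), where Z(Hmax) is the impedance at the maximum (saturating) applied field. A minimal sketch under that assumption, with invented impedance values:

```python
import numpy as np

def gmi_ratio(z_h, z_hmax):
    """Giant magnetoimpedance ratio in percent (conventional definition):
    GMI(%) = 100 * (Z(H) - Z(Hmax)) / Z(Hmax)."""
    return 100.0 * (np.asarray(z_h) - z_hmax) / z_hmax

# Hypothetical impedance magnitudes (ohms) as the applied field increases:
z = np.array([125.0, 118.0, 104.0, 96.0, 90.0])
z_sat = z[-1]  # impedance at the maximum (saturating) field
print(gmi_ratio(z, z_sat))  # largest ratio at zero/low field, 0 at saturation
```

The ratio is largest at low field and falls to zero at saturation by construction, which is the behavior typically plotted in GMI studies.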
The Ozone Budget in the Upper Troposphere from Global Modeling Initiative (GMI) Simulations
NASA Technical Reports Server (NTRS)
Rodriquez, J.; Duncan, Bryan N.; Logan, Jennifer A.
2006-01-01
Ozone concentrations in the upper troposphere are influenced by in-situ production, long-range tropospheric transport, and the influx of stratospheric ozone, as well as by photochemical removal. Since ozone is an important greenhouse gas in this region, it is particularly important to understand how it will respond to changes in anthropogenic emissions and changes in stratospheric ozone fluxes. This response will be determined by the relative balance of the different production, loss, and transport processes. Ozone concentrations calculated by models will differ depending on the adopted meteorological fields, chemical scheme, anthropogenic emissions, and treatment of the stratospheric influx. We performed simulations using the chemical-transport model from the Global Modeling Initiative (GMI) with meteorological fields from (1) the NASA Goddard Institute for Space Studies (GISS) general circulation model (GCM), (2) the atmospheric GCM from NASA's Global Modeling and Assimilation Office (GMAO), and (3) assimilated winds from GMAO. These simulations adopt the same chemical mechanism and emissions, and use the Synthetic Ozone (SYNOZ) approach for treating the influx of stratospheric ozone. In addition, we also performed simulations with a coupled troposphere-stratosphere model using a subset of the same winds. Simulations were done at both 4° x 5° and 2° x 2.5° resolution. Model results are being tested through comparison with a suite of atmospheric observations. In this presentation, we diagnose the ozone budget in the upper troposphere using the suite of GMI simulations, to address the sensitivity of this budget to: (a) the different meteorological fields used; (b) the adoption of the SYNOZ boundary condition versus inclusion of a full stratosphere; and (c) model horizontal resolution.
Model results are compared to observations to determine biases in particular simulations; by examining these comparisons in conjunction with the derived budgets, we may pinpoint deficiencies in the representation of chemical/dynamical processes.
NASA Astrophysics Data System (ADS)
Logan, J. A.; Megretskaia, I.; Liu, J.; Rodriguez, J. M.; Strahan, S. E.; Damon, M.; Steenrod, S. D.
2012-12-01
Simulations of atmospheric composition in the recent past (hindcasts) are a valuable tool for determining the causes of interannual variability (IAV) and trends in tropospheric ozone, including factors such as anthropogenic emissions, biomass burning, stratospheric input, and variability in meteorology. We will review the ozone data sets (balloon, satellite, and surface) that are the most reliable for evaluating hindcasts, and demonstrate their application with the GMI model. The GMI model is driven by the GEOS-5/MERRA reanalysis and includes both stratospheric and tropospheric chemistry. Preliminary analysis of a simulation for 1990-2010 using constant fossil fuel emissions is promising. The model reproduces the recent interannual variability (IAV) in ozone in the lowermost stratosphere seen in MLS and sonde data, as well as the IAV seen in sonde data in the lower stratosphere since 1995, and captures much of the IAV and short-term trends in surface ozone at remote sites, showing the influence of variability in dynamics. There was considerable IAV in ozone in the lowermost stratosphere in the Aura period, but almost none at European alpine sites in winter/spring, when ozone at 150 hPa has been shown to be correlated with that at 700 hPa in earlier years. The model matches the IAV in alpine ozone in Europe in July-September, including the high values in heat waves, showing the role of variability in meteorology. A focus on IAV in each season is essential. The model matches the IAV in MLS data in the upper troposphere, TES tropical ozone, and the tropospheric ozone column (OMI/MLS) best in tropical regions controlled by ENSO-related changes in dynamics. This study, combined with sensitivity simulations with changes to emissions, and simulations with passive tracers (see abstract by Rodriguez et al., Session A76), lays the foundations for assessment of the mechanisms that have influenced tropospheric ozone in the past two decades.
GPM observations of a tropical-like hailstorm over the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Cinzia Marra, Anna; Panegrossi, Giulia; Casella, Daniele; Sanò, Paolo; Dietrich, Stefano; Baldini, Luca; Petracca, Marco; Porcù, Federico
2016-04-01
In recent years, tropical-like precipitation systems (systems with large horizontal extent, tropical-cyclone features such as Medicanes, or very deep and intense convection) have become increasingly frequent at mid-latitudes. On September 5, 2015, a violent hailstorm hit the Gulf of Naples and the city of Naples in Italy. The storm was caused by a southward plunge of the jet stream that carved into Western Europe, sending an upper-level disturbance into the Italian peninsula. That instability, associated with high Sea Surface Temperature (SST) and low-level convergence, stirred up an impressive severe thunderstorm with intense lightning activity and strong winds. It started developing around 0600 UTC over the Tyrrhenian Sea off the coast of Naples, reached maturity by 0637 UTC, hit the coast around 0900 UTC, and moved inland afterwards until its complete dissipation around 1200 UTC. The storm dropped 5-8 cm diameter hailstones along its path over the sea and in Pozzuoli, near Naples. Meteosat Second Generation (MSG) SEVIRI VIS/IR images show the extremely rapid development of the thunderstorm, with cloud-top temperatures (at 10.8 μm) dropping from 270 K at 0557 UTC to the extremely low value of 205 K at 0637 UTC (65 K in 40 minutes). The occurrence of a very well defined convective overshooting top is evidenced by the VIS images. The sounding at the Pratica di Mare station (180 km NW of Naples) at 0000 UTC shows the tropopause height at about 13.5 km and the typical "loaded gun" profile providing a strong capping inversion inhibiting the premature release of the convective instability: moist air in the boundary layer, due to the low-level southerly flow, with warm and dry air aloft. The LINET ground-based lightning detection network registered over 37,000 strokes between 0500 and 1200 UTC. During its mature phase, at 0845 UTC, the hailstorm was captured by one overpass of the Global Precipitation Measurement (GPM) satellite, launched in February 2014.
The GPM Core Observatory (GPM-CO), equipped with the GPM Microwave Imager (GMI), the most advanced multichannel conical-scanning microwave radiometer available, and with the Ka/Ku-band Dual-frequency Precipitation Radar (DPR), provides unique measurements of the extremely rare, tropical-like features of the storm. Close-in-time observations of the hailstorm are also available from the AMSU/MHS radiometers (MetOp-A overpass at 0834 UTC and MetOp-B overpass at 0929 UTC). DPR shows a vertical extension of more than 16 km a.s.l., with tropical-like reflectivity values (40 dBZ top height at 14 km and 20 dBZ top height at 16 km, a sign of strong updrafts supporting large ice hydrometeors), confirming the presence of a deep overshooting top above the 13.5 km tropopause. GMI observations show strong brightness temperature (TB) depressions, with the 37 GHz, 89 GHz, and 166 GHz TBs as low as 97 K, 67 K, and 87 K, respectively, similar in both V and H channels (a sign of round-shaped ice hydrometeors). Such low TB values are extremely unusual at mid-latitudes, and can be measured only thanks to the high-resolution capability of GMI. The analysis of the TB differences in the three AMSU/MHS 183 GHz water vapor channels, usually applied to tropical convective clouds, confirms the presence of convective overshooting. Around the time of the GMI (and AMSU/MHS) overpass (between 08:30 and 09:00 UTC), LINET registered about 5000 lightning strokes (3500 intracloud), another indication of the severity of the storm. In this study GPM observations will be thoroughly analyzed and discussed, along with other spaceborne and ground-based measurements, providing observational evidence of the severity and rarity of this type of storm at mid-latitudes.
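A standard diagnostic built from such paired V/H brightness temperatures (not explicitly used in the abstract, so this is illustrative) is the polarization-corrected temperature of Spencer et al. (1989), which removes the surface polarization signal so that very cold values isolate ice scattering. A sketch with the β = 0.45 coefficient commonly used near 85-89 GHz:

```python
def pct(tb_v, tb_h, beta=0.45):
    """Polarization-corrected temperature (Spencer et al., 1989):
    PCT = (TB_V - beta * TB_H) / (1 - beta).
    beta = 0.45 is the coefficient commonly used near 85-89 GHz."""
    return (tb_v - beta * tb_h) / (1.0 - beta)

# When V and H are similar (round ice hydrometeors), PCT ~ the TB itself;
# 67 K echoes the 89-GHz depression reported for the Naples hailstorm.
print(round(pct(67.0, 67.0), 6))   # 67.0
print(round(pct(200.0, 180.0), 2))
```

For equal V and H inputs the formula collapses to the input TB, so the deep 89-GHz depression maps directly to an extremely cold PCT, consistent with severe ice scattering.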
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overheads, for both 16-APSK and 64-APSK. Field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in the hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. Emulation also shows that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity, respectively.
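The capacity analysis referenced above can be illustrated with a generic Monte-Carlo estimate of the constellation-constrained mutual information over AWGN. This is a sketch, not the authors' FPGA implementation, and it computes symbol-wise mutual information; the GMI proper additionally accounts for bit-metric decoding under a specific labeling, so it is a lower bound on what is shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_mi(const, snr_db, n=100_000):
    """Monte-Carlo estimate of the mutual information (bits/symbol) of an
    equiprobable complex constellation over AWGN:
    I = log2(M) - E[ log2( sum_j exp(-(|y - x_j|^2 - |y - x_i|^2) / N0) ) ]."""
    const = const / np.sqrt(np.mean(np.abs(const) ** 2))  # normalize to Es = 1
    m = const.size
    n0 = 10 ** (-snr_db / 10)                             # noise power for Es = 1
    tx = rng.integers(0, m, n)                            # transmitted symbol indices
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = const[tx] + noise
    d = np.abs(y[:, None] - const[None, :]) ** 2          # squared distance to every symbol
    metric = np.exp(-(d - d[np.arange(n), tx][:, None]) / n0)
    return float(np.log2(m) - np.mean(np.log2(metric.sum(axis=1))))

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
print(round(awgn_mi(qpsk, snr_db=10.0), 2))  # approaches 2 bits/symbol at high SNR
```

The same routine applied to a 16- or 64-point APSK layout versus square QAM reproduces the kind of shaping-gain comparison the abstract describes.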
In vitro and in vivo antifungal activity of Cassia surattensis flower against Aspergillus niger.
Sumathy, Vello; Zakaria, Zuraini; Jothy, Subramanion L; Gothai, Sivapragasam; Vijayarathna, Soundararajan; Yoga Latha, Lachimanan; Chen, Yeng; Sasidharan, Sreenivasan
2014-12-01
Invasive aspergillosis (IA) in immunocompromised hosts is a major infectious disease that reduces survival rates worldwide. Aspergillus niger is one causative agent of IA. The Cassia surattensis plant is commonly used in rural areas to treat various types of disease. In this study, C. surattensis flower extract was evaluated in a systemic aspergillosis model. Qualitative measurement of fungal burden suggested a reduction pattern in the colony-forming units (CFU) of lung, liver, spleen, and kidney in the extract-treated group. Galactomannan assay assessment showed a decrease of fungal load in the treatment and positive control groups, with galactomannan index (GMI) values of 1.27 and 0.25 on day 28, whereas the negative control group showed a high level of galactomannan in the serum, with a GMI value of 3.58. Histopathology examinations featured major architectural modifications in the tissues of the negative control group. Tissue reparation and recovery from infection were detected in the extract-treated and positive control groups. A time-kill study of A. niger revealed concentration-dependent fungicidal activity of the C. surattensis flower extract. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gong, Jie; Zeng, Xiping; Wu, Dong L.; Li, Xiaowen
2018-01-01
The diurnal variation of tropical ice clouds has been well observed and examined in terms of occurrence frequency and total mass, but rarely from the viewpoint of ice microphysical parameters. This gap accounts for a large portion of the uncertainty in evaluating ice clouds' role in the global radiation and hydrological budgets. Owing to the advantages of the precessing orbit design and paired polarized observations at a high-frequency microwave band that is particularly sensitive to ice particle microphysical properties, 3 years of polarimetric difference (PD) measurements from the 166 GHz channel of the Global Precipitation Measurement Microwave Imager (GPM-GMI) are compiled to reveal a strong diurnal cycle over tropical land (30°S-30°N) with peak amplitude varying up to 38%. Since the PD signal is dominantly determined by ice crystal size, shape, and orientation, the diurnal cycle observed by GMI can be used to infer changes in ice crystal properties. Moreover, the PD change is found to lead the diurnal changes of ice cloud occurrence frequency and total ice mass by about 2 h, which strongly implies that understanding ice microphysics is critical to predicting, inferring, and modeling ice cloud evolution and precipitation processes.
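The PD here is TB_V − TB_H at 166 GHz, and a diurnal-cycle amplitude like the one quoted can be estimated by binning PD by local solar time. A minimal synthetic sketch (the data below are invented, and this peak-to-peak-over-mean amplitude metric is one plausible choice; the paper's exact definition may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def diurnal_amplitude(local_hour, pd, n_bins=24):
    """Bin polarimetric difference PD = TB_V - TB_H by local solar hour and
    return the binned means plus a peak-to-peak amplitude relative to the mean (%)."""
    bins = np.clip((local_hour % 24).astype(int), 0, n_bins - 1)
    means = np.array([pd[bins == b].mean() for b in range(n_bins)])
    amplitude = 100.0 * (means.max() - means.min()) / means.mean()
    return means, amplitude

# Synthetic PD (K) with an afternoon peak, standing in for 166-GHz GMI samples:
hours = rng.uniform(0, 24, 50_000)
pd = 5.0 + 1.0 * np.cos(2 * np.pi * (hours - 15) / 24) + rng.normal(0, 0.3, hours.size)
means, amp = diurnal_amplitude(hours, pd)
print(round(amp, 1))  # amplitude (%) of the binned diurnal cycle
```

With real GMI Level-1 files, the `hours` and `pd` arrays would come from the pixel local time and the 166-GHz V/H brightness temperatures.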
NASA Astrophysics Data System (ADS)
Herawati, T.; Yustiati, A.; Diliana, S. Y.; Adhardiansyah
2018-03-01
The research was conducted to determine the growth and reproductive pattern of seren fish in Jatigede Reservoir, Sumedang, West Java, during the period November to December 2016. The research used a survey method with quantitative descriptive data analysis. The seren fish samples used in the study were 30 individuals collected in November 2016 and 41 individuals collected in December 2016. The parameters observed were sex, gonad maturity level and index, and fecundity. The results show that the average length of seren fish was 147 mm and the largest was 273 mm. The fish growth pattern was negatively allometric. The condition factor ranged from 0.973 to 1.105. The nutritional condition of seren fish was balanced between male and female fish. The gonad maturity indices of male and female seren fish were relatively similar: the GMI of male fish ranged from 0.285 to 11.055%, while the GMI of female fish ranged from 1.23 to 11.76%. Seren fish of 225 mm had an average fecundity of 10,032 grains, while fish of 260 mm averaged 23,471 grains, indicating a relationship between length and fecundity.
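The indices reported here follow standard fisheries conventions, assumed since the abstract does not state its formulas: the gonad maturity index (gonadosomatic index) is gonad weight as a percentage of body weight, and Fulton's condition factor is K = 100·W/L³ (W in g, L in cm). A sketch with invented weights (the abstract reports only lengths and index ranges):

```python
def gonad_maturity_index(gonad_weight_g, body_weight_g):
    """Gonad maturity (gonadosomatic) index: gonad weight as % of body weight."""
    return 100.0 * gonad_weight_g / body_weight_g

def condition_factor(weight_g, length_cm):
    """Fulton's condition factor K = 100 * W / L^3 (W in g, L in cm)."""
    return 100.0 * weight_g / length_cm ** 3

# Hypothetical seren fish: 147 mm (14.7 cm), 32 g body weight, 1.2 g gonads
print(round(gonad_maturity_index(1.2, 32.0), 2))   # 3.75
print(round(condition_factor(32.0, 14.7), 3))      # 1.007, within the reported 0.973-1.105 range
```

K near 1 is consistent with the reported condition-factor range; values of b < 3 in the length-weight relation W = aL^b correspond to the negative allometry the abstract describes.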
Next-Generation Aura/OMI NO2 and SO2 Products
NASA Technical Reports Server (NTRS)
Krotkov, Nickolay; Yang, Kai; Bucsela, Eric; Lamsal, Lok; Celarier, Edward; Swartz, William; Carn, Simon; Bhartia, Pawan; Gleason, James; Pickering, Ken;
2011-01-01
The measurement of both SO2 and NO2 gases is recognized as an essential component of atmospheric composition missions. We describe the current capabilities and limitations of the operational Aura/OMI NO2 and SO2 data that have been used by a large number of researchers. Analyses of the data and validation studies have brought to light a number of areas in which these products can be expanded and improved. Major improvements for the new NASA standard product (SP) NO2 include more accurate tropospheric and stratospheric column amounts, along with much improved error estimates and diagnostics. Our approach uses a monthly NO2 climatology based on the NASA Global Modeling Initiative (GMI) chemistry-transport model and takes advantage of OMI data from cloudy scenes to find clean areas where the contribution from the tropospheric NO2 column is relatively small. We then use new filtering, interpolation, and smoothing techniques for separating the stratospheric and tropospheric components of NO2, minimizing the influence of a priori information. The new algorithm greatly improves the structure of stratospheric features relative to the original SP. For the next-generation OMI SO2 product we plan to implement operationally the offline iterative spectral fitting (ISF) algorithm and re-process the OMI Level-2 SO2 dataset using a priori SO2 and aerosol profiles, clouds, and surface reflectivity appropriate for the observation conditions. This will improve the ability to detect and quantify weak tropospheric SO2 loadings. The new algorithm is validated using aircraft in-situ data from field campaigns in China (2005 and 2008) and in Maryland (Frostburg, 2010 and DISCOVER-AQ in July 2011). The height of the SO2 plumes will also be estimated for high SO2 loading cases (e.g., volcanic eruptions). The same SO2 algorithm will be applied to the data from the OMPS sensor to be launched on the NPP satellite later this year.
The next-generation NO2 and SO2 products will provide critical information (e.g., averaging kernels) for evaluation of chemistry-transport models, for data assimilation, and to impose top-down constraints on the SO2 and NO2 emission sources.
NASA Astrophysics Data System (ADS)
Guilloteau, C.; Foufoula-Georgiou, E.; Kummerow, C.; Kirstetter, P. E.
2017-12-01
A multiscale approach is used to compare precipitation fields retrieved from GMI using the latest version of the GPROF algorithm (GPROF-2017) to the DPR fields all over the globe. Using a wavelet-based spectral analysis, which renders the multiscale decompositions of the original fields independent of each other spatially and across scales, we quantitatively assess the various scales of variability of the retrieved fields, and thus define the spatially variable "effective resolution" (ER) of the retrievals. Globally, a strong agreement is found between passive microwave and radar patterns at scales coarser than 80 km. Over oceans the patterns match down to the 20 km scale. Over land, comparison statistics are spatially heterogeneous; in most areas a strong discrepancy is observed between passive microwave and radar patterns at scales finer than 40-80 km. The comparison is also supported by ground-based observations over the continental US derived from the NOAA/NSSL MRMS suite of products. While larger discrepancies over land than over oceans are classically explained by the complex surface emissivity of land perturbing the passive microwave retrieval, other factors are investigated here, such as intricate differences in storm structure over oceans and land. Differences in the statistical properties (PDF of intensities and spatial organization) of precipitation fields over land and oceans are assessed from radar data, as well as differences in the relation between the 89 GHz brightness temperature and precipitation. Moreover, the multiscale approach allows quantifying the part of the discrepancies caused by mismatches in the location of intense cells and by instrument-related geometric effects. The objective is to diagnose shortcomings of current retrieval algorithms so that targeted improvements can be made to achieve over land the same retrieval performance as over oceans.
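The scale-by-scale comparison can be mimicked with a toy dyadic (Haar-like) decomposition on synthetic fields. This is a sketch of the idea, not the paper's wavelet method: an "effective resolution" emerges as the finest scale at which the detail components of the two fields still correlate.

```python
import numpy as np

def coarsen(f):
    """One dyadic coarsening step: average each 2x2 block."""
    return 0.25 * (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2])

def scale_correlations(a, b, n_scales=3):
    """Correlation between the Haar-like detail components (fine field minus
    upsampled coarse field) of two fields at successive dyadic scales."""
    corrs = []
    for _ in range(n_scales):
        ca, cb = coarsen(a), coarsen(b)
        da = a - np.kron(ca, np.ones((2, 2)))   # detail of field a at this scale
        db = b - np.kron(cb, np.ones((2, 2)))
        corrs.append(float(np.corrcoef(da.ravel(), db.ravel())[0, 1]))
        a, b = ca, cb
    return corrs

rng = np.random.default_rng(0)
truth = rng.standard_normal((64, 64))           # stand-in for the "radar" field
# "Retrieval" only resolves 2-pixel blocks, plus small independent noise:
retrieval = np.kron(coarsen(truth), np.ones((2, 2))) + 0.1 * rng.standard_normal((64, 64))
corrs = scale_correlations(truth, retrieval)
print([round(c, 2) for c in corrs])  # near 0 at the finest scale, near 1 at coarser scales
```

Because the synthetic retrieval carries no information at the single-pixel scale, the finest-scale detail correlation collapses while coarser scales agree, which is exactly the ER signature the abstract describes for GPROF versus DPR.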
Online devices and measuring systems for the automatic control of newspaper printing
NASA Astrophysics Data System (ADS)
Marszalec, Elzbieta A.; Heikkila, Ismo; Juhola, Helene; Lehtonen, Tapio
1999-09-01
The paper reviews state-of-the-art color measuring systems used for the control of newspaper printing. The printing process requirements are specified, and different off-line and on-line color quality control systems, commercially available and under development, are evaluated. Recent market trends in newspaper printing are discussed based on the survey. The study drew on information from conference proceedings (TAGA, IARIGAI, SPIE and IS&T), journals (American Printer, Applied Optics), discussions with experts (GMI, QTI, HONEYWELL, TOBIAS, GretagMacbeth), IFRA Expo'98/Quality Measuring Technologies, commercial brochures, and the Internet. Against the background of this review, three different measuring principles currently under investigation at VTT Information Technology are described and their applicability to newspaper printing is evaluated.
Assessing Irregular Warfare: A Framework for Intelligence Analysis
2008-01-01
AND HEALTH CARE INTERNATIONAL AFFAIRS NATIONAL SECURITY POPULATION AND AGING PUBLIC SAFETY SCIENCE AND TECHNOLOGY SUBSTANCE ABUSE TERRORISM AND...Interim FSTC Foreign Science and Technology Center GMI general military intelligence IED improvised explosive device INSCOM Intelligence and Security...ground forces intelligence in the Department of Defense (DoD).1 NGIC was created in March 1995, when the U.S. Army Foreign Science and Technology
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Fleet average non-methane organic gas....1710-99 Fleet average non-methane organic gas exhaust emission standards for light-duty vehicles and... follows: Table R99-15—Fleet Average Non-Methane Organic Gas Standards (g/mi) for Light-Duty Vehicles and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Fleet average non-methane organic gas....1710-99 Fleet average non-methane organic gas exhaust emission standards for light-duty vehicles and... follows: Table R99-15—Fleet Average Non-Methane Organic Gas Standards (g/mi) for Light-Duty Vehicles and...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Fleet average non-methane organic gas....1710-99 Fleet average non-methane organic gas exhaust emission standards for light-duty vehicles and... follows: Table R99-15—Fleet Average Non-Methane Organic Gas Standards (g/mi) for Light-Duty Vehicles and...
40 CFR 86.000-9 - Emission standards for 2000 and later model year light-duty trucks.
Code of Federal Regulations, 2013 CFR
2013-07-01
....000-9 Emission standards for 2000 and later model year light-duty trucks. Section 86.000-9 includes...) and CO Model year Percentage 2002 40 2003 80 2004 100 Table A00-6—Useful Life Standards (G/MI) for... applicable model year's heavy light-duty trucks shall not exceed the applicable SFTP standards in table A00-6...
40 CFR 86.000-9 - Emission standards for 2000 and later model year light-duty trucks.
Code of Federal Regulations, 2012 CFR
2012-07-01
....000-9 Emission standards for 2000 and later model year light-duty trucks. Section 86.000-9 includes...) and CO Model year Percentage 2002 40 2003 80 2004 100 Table A00-6—Useful Life Standards (G/MI) for... applicable model year's heavy light-duty trucks shall not exceed the applicable SFTP standards in table A00-6...
40 CFR 86.000-8 - Emission standards for 2000 and later model year light-duty vehicles.
Code of Federal Regulations, 2012 CFR
2012-07-01
....000-8 Emission standards for 2000 and later model year light-duty vehicles. Section 86.000-8 includes... later model year light-duty vehicles shall meet the additional SFTP standards of table A00-2 (defined by...=NOX) and CO Model year Percentage 2000 40 2001 80 2002 100 Table A00-2—Useful Life Standards (G/MI...
40 CFR 86.000-8 - Emission standards for 2000 and later model year light-duty vehicles.
Code of Federal Regulations, 2013 CFR
2013-07-01
....000-8 Emission standards for 2000 and later model year light-duty vehicles. Section 86.000-8 includes... later model year light-duty vehicles shall meet the additional SFTP standards of table A00-2 (defined by...=NOX) and CO Model year Percentage 2000 40 2001 80 2002 100 Table A00-2—Useful Life Standards (G/MI...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Fleet average non-methane organic gas....1710-99 Fleet average non-methane organic gas exhaust emission standards for light-duty vehicles and... follows: Table R99-15—Fleet Average Non-Methane Organic Gas Standards (g/mi) for Light-Duty Vehicles and...
2009-01-01
Background The infection and virulence functions of diverse plant and animal pathogens that possess quorum sensing systems are regulated by N-acylhomoserine lactones (AHLs) acting as signal molecules. AHL-acylase is a quorum-quenching enzyme that degrades AHLs by removing the fatty acid side chain from the homoserine lactone ring of AHLs. This blocks AHL accumulation and pathogenic phenotypes in quorum sensing bacteria. Results An aac gene of undemonstrated function from Ralstonia solanacearum GMI1000 was cloned and expressed in Escherichia coli; it inactivated the four AHLs that were tested. The sequence of the 795-amino-acid polypeptide was highly similar to the AHL-acylase from Ralstonia sp. XJ12B, with 83% identity, and shared 39% identity with an aculeacin A acylase precursor from the gram-positive actinomycete Actinoplanes utahensis. Aculeacin A is a neutral lipopeptide antibiotic and an antifungal drug. Electrospray ionisation mass spectrometry (ESI-MS) analysis verified that Aac hydrolysed the amide bond of AHLs, releasing homoserine lactone and the corresponding fatty acids. However, ESI-MS analysis demonstrated that Aac could not catalyze hydrolysis of the palmitoyl moiety of aculeacin A. Moreover, the results of the MIC test of aculeacin A suggest that Aac could not deacylate aculeacin A. The specificity of Aac for AHLs showed a greater preference for long acyl chains than for short acyl chains. Heterologous expression of the aac gene in Chromobacterium violaceum CV026 effectively inhibited violacein production and chitinase activity, both of which are regulated by the quorum-sensing mechanism. These results indicated that Aac could control AHL-dependent pathogenicity. Conclusion This is the first study to find an AHL-acylase in a phytopathogen. Our data provide direct evidence that the aac gene (NP520668) of R. solanacearum GMI1000 functions as an AHL-acylase and not as an aculeacin A acylase.
Since Aac is a therapeutic potential quorum-quenching agent, its further biotechnological applications in agriculture, clinical and bio-industrial fields should be evaluated in the near future. PMID:19426552
Activities of NASA's Global Modeling Initiative (GMI) in the Assessment of Subsonic Aircraft Impact
NASA Technical Reports Server (NTRS)
Rodriquez, J. M.; Logan, J. A.; Rotman, D. A.; Bergmann, D. J.; Baughcum, S. L.; Friedl, R. R.; Anderson, D. E.
2004-01-01
The Intergovernmental Panel on Climate Change estimated a peak increase in ozone ranging from 7-12 ppbv (zonal and annual average, relative to a baseline with no aircraft) due to subsonic aircraft in the year 2015, corresponding to aircraft emissions of 1.3 TgN/year. This range of values presumably reflects differences in model input (e.g., chemical mechanism, ground emission fluxes, and meteorological fields) and algorithms. The model implemented by the Global Modeling Initiative allows testing the impact of individual model components on the assessment calculations. We present results of the impact of doubling the 1995 aircraft emissions of NOx, corresponding to an extra 0.56 TgN/year, utilizing meteorological data from NASA's Data Assimilation Office (DAO), the Goddard Institute for Space Studies (GISS), and the Middle Atmosphere Community Climate Model, version 3 (MACCM3). Comparison of results to observations can be used to assess the model performance. Peak ozone perturbations ranging from 1.7 to 2.2 ppbv are calculated using the different fields. These correspond to increases in total tropospheric ozone ranging from 3.3 to 4.1 Tg of O3. These perturbations are consistent with the IPCC results once the difference in aircraft emissions is accounted for. However, the range of values calculated is much smaller than in the IPCC assessment.
Can Real-Time Data Also Be Climate Quality?
NASA Astrophysics Data System (ADS)
Brewer, M.; Wentz, F. J.
2015-12-01
GMI, AMSR-2 and WindSat herald a new era of highly accurate and timely microwave data products. Traditionally, there has been a large divide between real-time and re-analysis data products. What if these completely separate processing systems could be merged? Through advanced modeling and physically based algorithms, Remote Sensing Systems (RSS) has narrowed the gap between real-time and research-quality data. Satellite microwave ocean products have proven useful for a wide array of timely Earth science applications. Through-cloud SST capabilities have enormously benefited tropical cyclone forecasting and day-to-day fisheries management, to name a few. Oceanic wind vectors enhance the operational safety of shipping and recreational boating. Atmospheric rivers are of importance to many human endeavors, as are cloud cover and knowledge of precipitation events. Some activities benefit from both climate and real-time operational data used in conjunction. RSS has been consistently improving microwave Earth Science Data Records (ESDRs) for several decades, while making near real-time data publicly available for semi-operational use. These data streams have often been produced in two stages: near real-time files, followed by research-quality final files. Over the years, we have seen this time delay shrink from months or weeks to mere hours. As well, we have seen the quality of near real-time data improve to the point where the distinction starts to blur. We continue to work towards better and faster RFI filtering, adaptive algorithms, and improved real-time validation statistics for earlier detection of problems. Can it be possible to produce climate-quality data in real time, and what would the advantages be? We will try to answer these questions…
NASA Astrophysics Data System (ADS)
Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.
2017-12-01
Accurate characterization of uncertainties in space-borne precipitation estimates is critical for many applications, including water budget studies or prediction of natural hazards at the global scale. The GPM precipitation Level II (active and passive) and Level III (IMERG) estimates are compared to the high-quality, high-resolution NEXRAD-based precipitation estimates derived from the NOAA/NSSL Multi-Radar, Multi-Sensor (MRMS) platform. A surface reference is derived from the MRMS suite of products to be accurate with known uncertainty bounds and measured at a resolution below the pixel sizes of any GPM estimate, providing great flexibility in matching to grid scales or footprints. It provides an independent and consistent reference research framework for directly evaluating GPM precipitation products across a large number of meteorological regimes as a function of resolution, accuracy and sample size. The consistency of the ground- and space-based sensors in terms of precipitation detection, typology and quantification is systematically evaluated. Satellite precipitation retrievals are further investigated in terms of precipitation distributions, systematic biases and random errors, influence of precipitation sub-pixel variability and comparison between satellite products. Prognostic analysis directly provides feedback to algorithm developers on how to improve the satellite estimates. Specific factors for passive (e.g. surface conditions for GMI) and active (e.g. non-uniform beam filling for DPR) sensors are investigated. This cross-product characterization acts as a bridge to intercalibrate microwave measurements from the GPM constellation satellites and to propagate error information to the combined and global precipitation estimates. Precipitation features previously used to analyze Level II satellite estimates under various precipitation processes are now introduced for Level III to test several assumptions in the IMERG algorithm.
Specifically, the contribution of Level II is explicitly characterized, and a rigorous analysis is performed to migrate across scales while fully understanding the propagation of errors from Level II to Level III. Perspectives are presented to advance the use of uncertainty as an integral part of QPE for ground-based and space-borne sensors.
Source attribution of interannual variability of tropospheric ozone over the southern hemisphere
NASA Astrophysics Data System (ADS)
Liu, J.; Rodriguez, J. M.; Logan, J. A.; Steenrod, S. D.; Douglass, A. R.; Olsen, M. A.; Wargan, K.; Ziemke, J. R.
2015-12-01
Both model simulations and the GMAO assimilated ozone product derived from OMI/MLS show a high tropospheric ozone column centered over the south Atlantic from the equator to 30S. This ozone maximum extends eastward to South America and the southeast Pacific; it extends southwestward to southern Africa and the south Indian Ocean. In this study, we use hindcast simulations from the GMI model of tropospheric and stratospheric chemistry, driven by assimilated MERRA meteorological fields, to investigate the factors controlling the interannual variations (IAV) of this ozone maximum during the last two decades. We also use various GMI tracer diagnostics, including a stratospheric ozone tracer to tag the impact of stratospheric ozone, and a tagged CO tracer to track the emission sources, to ascertain the contribution of different processes to IAV in ozone at different altitudes, as well as partial columns above different pressure levels. Our initial model analysis suggests that the IAV of the stratospheric contribution plays a major role in the IAV of upper tropospheric ozone and explains a large portion of its variance during winter. Over the south Atlantic region, the IAV of surface emissions from both South America and southern Africa also contributes significantly to the IAV of ozone, especially in the middle and lower troposphere.
Consistency Between Convection Allowing Model Output and Passive Microwave Satellite Observations
NASA Astrophysics Data System (ADS)
Bytheway, J. L.; Kummerow, C. D.
2018-01-01
Observations from the Global Precipitation Measurement (GPM) core satellite were used along with precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model to assess and interpret differences between observed and modeled storms. Using a feature-based approach, precipitating objects were identified in both the National Centers for Environmental Prediction Stage IV multisensor precipitation product and HRRR forecast at lead times of 1, 2, and 3 h at valid times corresponding to GPM overpasses. Precipitating objects were selected for further study if (a) the observed feature occurred entirely within the swath of the GPM Microwave Imager (GMI) and (b) the HRRR model predicted it at all three forecast lead times. Output from the HRRR model was used to simulate microwave brightness temperatures (Tbs), which were compared to those observed by the GMI. Simulated Tbs were found to have biases at both the warm and cold ends of the distribution, corresponding to the stratiform/anvil and convective areas of the storms, respectively. Several experiments altered both the simulation microphysics and hydrometeor classification in order to evaluate potential shortcomings in the model's representation of precipitating clouds. In general, inconsistencies between observed and simulated brightness temperatures were most improved when transferring snow water content to supercooled liquid hydrometeor classes.
Preliminary Structural Design - Defining the Design Space
1993-02-01
NASA Technical Reports Server (NTRS)
Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; Jimenez, Jose L.; Kondo, Yutaka; Sahu, Lokesh K.; Dibb, Jack E.; Wang, Chien
2016-01-01
Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 percent, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. 
Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction throughout the aerosol size distribution. Using a core-shell mixing rule in ASP overestimates aerosol absorption, especially for the fresh biomass burning aerosol measured in ARCTAS-B, suggesting the need for modeling the time-varying mixing states of aerosols in future versions of ASP.
Impacts of Using Distributed Energy Resources to Reduce Peak Loads in Vermont
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruth, Mark F.; Lunacek, Monte S.; Jones, Birk
To help the United States develop a modern electricity grid that provides reliable power from multiple resources as well as resiliency under extreme conditions, the U.S. Department of Energy (DOE) is leading the Grid Modernization Initiative (GMI) to help shape the future of the nation's grid. Under the GMI, DOE funded the Vermont Regional Initiative project to provide the technical support and analysis to utilities that need to mitigate possible impacts of increasing renewable generation required by statewide goals. Advanced control of distributed energy resources (DER) can both support higher penetrations of renewable energy by balancing controllable loads to wind and photovoltaic (PV) solar generation and reduce peak demand by shedding noncritical loads. This work focuses on the latter. This document reports on an experiment that evaluated and quantified the potential benefits and impacts of reducing the peak load through demand response (DR) using centrally controllable electric water heaters (EWHs) and batteries on two Green Mountain Power (GMP) feeders. The experiment simulated various hypothetical scenarios that varied the number of controllable EWHs, the amount of distributed PV systems, and the number of distributed residential batteries. The control schemes were designed with several objectives. For the first objective, the primary simulations focused on reducing the load during the independent system operator (ISO) peak when capacity charges were the primary concern. The second objective was to mitigate DR rebound to avoid new peak loads and high ramp rates. The final objective was to minimize customers' discomfort, which is defined by the lack of hot water when it is needed. 
We performed the simulations using the National Renewable Energy Laboratory's (NREL's) Integrated Energy System Model (IESM) because it can simulate both electric power distribution feeder and appliance end-use performance and it includes the ability to simulate multiple control strategies.
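The three control objectives above (shave the ISO peak, limit the DR rebound, restore deferred energy) can be illustrated with a toy load-shifting sketch. This is a simplified illustration under stated assumptions, not NREL's IESM control logic; the even spreading of restored energy over several hours is a placeholder for rebound and ramp mitigation.

```python
def shave_peak(load, peak_hours, restore_hours):
    """Toy demand-response schedule: shift controllable water-heater
    energy out of the ISO peak window, then restore it spread evenly
    over several later hours so the rebound does not create a new peak."""
    shifted = list(load)
    deferred = 0.0
    for h in peak_hours:            # shed controllable load in the peak window
        deferred += shifted[h]
        shifted[h] = 0.0
    for h in restore_hours:         # restore the same energy, spread out
        shifted[h] += deferred / len(restore_hours)
    return shifted
```

Spreading the restoration is the simplest way to honor the second objective; a real controller would also respect hot-water comfort constraints per household.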
NASA Technical Reports Server (NTRS)
Kim, Min-Jeong; Jin, Jianjun; McCarty, Will; El Akkraoui, Amal; Todling, Ricardo; Gelaro, Ron
2018-01-01
Many numerical weather prediction (NWP) centers assimilate radiances affected by clouds and precipitation from microwave sensors, with the expectation that these data can provide critical constraints on meteorological parameters in dynamically sensitive regions and thus significantly improve forecast accuracy for precipitation. The Global Modeling and Assimilation Office (GMAO) at NASA Goddard Space Flight Center assimilates all-sky radiance data from various microwave sensors, such as the GPM Microwave Imager (GMI), in the Goddard Earth Observing System (GEOS) atmospheric data assimilation system (ADAS), which includes the GEOS atmospheric model, the Gridpoint Statistical Interpolation (GSI) atmospheric analysis system, and the Goddard Aerosol Assimilation System (GAAS). To date, most NWP centers apply the same large data thinning distances used for clear-sky radiances, chosen to avoid correlated observation errors, to all-sky microwave radiance data. For example, NASA GMAO applies a 145 km thinning distance to most satellite radiance data, including the microwave radiances assimilated with the all-sky approach. Even with this coarse data usage, noticeable positive impacts of all-sky microwave data on hurricane track forecasts were identified in the GEOS-5 system. This study is motivated by the dynamic thinning distance method developed in our all-sky framework, which uses denser data in cloudy and precipitating regions, where spatial correlations of observation errors are relatively small. To investigate the benefits of all-sky microwave radiances for hurricane forecasts, several hurricane cases from 2016-2017 are examined. 
The dynamic thinning distance method is utilized in our all-sky approach to understand the sources and mechanisms behind the benefits of all-sky microwave radiance data from sensors such as the Advanced Microwave Sounding Unit (AMSU-A), the Microwave Humidity Sounder (MHS), and GMI for GEOS-5 analyses and forecasts of various hurricanes.
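A minimal sketch of the dynamic thinning idea follows. The 145 km clear-sky distance comes from the text; the linear mapping from cloud amount to thinning distance and the 50 km cloudy-sky bound are illustrative assumptions, not the GMAO configuration.

```python
import numpy as np

def thinning_distance_km(cloud_amount, d_clear=145.0, d_cloudy=50.0):
    """Illustrative dynamic thinning: shrink the thinning distance
    linearly as the (0-1) cloud amount increases, so denser data
    are retained in cloudy and precipitating regions."""
    c = np.clip(cloud_amount, 0.0, 1.0)
    return d_clear - (d_clear - d_cloudy) * c

def thin_obs(lat, lon, cloud_amount):
    """Greedy thinning: keep an observation only if it lies farther than
    its dynamic thinning distance from every observation already kept."""
    kept = []
    for i in range(len(lat)):
        d_min = thinning_distance_km(cloud_amount[i])
        ok = True
        for j in kept:
            # small-angle flat-Earth distance approximation (km)
            dy = 111.0 * (lat[i] - lat[j])
            dx = 111.0 * np.cos(np.radians(lat[i])) * (lon[i] - lon[j])
            if np.hypot(dx, dy) < d_min:
                ok = False
                break
        if ok:
            kept.append(i)
    return kept
```

Two pixels about 111 km apart would be thinned to one in clear sky (145 km required) but both kept under full cloud (50 km required), which is the behavior the dynamic method exploits.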
Wireless Telemetry of In-Flight Collision Avoidance Neural Signals in Insects
2010-09-01
Design, Fabrication, Characterization and Modeling of Integrated Functional Materials
2010-10-19
GPM Mission Overview and U.S. Science Status
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Azarbarzin, Art; Skofronick, Gail; Carlisle, Candace
2012-01-01
The Global Precipitation Measurement (GPM) Mission is an international satellite mission to unify and advance precipitation measurements from a constellation of research and operational sensors to provide "next-generation" precipitation products [1-2]. Water is fundamental to life on Earth. Knowing where and how much rain and snow falls globally is vital to understanding how weather and climate impact both our environment and Earth's water and energy cycles, including effects on agriculture, fresh water availability, and responses to natural disasters. Since rainfall and snowfall vary greatly from place to place and over time, satellites can provide more uniform observations of rain and snow around the globe than ground instruments, especially in areas where surface measurements are difficult. Relative to current global rainfall products, GPM data products will be characterized by: (1) more accurate instantaneous precipitation measurements (especially for light rain and cold-season solid precipitation), (2) more frequent sampling by an expanded constellation of domestic and international microwave radiometers including operational humidity sounders, (3) intercalibrated microwave brightness temperatures from constellation radiometers within a unified framework, and (4) physically based precipitation retrievals from constellation radiometers using a common a priori cloud/hydrometeor database derived from GPM Core sensor measurements. The cornerstone of the GPM mission is the deployment of a Core Observatory in a unique 65° inclination non-Sun-synchronous orbit to serve as a physics observatory and a reference standard to unify precipitation measurements by a constellation of dedicated and operational passive microwave sensors. The design of the GPM Core Observatory is an advancement of the Tropical Rainfall Measuring Mission (TRMM)'s highly successful rain-sensing package. 
The Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a multichannel (10-183 GHz) GPM Microwave Imager (GMI). Since light rain and falling snow account for a significant fraction of precipitation occurrence in middle and high latitudes, the GPM instruments extend the capabilities of the TRMM sensors to detect falling snow, measure light rain, and provide, for the first time, quantitative estimates of microphysical properties of precipitation particles. The combined use of DPR and GMI measurements will place greater constraints on possible solutions to radiometer retrievals to improve the accuracy and consistency of precipitation retrievals from all constellation radiometers. The GMI uses 13 different microwave channels to observe energy from the different types of precipitation through clouds for estimating everything from heavy to light rain and for detecting falling snow. As the satellite passes over Earth, the GMI constantly scans a region 885 kilometers across. The Ball Aerospace and Technology Corporation built the GMI under contract with NASA Goddard Space Flight Center. The DPR provides three-dimensional information about precipitation particles derived from reflected energy by these particles at different heights within the cloud system. The two frequencies of the DPR also allow the radar to infer the sizes of precipitation particles and offer insights into a storm's physical characteristics. The Ka-band frequency scans across a region of 125 kilometers and is nested within the wider scan of the Ku-band frequency of 245 kilometers. The Japan Aerospace Exploration Agency (JAXA) and Japan's National Institute of Information and Communications Technology (NICT) built the DPR. The Core Observatory satellite will fly at an altitude of 253 miles (407 kilometers) in a non-Sun-synchronous orbit that covers the Earth from 65° S to 65° N - from about the Antarctic Circle to the Arctic Circle. 
The GPM Core Observatory is being developed and tested at NASA Goddard Space Flight Center. Once complete, a Japanese H-IIA rocket will carry the GPM Core Observatory into orbit from Tanegashima Island, Japan in 2014. The GPM constellation is envisioned to comprise 8 or more microwave sensors provided by partners, including both conical imagers and cross-track sounders. GPM is currently a partnership between NASA and the Japan Aerospace Exploration Agency (JAXA). Additional partnerships are under development to include microwave radiometers on the French-Indian Megha-Tropiques satellite and U.S. Defense Meteorological Satellite Program (DMSP) satellites, as well as humidity sounders or precipitation sensors on operational satellites such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), NOAA-NASA Joint Polar Satellite System (JPSS) satellites, European MetOp satellites, and DMSP follow-on sensors. In addition, data from Chinese and Russian microwave radiometers may be available through international cooperation under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). GPM's next-generation global precipitation data will lead to scientific advances and societal benefits in the following areas: (1) Improved knowledge of the Earth's water cycle and its link to climate change (2) New insights into precipitation microphysics, storm structures and large-scale atmospheric processes (3) Better understanding of climate sensitivity and feedback processes (4) Extended capabilities in monitoring and predicting hurricanes and other extreme weather events (5) Improved forecasting capabilities for natural hazards, including floods, droughts and landslides (6) Enhanced numerical prediction skills for weather and climate (7) Better agricultural crop forecasting and monitoring of freshwater resources. 
An overview of the GPM mission concept and science activities in the United States, together with an update on international collaborations in radiometer intercalibration and ground validation, will be presented.
The version 3 OMI NO2 standard product
NASA Astrophysics Data System (ADS)
Krotkov, Nickolay A.; Lamsal, Lok N.; Celarier, Edward A.; Swartz, William H.; Marchenko, Sergey V.; Bucsela, Eric J.; Chan, Ka Lok; Wenig, Mark; Zara, Marina
2017-09-01
We describe the new version 3.0 NASA Ozone Monitoring Instrument (OMI) standard nitrogen dioxide (NO2) products (SPv3). The products and documentation are publicly available from the NASA Goddard Earth Sciences Data and Information Services Center (https://disc.gsfc.nasa.gov/datasets/OMNO2_V003/summary/). The major improvements include (1) a new spectral fitting algorithm for NO2 slant column density (SCD) retrieval and (2) higher-resolution (1° latitude and 1.25° longitude) a priori NO2 and temperature profiles from the Global Modeling Initiative (GMI) chemistry-transport model with yearly varying emissions to calculate air mass factors (AMFs) required to convert SCDs into vertical column densities (VCDs). The new SCDs are systematically lower (by ˜ 10-40 %) than previous, version 2, estimates. Most of this reduction in SCDs is propagated into stratospheric VCDs. Tropospheric NO2 VCDs are also reduced over polluted areas, especially over western Europe, the eastern US, and eastern China. Initial evaluation over unpolluted areas shows that the new SPv3 products agree better with independent satellite- and ground-based Fourier transform infrared (FTIR) measurements. However, further evaluation of tropospheric VCDs is needed over polluted areas, where the increased spatial resolution and more refined AMF estimates may lead to better characterization of pollution hot spots.
Signatures of Hydrometeor Species from Airborne Passive Microwave Data for Frequencies 10-183 GHz
NASA Technical Reports Server (NTRS)
Cecil, Daniel J.; Leppert, Kenneth, II
2014-01-01
There are two basic precipitation retrieval methods using passive microwave measurements: (1) Emission-based: Based on the tendency of liquid precipitation to cause an increase in brightness temperature (BT) primarily at frequencies below 22 GHz over a radiometrically cold background, often an ocean background (e.g., Spencer et al. 1989; Adler et al. 1991; McGaughey et al. 1996); and (2) Scattering-based: Based on the tendency of precipitation-sized ice to scatter upwelling radiation, thereby reducing the measured BT over a relatively warmer (usually land) background at frequencies generally at or above 37 GHz (e.g., Spencer et al. 1989; Smith et al. 1992; Ferraro and Marks 1995). Passive microwave measurements have also been used to detect intense convection (e.g., Spencer and Santek 1985) and for the detection of hail (e.g., Cecil 2009; Cecil and Blankenship 2012; Ferraro et al. 2014). The Global Precipitation Measurement (GPM) mission expands upon the successful Tropical Rainfall Measuring Mission program to provide global rainfall and snowfall observations every 3 hours (Hou et al. 2014). One of the instruments on board the GPM Core Observatory is the GPM Microwave Imager (GMI), a conically scanning microwave radiometer with 13 channels ranging from 10-183 GHz. Goal of this study: Determine the signatures of various hydrometeor species in terms of BTs measured at frequencies used by GMI by using data collected on 3 case days (all having intense/severe convection) during the Mid-latitude Continental Convective Clouds Experiment conducted over Oklahoma in 2011.
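As a concrete example of the scattering-based approach, the 85-GHz polarization-corrected temperature (PCT) of Spencer et al. (1989) is widely used to screen for deep convection. The sketch below applies those published coefficients to GMI-like 89 GHz channels; the 200 K flagging threshold is an illustrative choice, not a value from this study.

```python
def pct89(tb_v, tb_h):
    """Polarization-corrected temperature using the Spencer et al. (1989)
    85-GHz coefficients (commonly reused near 89 GHz). Removes the cold,
    polarized ocean-surface signal so that remaining BT depressions
    indicate scattering by precipitation-sized ice."""
    return 1.818 * tb_v - 0.818 * tb_h

def flag_deep_convection(tb_v, tb_h, threshold_k=200.0):
    """Scattering-based screen: flag a pixel as likely deep/intense
    convection when ice scattering depresses PCT below an illustrative
    threshold (values around 200 K are often used for intense storms)."""
    return pct89(tb_v, tb_h) < threshold_k
```

Over ocean, a clear pixel with strongly polarized emission yields a warm PCT, while heavily iced convection depresses PCT well below the background regardless of surface type.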
Pérez, M A
2012-12-01
Probabilistic analyses allow the effect of uncertainty in system parameters to be determined. In the literature, many researchers have investigated static loading effects on dental implants. However, the intrinsic variability and uncertainty of most of the main problem parameters are not accounted for. The objective of this research was to apply a probabilistic computational approach to predict the fatigue life of three different commercial dental implants considering the variability and uncertainty in their fatigue material properties and loading conditions. For one of the commercial dental implants, the influence of its diameter on the fatigue life performance was also studied. This stochastic technique was based on the combination of a probabilistic finite element method (PFEM) and a cumulative damage approach known as B-model. After 6 million loading cycles, local failure probabilities of 0.3, 0.4 and 0.91 were predicted for the Lifecore, Avinent and GMI implants, respectively (diameter of 3.75 mm). The influence of the diameter for the GMI implant was studied, and the results predicted a local failure probability of 0.91 and 0.1 for the 3.75 mm and 5 mm diameters, respectively. In all cases the highest failure probability was located at the upper screw-threads. Therefore, the probabilistic methodology proposed herein may be a useful tool for performing a qualitative comparison between different commercial dental implants. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
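The general idea of a probabilistic fatigue-life estimate (failure probability at a given cycle count under uncertain material properties and loads) can be illustrated with a plain Monte Carlo sketch. This is not the paper's PFEM/B-model formulation; the Basquin-type S-N relation and every distribution parameter below are hypothetical stand-ins.

```python
import math
import random

def failure_probability(n_cycles, n_samples=20000, seed=1):
    """Monte Carlo sketch of a probabilistic fatigue-life estimate.
    Fatigue strength and applied stress are sampled from assumed
    (hypothetical) distributions; a Basquin-type S-N relation
    N_f = (sigma_f / stress)**(1/b) gives cycles to failure, and the
    failure probability is the fraction of samples with N_f <= n_cycles."""
    rng = random.Random(seed)
    b = 0.1  # Basquin exponent (assumed)
    failures = 0
    for _ in range(n_samples):
        sigma_f = rng.lognormvariate(math.log(900.0), 0.08)  # MPa, assumed
        stress = max(rng.gauss(120.0, 15.0), 1.0)            # MPa, assumed
        n_fail = (sigma_f / stress) ** (1.0 / b)
        if n_fail <= n_cycles:
            failures += 1
    return failures / n_samples
```

The estimated probability rises monotonically with the cycle count, mirroring how the paper reports failure probabilities at a fixed 6 million cycles.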
Global Free-tropospheric NO2 Abundances Derived Using a Cloud Slicing Technique from AURA OMI
NASA Technical Reports Server (NTRS)
Choi, S.; Joiner, J.; Choi, Y.; Duncan, B.N.; Vasilkov, A.; Krotkov, N.; Bucsela, E.J.
2014-01-01
We derive free-tropospheric NO2 volume mixing ratios (VMRs) by applying a cloud-slicing technique to data from the Ozone Monitoring Instrument (OMI) on the Aura satellite. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud scene pressure is proportional to the NO2 VMR. In this work, we use a sample of nearby OMI pixel data from a single orbit for the linear fit. The OMI data include cloud scene pressures from the rotational-Raman algorithm and above-cloud NO2 vertical column density (VCD) (defined as the NO2 column from the cloud scene pressure to the top of the atmosphere) from a differential optical absorption spectroscopy (DOAS) algorithm. We compare OMI-derived NO2 VMRs with in situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is generally within the estimated uncertainties when appropriate data screening is applied. We then derive a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in the free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the boundary layer and then horizontally away from the source. Signatures of lightning NO2 are also shown throughout low and middle latitude regions in summer months. A profile analysis of our cloud-slicing data indicates signatures of lightning-generated NO2 in the upper troposphere. Comparison of the climatology with simulations from the Global Modeling Initiative (GMI) for cloudy conditions (cloud optical depth less than 10) shows similarities in the spatial patterns of continental pollution outflow. However, there are also some differences in the seasonal variation of free-tropospheric NO2 VMRs near highly populated regions and in areas affected by lightning-generated NOx.
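The core of the cloud-slicing relation (slope of above-cloud column versus cloud scene pressure proportional to VMR) follows from hydrostatic balance: for a constant mixing ratio x, the above-cloud column is N(p) = x p / (m_air g), so x = (dN/dp) m_air g. The sketch below is a simplified illustration of that fit, not the operational OMI algorithm, and the synthetic values are illustrative.

```python
import numpy as np

M_AIR = 28.97e-3 / 6.022e23  # mean mass of an air molecule, kg
G = 9.81                     # gravitational acceleration, m s^-2

def cloud_slice_vmr(p_cloud_pa, vcd_molec_m2):
    """Cloud-slicing sketch: fit above-cloud NO2 column (molec m^-2)
    against cloud scene pressure (Pa) over an ensemble of nearby pixels.
    By the hydrostatic relation dN/dp = x / (m_air * g), the fitted
    slope times m_air * g is the layer-mean volume mixing ratio."""
    slope = np.polyfit(p_cloud_pa, vcd_molec_m2, 1)[0]
    return slope * M_AIR * G  # mol/mol

# Synthetic check: a constant 100 pptv layer should be recovered exactly.
x_true = 100e-12                     # 100 pptv (illustrative)
p = np.linspace(40e3, 80e3, 20)      # cloud scene pressures, Pa
vcd = x_true / (M_AIR * G) * p       # exact above-cloud columns
```

With noisy real retrievals, the same fit is applied to screened ensembles of nearby pixels, and the slope uncertainty propagates directly into the VMR uncertainty.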
All-Sky Microwave Imager Data Assimilation at NASA GMAO
NASA Technical Reports Server (NTRS)
Kim, Min-Jeong; Jin, Jianjun; El Akkraoui, Amal; McCarty, Will; Todling, Ricardo; Gu, Wei; Gelaro, Ron
2017-01-01
Efforts in all-sky satellite data assimilation at the Global Modeling and Assimilation Office (GMAO) at NASA Goddard Space Flight Center have been focused on the development of GSI configurations to assimilate all-sky data from microwave imagers such as the GPM Microwave Imager (GMI) and the Global Change Observation Mission-Water (GCOM-W) Advanced Microwave Scanning Radiometer 2 (AMSR-2). The electromagnetic characteristics of these wavelengths make microwave imager channels relatively insensitive to atmospheric gases and thin ice clouds, and highly sensitive to precipitation. Therefore, GMAO's all-sky data assimilation efforts are primarily focused on utilizing these data in precipitating regions. The all-sky framework being tested at GMAO employs the GSI in a hybrid 4D-EnVar configuration of the Goddard Earth Observing System (GEOS) data assimilation system, which will be included in the next formal update of GEOS. This article provides an overview of the development of all-sky radiance assimilation in GEOS, including some performance metrics. In addition, various projects underway at GMAO designed to enhance the all-sky implementation will be introduced.
Sensitivity of polar ozone recovery predictions of the GMI 3D CTM to GCM and DAS dynamics
NASA Astrophysics Data System (ADS)
Considine, D.; Connell, P.; Strahan, S.; Douglass, A.; Rotman, D.
2003-04-01
The Global Modeling Initiative (GMI) 3-D chemistry and transport model has been used to generate 2 simulations of the 1995-2030 time period. The 36-year simulations both used the source gas and aerosol boundary conditions of the 2002 World Meteorological Organization assessment exercise MA2. The first simulation was based on a single year of meteorological data (winds, temperatures) generated by the new Goddard Space Flight Center "Finite Volume" General Circulation Model (FVGCM), repeated for each year of the simulation. The second simulation used a year of meteorological data generated by a new data assimilation system based on the FVGCM (FVDAS), using observations for July 1, 1999 - June 30, 2000. All other aspects of the two simulations were identical. The increase in vortex-averaged south polar springtime ozone concentrations in the lower stratosphere over the course of the simulations is more robust in the simulation driven by the GCM meteorological data than in the simulation driven by DAS winds. At the same time, the decrease in estimated chemical springtime ozone loss is similar. We thus attribute the differences between the two simulations to differences in the representations of polar dynamics which reduce the sensitivity of the simulation driven by DAS winds to changes in vortex chemistry. We also evaluate the representations in the two simulations of trace constituent distributions in the current polar lower stratosphere using various observations. In these comparisons the GCM-based simulation often is in better agreement with the observations than the DAS-based simulation.
Chemical and Dynamical Impacts of Stratospheric Sudden Warmings on Arctic Ozone Variability
NASA Technical Reports Server (NTRS)
Strahan, S. E.; Douglass, A. R.; Steenrod, S. D.
2016-01-01
We use the Global Modeling Initiative (GMI) chemistry and transport model with Modern-Era Retrospective Analysis for Research and Applications (MERRA) meteorological fields to quantify heterogeneous chemical ozone loss in Arctic winters 2005-2015. Comparisons to Aura Microwave Limb Sounder N2O and O3 observations show the GMI simulation credibly represents the transport processes and net heterogeneous chemical loss necessary to simulate Arctic ozone. We find that the maximum seasonal ozone depletion varies linearly with the number of cold days and with wave driving (eddy heat flux) calculated from MERRA fields. We use this relationship and MERRA temperatures to estimate seasonal ozone loss from 1993 to 2004, when inorganic chlorine levels were in the same range as during the Aura period. Using these loss estimates and the observed March mean 63-90°N column O3, we quantify the sensitivity of the ozone dynamical resupply to wave driving, separating it from the sensitivity of ozone depletion to wave driving. The results show that about 2/3 of the deviation of the observed March Arctic O3 from an assumed climatological mean is due to variations in O3 resupply and 1/3 is due to depletion. Winters with a stratospheric sudden warming (SSW) before mid-February have about 1/3 the depletion of winters without one and export less depletion to the midlatitudes. However, a larger effect on the spring midlatitude ozone comes from dynamical differences between warm and cold Arctic winters, which can mask or add to the impact of exported depletion.
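The linear relationship described above — seasonal ozone depletion as a function of cold days and eddy heat flux — amounts to a two-predictor least-squares fit that can then be applied to winters outside the training period. The coefficients, winter values, and sign conventions below are invented for illustration; the study's actual regression is derived from the GMI/MERRA simulation.

```python
import numpy as np

# Hypothetical training winters: predictors are the number of cold days
# and the eddy heat flux; the response is seasonal ozone depletion (DU).
cold_days = np.array([60.0, 40.0, 80.0, 20.0, 70.0])
heat_flux = np.array([10.0, 14.0, 8.0, 16.0, 9.0])   # K m s^-1 (illustrative)
# Synthetic "observed" depletion generated from invented coefficients.
depletion = 0.8 * cold_days - 2.0 * heat_flux + 5.0

# Fit depletion = a*cold_days + b*heat_flux + c by least squares.
A = np.column_stack([cold_days, heat_flux, np.ones_like(cold_days)])
coef, *_ = np.linalg.lstsq(A, depletion, rcond=None)

# Apply the fitted relation to a winter outside the training sample,
# analogous to estimating 1993-2004 losses from MERRA fields.
est = coef @ np.array([50.0, 12.0, 1.0])
print(est)  # ~21 DU with these synthetic coefficients
```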
NASA Technical Reports Server (NTRS)
Ziemke, J. R.; Olsen, M. A.; Witte, J. C.; Douglass, A. R.; Strahan, S. E.; Wargan, K.; Liu, X.; Schoeberl, M. R.; Yang, K.; Kaplan, T. B.;
2013-01-01
Measurements from the Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS), both onboard the Aura spacecraft, have been used to produce daily global maps of column and profile ozone since August 2004. Here we compare and evaluate three strategies to obtain daily maps of tropospheric and stratospheric ozone from OMI and MLS measurements: trajectory mapping, direct profile retrieval, and data assimilation. Evaluation is based upon an assessment that includes validation using ozonesondes and comparisons with the Global Modeling Initiative (GMI) chemical transport model (CTM). We investigate applications of the three ozone data products from near-decadal and inter-annual timescales to day-to-day case studies. Zonally averaged inter-annual changes in tropospheric ozone from all of the products in any latitude range are of the order of 1-2 Dobson Units, while changes (increases) over the 8-year Aura record investigated vary by approximately 2-4 Dobson Units. It is demonstrated that all of the ozone products can measure and monitor exceptional tropospheric ozone events, including major forest fire and pollution transport events. Stratospheric ozone during the Aura record has several anomalous inter-annual events, including stratospheric warming split events in the Northern Hemisphere extra-tropics that are well captured using the data assimilation ozone profile product. Data assimilation, with continuous daily global coverage and vertical ozone profile information, is the best of the three strategies at generating a global tropospheric and stratospheric ozone product for science applications.
The Global Precipitation Measurement Mission
NASA Astrophysics Data System (ADS)
Jackson, Gail
2014-05-01
The Global Precipitation Measurement (GPM) mission's Core satellite, scheduled for launch at the end of February 2014, is well designed to estimate precipitation from 0.2 to 110 mm/hr and to detect falling snow. Knowing where and how much rain and snow falls globally is vital to understanding how weather and climate impact both our environment and Earth's water and energy cycles, including effects on agriculture, fresh water availability, and responses to natural disasters. The design of the GPM Core Observatory is an advancement of the Tropical Rainfall Measuring Mission (TRMM)'s highly successful rain-sensing package [3]. The cornerstone of the GPM mission is the deployment of a Core Observatory in a unique 65° non-Sun-synchronous orbit to serve as a physics observatory and a calibration reference to improve precipitation measurements by a constellation of 8 or more dedicated and operational, U.S. and international passive microwave sensors. The Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a multi-channel (10-183 GHz) GPM Microwave Imager (GMI). The DPR will provide measurements of 3-D precipitation structures and microphysical properties, which are key to achieving a better understanding of precipitation processes and improving retrieval algorithms for passive microwave radiometers. The combined use of DPR and GMI measurements will place greater constraints on possible solutions to radiometer retrievals to improve the accuracy and consistency of precipitation retrievals from all constellation radiometers. Furthermore, since light rain and falling snow account for a significant fraction of precipitation occurrence in middle and high latitudes, the GPM instruments extend the capabilities of the TRMM sensors to detect falling snow, measure light rain, and provide, for the first time, quantitative estimates of microphysical properties of precipitation particles.
The GPM Core Observatory was developed and tested at NASA Goddard Space Flight Center. It was shipped to Japan in November 2012 for launch on a Japanese H-IIA rocket from Tanegashima Island, Japan. The launch has been officially scheduled for 1:07 p.m. to 3:07 p.m. EST Thursday, February 27, 2014 (3:07 a.m. to 5:07 a.m. JST Friday, February 28). The day that the GPM Core was shipped to Japan was the day that GPM's Project Scientist, Dr. Arthur Hou, passed away after a year-long battle with cancer. Dr. Hou truly made GPM a global effort with a global team. He excelled in providing scientific oversight for achieving GPM's many science objectives and application goals, including delivering high-resolution precipitation data in near real time for better understanding, monitoring and prediction of global precipitation systems and high-impact weather events such as hurricanes. Dr. Hou successfully forged international partnerships to collect and validate space-borne measurements of precipitation around the globe. He served as a professional mentor to numerous junior and mid-level scientists. His presence, leadership, generous personality, and the example he set for all of us as a true "team-player" will be greatly missed. The GPM mission, Arthur's role as GPM Project Scientist, and, if available by the end of April 2014 (two months after launch), early imagery from GPM's precipitation retrievals will be presented.
Global Precipitation Measurement (GPM) Mission: Overview and Status
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.
2012-01-01
The Global Precipitation Measurement (GPM) Mission is an international satellite mission specifically designed to unify and advance precipitation measurements from a constellation of research and operational microwave sensors. NASA and JAXA will deploy a Core Observatory in 2014 to serve as a reference satellite to unify precipitation measurements from the constellation of sensors. The GPM Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a conical-scanning multi-channel (10-183 GHz) GPM Microwave Imager (GMI). The DPR will be the first dual-frequency radar in space to provide not only measurements of 3-D precipitation structures but also quantitative information on microphysical properties of precipitating particles. The DPR and GMI measurements will together provide a database that relates vertical hydrometeor profiles to multi-frequency microwave radiances over a variety of environmental conditions across the globe. This combined database will be used as a common transfer standard for improving the accuracy and consistency of precipitation retrievals from all constellation radiometers. For global coverage, GPM relies on existing satellite programs and new mission opportunities from a consortium of partners through bilateral agreements with either NASA or JAXA. Each constellation member may have its unique scientific or operational objectives but contributes microwave observations to GPM for the generation and dissemination of unified global precipitation data products. In addition to the DPR and GMI on the Core Observatory, the baseline GPM constellation consists of the following sensors: (1) Special Sensor Microwave Imager/Sounder (SSMIS) instruments on the U.S. 
Defense Meteorological Satellite Program (DMSP) satellites, (2) the Advanced Microwave Scanning Radiometer-2 (AMSR-2) on the GCOM-W1 satellite of JAXA, (3) the Multi-Frequency Microwave Scanning Radiometer (MADRAS) and the multi-channel microwave humidity sounder (SAPHIR) on the French-Indian MeghaTropiques satellite, (4) the Microwave Humidity Sounder (MHS) on the National Oceanic and Atmospheric Administration (NOAA)-19, (5) MHS instruments on MetOp satellites launched by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT), (6) the Advanced Technology Microwave Sounder (ATMS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), and (7) ATMS instruments on the NOAA-NASA Joint Polar Satellite System (JPSS) satellites. Data from Chinese and Russian microwave radiometers may also become available through international collaboration under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). The current generation of global rainfall products combines observations from a network of uncoordinated satellite missions using a variety of merging techniques. GPM will provide "next-generation" precipitation products characterized by: (1) more accurate instantaneous precipitation estimate (especially for light rain and cold-season solid precipitation), (2) intercalibrated microwave brightness temperatures from constellation radiometers within a consistent framework, and (3) unified precipitation retrievals from constellation radiometers using a common a priori hydrometeor database constrained by combined radar/radiometer measurements provided by the GPM Core Observatory. GPM is a science mission with integrated applications goals. GPM will provide a key measurement to improve understanding of global water cycle variability and freshwater availability in a changing climate. 
The DPR and GMI measurements will offer insights into the 3-dimensional structures of hurricanes and midlatitude storms, the microphysical properties of precipitating particles, and the latent heat associated with precipitation processes. The GPM mission will also make data available in near real time (within 3 hours of observation) for societal applications ranging from position fixing of storm centers, numerical weather prediction, flood forecasting, freshwater management, landslide warning, and crop prediction to the tracking of water-borne diseases. An overview of the GPM mission design, retrieval strategy, ground validation activities, and international science collaboration will be presented.
The GEOS Chemistry Climate Model: Implications of Climate Feedbacks on Ozone Depletion and Recovery
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Pawson, Steven; Douglass, Anne R.; Newman, Paul A.; Kawa, S. Randy; Nielsen, J. Eric; Rodriquez, Jose; Strahan, Susan; Oman, Luke; Waugh, Darryn
2008-01-01
The Goddard Earth Observing System Chemistry Climate Model (GEOS CCM) has been developed by combining the atmospheric chemistry and transport modules developed over the years at Goddard and the GEOS general circulation model, also developed at Goddard. The first version of the model was used in the CCMVal intercomparison exercises that contributed to the 2006 WMO/UNEP Ozone Assessment. The second version incorporates the updated version of the GCM (GEOS 5) and will be used for the next round of CCMVal evaluations and the 2010 Ozone Assessment. The third version, now under development, incorporates the combined stratosphere and troposphere chemistry package developed under the Global Modeling Initiative (GMI). We will show comparisons to past observations indicating that the model represents the ozone trends over the past 30 years. We will also show the basic temperature, composition, and dynamical structure of the simulations. We will further show projections into the future. We will show results from an ensemble of transient and time-slice simulations, including simulations with fixed 1960 chlorine, simulations with a best-guess scenario (Al), and simulations with extremely high chlorine loadings. We will discuss planned extensions of the model to include emission-based boundary conditions for both anthropogenic and biogenic compounds.
Asymmetric giant magnetoimpedance effect created by micro magnets
NASA Astrophysics Data System (ADS)
Atalay, S.; Izgi, T.; Buznikov, N. A.; Kolat, V. S.
2018-05-01
The asymmetric giant magnetoimpedance (AGMI) effect has been investigated in as-prepared and current-annealed amorphous (Co0.9Fe0.05Ni0.05)75Si15B10 ribbons. The asymmetry was created by micro magnets. Different numbers of magnets were used, and it was found that as the number of magnets increases, the shift in the AGMI curves increases. When two micro magnets were placed 1 cm from the ends of the ribbon, a distortion in the two-peak shape of the GMI curve was observed. In the high-frequency range, a linear change in the AGMI was observed for the current-annealed sample.
Extremely Low Passive Microwave Brightness Temperatures Due to Thunderstorms
NASA Technical Reports Server (NTRS)
Cecil, Daniel J.
2015-01-01
Extreme events by their nature fall outside the bounds of routine experience. With imperfect or ambiguous measuring systems, it is appropriate to question whether an unusual measurement represents an extreme event or is the result of instrument errors or other sources of noise. About three weeks after the Tropical Rainfall Measuring Mission (TRMM) satellite began collecting data in December 1997, a thunderstorm was observed over northern Argentina with 85 GHz brightness temperatures below 50 K and 37 GHz brightness temperatures below 70 K (Zipser et al. 2006). These values are well below what had previously been observed from satellite sensors with lower resolution. The 37 GHz brightness temperatures are also well below those measured by TRMM for any other storm in the subsequent 16 years. Without corroborating evidence, it would be natural to suspect a problem with the instrument, or perhaps an irregularity with the platform during the first weeks of the satellite mission. Automated quality control flags or other procedures in retrieval algorithms could treat these measurements as errors, because they fall outside the expected bounds. But the TRMM satellite also carries a radar and a lightning sensor, both confirming the presence of an intense thunderstorm. The radar recorded 40+ dBZ reflectivity up to about 19 km altitude. More than 200 lightning flashes per minute were recorded. That same storm's 19 GHz brightness temperatures below 150 K would normally be interpreted as the result of a low-emissivity water surface (e.g., a lake, or flood waters) if not for the simultaneous measurements of such intense convection. This paper will examine records from TRMM and related satellite sensors including SSMI, AMSR-E, and the new GMI to find the strongest signatures resulting from thunderstorms and to distinguish them from sources of noise. The lowest brightness temperatures resulting from thunderstorms as seen by TRMM have been in Argentina in November and December.
For SSMI sensors carried on five DMSP satellites examined so far, the lowest thunderstorm-related brightness temperatures have been from Argentina in November - December and from Minnesota in June-July. The Minnesota cases were associated with spotter reports of large hail, significant severe wind, and tornadoes. Those locations have the record-holders for each satellite. The lowest AMSR-E 36.5 GHz brightness temperatures associated with deep convection have been in Argentina; the lowest 89.0 GHz brightness temperatures were from Typhoon Bolaven in the Philippine Sea. This paper will show examples of cases with the lowest brightness temperatures, and map the locations of these and other storms with brightness temperatures nearly as low. The study is largely motivated by the new GMI sensor on the Global Precipitation Mission core satellite, launched in February 2014, with its high resolution expected to reveal unprecedented low brightness temperatures when extreme events are encountered.
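The quality-control dilemma described above — an extreme brightness temperature that is either a record-setting storm or an instrument artifact — suggests a rule that rejects out-of-bounds values only when no independent sensor corroborates intense convection. The thresholds and function below are illustrative assumptions, not operational TRMM/GPM values.

```python
def classify_extreme_tb(tb_37ghz_k, flash_rate_per_min, max_dbz):
    """Toy quality-control rule: a 37 GHz brightness temperature far
    below climatological bounds is kept as a valid extreme only if
    independent lightning or radar data corroborate intense convection;
    otherwise it is flagged as a possible instrument error.
    All thresholds here are illustrative."""
    if tb_37ghz_k >= 150.0:
        return "normal"
    # Extreme value: check for corroborating evidence of a thunderstorm.
    if flash_rate_per_min > 50.0 or max_dbz > 40.0:
        return "extreme_valid"   # intense convection confirmed, keep
    return "suspect"             # no corroboration, flag for review

# The 1997 Argentina storm: ~70 K at 37 GHz, >200 flashes/min, 40+ dBZ aloft.
print(classify_extreme_tb(70.0, 200.0, 45.0))  # extreme_valid
# The same brightness temperature over a calm lake-like scene:
print(classify_extreme_tb(70.0, 0.0, 10.0))    # suspect
```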
Magnetic microfluidic system for isolation of single cells
NASA Astrophysics Data System (ADS)
Mitterboeck, Richard; Kokkinis, Georgios; Berris, Theocharis; Keplinger, Franz; Giouroudi, Ioanna
2015-06-01
This paper presents the design and realization of a compact, portable and cost-effective microfluidic system for isolation and detection of rare circulating tumor cells (CTCs) in suspension. The innovative aspect of the proposed isolation method is that it utilizes superparamagnetic particles (SMPs) to label CTCs and then isolate them using microtraps with integrated current-carrying microconductors. The magnetically labeled and trapped CTCs can then be detected by integrated magnetic microsensors, e.g. giant magnetoresistive (GMR) or giant magnetoimpedance (GMI) sensors. The channel and trap dimensions are optimized to protect the cells from shear stress and achieve high trapping efficiency. These intact single CTCs can then be used for additional analysis, testing and patient-specific drug screening. Being able to analyze the CTCs' metastasis-driving capabilities at the single-cell level is considered of great importance for developing patient-specific therapies. Experiments showed that it is possible to capture single labeled cells in multiple microtraps and hold them there without permanent electric current and magnetic field.
Inter-comparison of the EUMETSAT H-SAF and NASA PPS precipitation products over Western Europe.
NASA Astrophysics Data System (ADS)
Kidd, Chris; Panegrossi, Giulia; Ringerud, Sarah; Stocker, Erich
2017-04-01
The development of precipitation retrieval techniques utilising passive microwave satellite observations has achieved a good degree of maturity through the use of physically-based schemes. The DMSP Special Sensor Microwave Imager/Sounder (SSMIS) has been the mainstay of passive microwave observations over the last 13 years, forming the basis of many satellite precipitation products, including NASA's Precipitation Processing System (PPS) and EUMETSAT's Hydrological Satellite Application Facility (H-SAF). The NASA PPS product utilises the Goddard Profiling (GPROF; currently 2014v2-0) retrieval scheme that provides a physically consistent retrieval through the use of coincident active/passive microwave retrievals from the Global Precipitation Measurement (GPM) mission core satellite. The GPM combined algorithm retrieves hydrometeor profiles optimized for consistency with both the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI); these profiles form the basis of the GPROF database, which can be utilized for any constellation radiometer within the framework of a Bayesian retrieval scheme. The H-SAF product (PR-OBS-1 v1.7) is based on a physically-based Bayesian technique in which the a priori information is provided by a Cloud Dynamic Radiation Database (CDRD). Meteorological parameter constraints, derived from synthetic dynamical-thermodynamical-hydrological meteorological profile variables, are combined with multi-hydrometeor microphysical profiles and multispectral PMW brightness temperature vectors in a specialized a priori knowledge database underpinning and guiding the algorithm's Bayesian retrieval solver. This paper will present the results of an inter-comparison of the NASA PPS GPROF and EUMETSAT H-SAF PR-OBS-1 products over Western Europe for the period from 1 January 2015 through 31 December 2016. Surface radar data are derived from the UKMO Nimrod European radar product, available at 15-minute/5-km resolution.
Initial results show that overall the correlations between the two satellite precipitation products and the surface radar precipitation estimates are similar, particularly for cases where there is extensive precipitation; however, the H-SAF product tends to have poorer correlations in situations where rain is light or limited in extent. Similarly, RMSEs for the GPROF scheme tend to be smaller than those of the H-SAF retrievals. The difference in performance can be traced to the identification of precipitation: the GPROF2014v2-0 scheme overestimates the occurrence and extent of the precipitation, generating a significant amount of light precipitation, while the H-SAF scheme has a lower precipitation threshold of about 0.25 mm h-1 and overestimates moderate and higher precipitation intensities.
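An inter-comparison of this kind reduces to a few verification statistics computed against the radar reference. A minimal sketch, with invented rain rates and a simple lower detection threshold standing in for the kind of limit noted for the H-SAF scheme:

```python
import math

def verify(est, obs, min_rate=0.0):
    """Correlation and RMSE between satellite precipitation estimates
    and radar 'truth', after zeroing estimates below a detection
    threshold (mimicking a scheme with a lower precipitation limit).
    All values here are illustrative, not product statistics."""
    est = [0.0 if e < min_rate else e for e in est]
    n = len(obs)
    me, mo = sum(est) / n, sum(obs) / n
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    var_e = sum((e - me) ** 2 for e in est)
    var_o = sum((o - mo) ** 2 for o in obs)
    corr = cov / math.sqrt(var_e * var_o)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / n)
    return corr, rmse

radar = [0.0, 0.1, 0.5, 2.0, 5.0]   # mm/h, hypothetical radar reference
sat   = [0.1, 0.2, 0.6, 1.8, 4.5]   # mm/h, hypothetical satellite retrieval
corr0, rmse0 = verify(sat, radar)          # no detection threshold
corrT, rmseT = verify(sat, radar, 0.25)    # with a 0.25 mm/h lower limit
```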
NASA Technical Reports Server (NTRS)
Rodriquez, J. M.; Yoshida, Y.; Duncan, B. N.; Bucsela, E. J.; Gleason, J. F.; Allen, D.; Pickering, K. E.
2007-01-01
We present simulations of the tropospheric composition for the years 2004 and 2005, carried out by the GMI Combined Stratosphere-Troposphere (Combo) model, at a resolution of 2° x 2.5°. The model includes a new parameterization of lightning sources of NO(x) which is coupled to the cloud mass fluxes in the adopted meteorological fields. These simulations use two different sets of input meteorological fields: a) late-look assimilated fields from the Global Modeling and Assimilation Office (GMAO) GEOS-4 system and b) 12-hour forecast fields initialized with the assimilated data. Comparison of the forecast to the assimilated fields indicates that the forecast fields exhibit less vigorous convection and yield tropical precipitation fields in better agreement with observations. Since these simulations include a complete representation of the stratosphere, they provide realistic stratosphere-troposphere fluxes of O3 and NO(y). Furthermore, the stratospheric contribution to total columns of different tropospheric species can be subtracted in a consistent fashion, and the lightning production of NO(y) will depend on the adopted meteorological field. We concentrate here on the simulated tropospheric columns of NO2, and compare them to observations by the OMI instrument for the years 2004 and 2005. The comparison is used to address these questions: a) is there a significant difference in the agreement/disagreement between simulations for these two different meteorological fields, and if so, what causes these differences?; b) how do the simulations compare to OMI observations, and does this comparison indicate an improvement in simulations with the forecast fields?; c) what are the implications of these simulations for our understanding of the NO2 emissions over continental polluted regions?
Meyer, Damien; Cunnac, Sébastien; Guéneron, Mareva; Declercq, Céline; Van Gijsegem, Frédérique; Lauber, Emmanuelle; Boucher, Christian; Arlat, Matthieu
2006-01-01
Ralstonia solanacearum GMI1000 is a gram-negative plant pathogen which contains an hrp gene cluster which codes for a type III protein secretion system (TTSS). We identified two novel Hrp-secreted proteins, called PopF1 and PopF2, which display similarity to one another and to putative TTSS translocators, HrpF and NopX, from Xanthomonas spp. and rhizobia, respectively. They also show similarities with TTSS translocators of the YopB family from animal-pathogenic bacteria. Both popF1 and popF2 belong to the HrpB regulon and are required for the interaction with plants, but PopF1 seems to play a more important role in virulence and hypersensitive response (HR) elicitation than PopF2 under our experimental conditions. PopF1 and PopF2 are not necessary for the secretion of effector proteins, but they are required for the translocation of AvrA avirulence protein into tobacco cells. We conclude that PopF1 and PopF2 are type III translocators belonging to the HrpF/NopX family. The hrpF gene of Xanthomonas campestris pv. campestris partially restored HR-inducing ability to popF1 popF2 mutants of R. solanacearum, suggesting that translocators of R. solanacearum and Xanthomonas are functionally conserved. Finally, R. solanacearum strain UW551, which does not belong to the same phylotype as GMI1000, also possesses two putative translocator proteins. However, although one of these proteins is clearly related to PopF1 and PopF2, the other seems to be different and related to NopX proteins, thus showing that translocators might be variable in R. solanacearum. PMID:16788199
NASA Astrophysics Data System (ADS)
Murray, L. T.; Strode, S. A.; Fiore, A. M.; Lamarque, J. F.; Prather, M. J.; Thompson, C. R.; Peischl, J.; Ryerson, T. B.; Allen, H.; Blake, D. R.; Crounse, J. D.; Brune, W. H.; Elkins, J. W.; Hall, S. R.; Hintsa, E. J.; Huey, L. G.; Kim, M. J.; Moore, F. L.; Ullmann, K.; Wennberg, P. O.; Wofsy, S. C.
2017-12-01
Nitrogen oxides (NOx ≡ NO + NO2) in the background atmosphere are critical precursors for the formation of tropospheric ozone and OH, thereby exerting strong influence on surface air quality, reactive greenhouse gases, and ecosystem health. The impact of NOx on atmospheric composition and climate is sensitive to the relative partitioning of reactive nitrogen between NOx and longer-lived reservoir species of the total reactive nitrogen family (NOy) such as HNO3, HNO4, PAN and organic nitrates (RONO2). Unfortunately, global chemistry-climate models (CCMs) and chemistry-transport models (CTMs) have historically disagreed in their reactive nitrogen budgets outside of polluted continental regions, and we have lacked in situ observations with which to evaluate them. Here, we compare and evaluate the NOy budget of six global models (GEOS-Chem CTM, GFDL AM3 CCM, GISS E2.1 CCM, GMI CTM, NCAR CAM CCM, and UCI CTM) using new observations of total reactive nitrogen and its member species from the NASA Atmospheric Tomography (ATom) mission. ATom has now completed two of its four planned deployments sampling the remote Pacific and Atlantic basins of both hemispheres with a comprehensive suite of measurements for constraining reactive photochemistry. All six models have simulated conditions climatologically similar to the deployments. The GMI and GEOS-Chem CTMs have in addition performed hindcast simulations using the MERRA-2 reanalysis, and have been sampled along the flight tracks. We evaluate the performance of the models relative to the observations, and identify factors contributing to their disparate behavior using known differences in model oxidation mechanisms, heterogeneous loss pathways, lightning and surface emissions, and physical loss processes.
Current Operational Use of and Future Needs for Microwave Imagery at NOAA
NASA Astrophysics Data System (ADS)
Goldberg, M.; McWilliams, G.; Chang, P.
2017-12-01
There are many applications of microwave imagery served by NOAA's operational products and services. They include the use of microwave imagery and derived products for monitoring precipitation, tropical cyclones, sea surface temperature under all weather conditions, wind speed, snow and ice cover, and even soil moisture. All of NOAA's line offices, including the National Weather Service, National Ocean Service, National Marine Fisheries Service, and Office of Oceanic and Atmospheric Research, rely on microwave imagery. Currently, microwave imagery products used by NOAA come from a constellation of satellites that includes the Air Force's Special Sensor Microwave Imager Sounder (SSMIS), the Japanese Advanced Microwave Scanning Radiometer (AMSR), the Navy's WindSat, and NASA's Global Precipitation Measurement (GPM) Microwave Imager (GMI). Follow-on missions for SSMIS are very uncertain, JAXA approval for a follow-on to AMSR2 is still pending, and GMI is a research instrument (lacking high-latitude coverage) with no commitment for operational continuity. Operational continuity refers to a series of satellites, so that when one satellite reaches its design life a new satellite is launched. EUMETSAT has made a commitment to fly a microwave imager in the mid-morning orbit. China and Russia have demonstrated on-orbit microwave imagers. Of utmost importance to NOAA, however, is the quality, access, and latency of the data. This presentation will focus on NOAA's current requirements for microwave imagery data, which, for the most part, are being fulfilled by AMSR2, SSMIS, and WindSat. It will include examples of products and applications of microwave imagery at NOAA. We will also discuss future needs, especially for improved temporal resolution, which hopefully can be met by an international constellation of microwave imagers. Finally, we will discuss what we are doing to address the potential gap in imagery.
Khalifa, J; Ouali, M; Chaltiel, L; Le Guellec, S; Le Cesne, A; Blay, J-Y; Cousin, P; Chaigneau, L; Bompas, E; Piperno-Neumann, S; Bui-Nguyen, B; Rios, M; Delord, J-P; Penel, N; Chevreau, C
2015-10-15
Advanced malignant solitary fibrous tumors (SFTs) are rare soft-tissue sarcomas with a poor prognosis. Several treatment options have been reported, but with uncertain rates of efficacy. Our aim is to describe the activity of trabectedin in a retrospective, multi-center French series of patients with SFTs. Patients were mainly identified through the French RetrospectYon database and were treated between January 2008 and May 2013. Trabectedin was administered at an initial dose of 1.5 mg/m² every 3 weeks. The best tumor response was assessed according to the Response Evaluation Criteria In Solid Tumors (RECIST) 1.1. The Kaplan-Meier method was used to estimate median progression-free survival (PFS) and overall survival (OS). The growth-modulation index (GMI) was defined as the ratio between the time to progression with trabectedin (TTPn) and the TTP with the immediately prior line of treatment (TTPn-1). Eleven patients treated with trabectedin for advanced SFT were identified. Trabectedin had been used as second-line treatment in 8 patients (72.7%) and as at least third-line therapy in a further 3 (27.3%). The best RECIST response was a partial response (PR) in 1 patient (9.1%) and stable disease (SD) in 8 patients (72.7%). The disease-control rate (DCR = PR + SD) was 81.8%. After a median follow-up of 29.2 months, the median PFS was 11.6 months (95% CI = 2.0-15.2 months) and the median OS was 22.3 months (95% CI = 9.1 months; not reached). The median GMI was 1.49 (range: 0.11-4.12). Trabectedin is a very promising treatment for advanced SFTs. Further investigations are needed.
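The growth-modulation index defined above is the simple ratio TTPn/TTPn-1, with GMI > 1 indicating that the current line slowed progression relative to the previous one. A minimal sketch; the function name and patient values are illustrative, not data from the series:

```python
def growth_modulation_index(ttp_current, ttp_prior):
    """Growth-modulation index: ratio of time-to-progression on the
    current line of therapy (here trabectedin, TTPn) to that on the
    immediately preceding line (TTPn-1). GMI > 1 suggests the new line
    slowed progression relative to the prior one."""
    if ttp_prior <= 0:
        raise ValueError("prior TTP must be positive")
    return ttp_current / ttp_prior

# Hypothetical patient: 12 months on trabectedin vs 8 months on the prior line.
print(growth_modulation_index(12.0, 8.0))  # 1.5
```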
Global snowfall: A combined CloudSat, GPM, and reanalysis perspective.
NASA Astrophysics Data System (ADS)
Milani, Lisa; Kulie, Mark S.; Skofronick-Jackson, Gail; Munchak, S. Joseph; Wood, Norman B.; Levizzani, Vincenzo
2017-04-01
Quantitative global snowfall estimates derived from multi-year data records will be presented to highlight recent advances in high latitude precipitation retrievals using spaceborne observations. More specifically, the analysis features the 2006-2016 CloudSat Cloud Profiling Radar (CPR) and the 2014-2016 Global Precipitation Measurement (GPM) Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR) observational datasets and derived products. The ERA-Interim reanalysis dataset is also used to define the meteorological context and an independent combined modeling/observational evaluation dataset. An overview is first provided of CloudSat CPR-derived results that have stimulated significant recent research regarding global snowfall, including seasonal analyses of unique snowfall modes. GMI and DPR global annual snowfall retrievals are then evaluated against the CloudSat estimates to highlight regions where the datasets provide both consistent and diverging snowfall estimates. A hemispheric seasonal analysis for both datasets will also be provided. These comparisons aim at providing a unified global snowfall characterization that leverages the respective instruments' strengths. Attention will also be devoted to regions around the globe that experience unique snowfall modes. For instance, CloudSat has demonstrated an ability to effectively discern snowfall produced by shallow cumuliform cloud structures (e.g., lake/ocean-induced convective snow produced by air/water interactions associated with seasonal cold air outbreaks). The CloudSat snowfall database also reveals prevalent seasonal shallow cumuliform snowfall trends over climate-sensitive regions like the Greenland Ice Sheet. Other regions with unique snowfall modes, such as the US East Coast winter storm track zone that experiences intense snowfall rates directly associated with strong low pressure systems, will also be highlighted to demonstrate GPM's observational effectiveness.
Linkages between CloudSat and GPM global snowfall analyses and independent ERA-Interim datasets will also be presented as a final evaluation exercise.
NASA Astrophysics Data System (ADS)
Jacobson, Mark Z.
2002-10-01
Under the 1997 Kyoto Protocol, no control of black carbon (BC) was considered. Here, it is found, through simulations in which 12 identifiable effects of aerosol particles on climate are treated, that any emission reduction of fossil-fuel (f.f.) particulate BC plus associated organic matter (OM) may slow global warming more than may any emission reduction of CO2 or CH4 for a specific period. When all f.f. BC + OM and anthropogenic CO2 and CH4 emissions are eliminated together, the period is 25-100 years. It is also estimated that historical net global warming can be attributed roughly to greenhouse gas plus f.f. BC + OM warming minus substantial cooling by other particles. Eliminating all f.f. BC + OM could eliminate 20-45% of net warming (8-18% of total warming before cooling is subtracted out) within 3-5 years if no other change occurred. Reducing CO2 emissions by a third would have the same effect, but after 50-200 years. Finally, diesel cars emitting continuously under the most recent U.S. and E.U. particulate standards (0.08 g/mi; 0.05 g/km) may warm climate per distance driven over the next 100+ years more than equivalent gasoline cars. Thus, fuel and carbon tax laws that favor diesel appear to promote global warming. Toughening vehicle particulate emission standards by a factor of 8 (0.01 g/mi; 0.006 g/km) does not change this conclusion, although it shortens the period over which diesel cars warm to 13-54 years. Although control of BC + OM can slow warming, control of greenhouse gases is necessary to stop warming. Reducing BC + OM will not only slow global warming but also improve human health.
Trace Gas/Aerosol Interactions and GMI Modeling Support
NASA Technical Reports Server (NTRS)
Penner, Joyce E.; Liu, Xiaohong; Das, Bigyani; Bergmann, Dan; Rodriquez, Jose M.; Strahan, Susan; Wang, Minghuai; Feng, Yan
2005-01-01
Current global aerosol models use different physical and chemical schemes and parameters, different meteorological fields, and often different emission sources. Since the physical and chemical parameterization schemes are often tuned to obtain results that are consistent with observations, it is difficult to assess the true uncertainty due to meteorology alone. Under the framework of the NASA Global Modeling Initiative (GMI), the differences and uncertainties in aerosol simulations (for sulfate, organic carbon, black carbon, dust and sea salt) solely due to different meteorological fields are analyzed and quantified. Three meteorological datasets available from the NASA DAO GCM, the GISS-II' GCM, and the NASA finite volume GCM (FVGCM) are used to drive the same aerosol model. The global sulfate and mineral dust burdens with FVGCM fields are 40% and 20% less than those with DAO and GISS fields, respectively, due to its heavier rainfall. Meanwhile, the sea salt burden predicted with FVGCM fields is 56% and 43% higher than those with DAO and GISS, respectively, due to its stronger convection, especially over the Southern Hemisphere oceans. Sulfate concentrations at the surface in the Northern Hemisphere extratropics and in the middle to upper troposphere differ by more than a factor of 3 between the three meteorological datasets. The agreement between model calculated and observed aerosol concentrations in the industrial regions (e.g., North America and Europe) is quite similar for all three meteorological datasets. Away from the source regions, however, the comparisons with observations differ greatly for DAO, FVGCM and GISS, and the performance of the model using different datasets varies largely depending on sites and species. Global annual average aerosol optical depth at 550 nm is 0.120-0.131 for the three meteorological datasets.
Multidecadal Changes in the UTLS Ozone from the MERRA-2 Reanalysis and the GMI Chemistry Model
NASA Technical Reports Server (NTRS)
Wargan, Krzysztof; Orbe, Clara; Pawson, Steven; Ziemke, Jerald R.; Oman, Luke; Olsen, Mark; Coy, Lawrence; Knowland, Emma
2018-01-01
Long-term changes of ozone in the UTLS (Upper Troposphere / Lower Stratosphere) reflect the response to decreases in the stratospheric concentrations of ozone-depleting substances as well as changes in the stratospheric circulation induced by climate change. To date, studies of UTLS ozone changes and variability have relied mainly on satellite and in-situ observations as well as chemistry-climate model simulations. By comparison, the potential of reanalysis ozone data remains relatively untapped. This is despite evidence from recent studies, including detailed analyses conducted under the SPARC (Stratosphere-troposphere Processes And their Role in Climate) Reanalysis Intercomparison Project (S-RIP), that demonstrate that stratospheric ozone fields from modern atmospheric reanalyses exhibit good agreement with independent data while delineating issues related to inhomogeneities in the assimilated observations. In this presentation, we will explore the possibility of inferring long-term geographically and vertically resolved behavior of the lower stratospheric (LS) ozone from NASA's MERRA-2 (Modern-Era Retrospective Analysis for Research and Applications -2) reanalysis after accounting for the few known discontinuities and gaps in its assimilated input data. This work builds upon previous studies that have documented excellent agreement between MERRA-2 ozone and ozonesonde observations in the LS. Of particular importance is the relatively good vertical resolution of MERRA-2, which allows precise separation of tropospheric and stratospheric ozone contents. We also compare the MERRA-2 LS ozone results with the recently completed 37-year simulation produced using the Goddard Earth Observing System in "replay" mode coupled with the GMI (Global Modeling Initiative) chemistry mechanism. Replay mode dynamically constrains the model with the MERRA-2 reanalysis winds, temperature, and pressure.
We will emphasize the areas of agreement of the reanalysis and replay and interpret differences between them in the context of our increasing understanding of model transport driven by assimilated winds.
Evaluation of Convective Transport in the GEOS-5 Chemistry and Climate Model
NASA Technical Reports Server (NTRS)
Pickering, Kenneth E.; Ott, Lesley E.; Shi, Jainn J.; Tao, Wei-Kuo; Mari, Celine; Schlager, Hans
2011-01-01
The NASA Goddard Earth Observing System (GEOS-5) Chemistry and Climate Model (CCM) consists of a global atmospheric general circulation model and the combined stratospheric and tropospheric chemistry package from the NASA Global Modeling Initiative (GMI) chemical transport model. The subgrid process of convective tracer transport is represented through the Relaxed Arakawa-Schubert parameterization in the GEOS-5 CCM. However, substantial uncertainty for tracer transport is associated with this parameterization, as is the case with all global and regional models. We have designed a project to comprehensively evaluate this parameterization from the point of view of tracer transport, and determine the most appropriate improvements that can be made to the GEOS-5 convection algorithm, allowing improvement in our understanding of the role of convective processes in determining atmospheric composition. We first simulate tracer transport in individual observed convective events with a cloud-resolving model (WRF). Initial condition tracer profiles (CO, CO2, O3) are constructed from aircraft data collected in undisturbed air, and the simulations are evaluated using aircraft data taken in the convective anvils. A single-column (SCM) version of the GEOS-5 GCM with online tracers is then run for the same convective events. SCM output is evaluated based on averaged tracer fields from the cloud-resolving model. Sensitivity simulations with adjusted parameters will be run in the SCM to determine improvements in the representation of convective transport. The focus of the work to date is on tropical continental convective events from the African Monsoon Multidisciplinary Analyses (AMMA) field mission in August 2006 that were extensively sampled by multiple research aircraft.
NASA Astrophysics Data System (ADS)
Sotiropoulou, R. P.; Meshkhidze, N.; Nenes, A.
2006-12-01
The aerosol indirect forcing is one of the largest sources of uncertainty in assessments of anthropogenic climate change [IPCC, 2001]. Much of this uncertainty arises from the approach used for linking cloud droplet number concentration (CDNC) to precursor aerosol. Global Climate Models (GCM) use a wide range of cloud droplet activation mechanisms ranging from empirical [Boucher and Lohmann, 1995] to detailed physically-based formulations [e.g., Abdul-Razzak and Ghan, 2000; Fountoukis and Nenes, 2005]. The objective of this study is to assess the uncertainties in indirect forcing and autoconversion of cloud water to rain caused by the application of different cloud droplet parameterization mechanisms; this is an important step towards constraining the aerosol indirect effects (AIE). Here we estimate the uncertainty in indirect forcing and autoconversion rate using the NASA Global Modeling Initiative (GMI). The GMI allows easy interchange of meteorological fields, chemical mechanisms and the aerosol microphysical packages. Therefore, it is an ideal tool for assessing the effect of different parameters on aerosol indirect forcing. The aerosol module includes primary emissions, chemical production of sulfate in clear air and in-cloud aqueous phase, gravitational sedimentation, dry deposition, wet scavenging in and below clouds, and hygroscopic growth. Model inputs include SO2 (fossil fuel and natural), black carbon (BC), organic carbon (OC), mineral dust and sea salt. The meteorological data used in this work were taken from the NASA Data Assimilation Office (DAO) and two different GCMs: the NASA GEOS4 finite volume GCM (FVGCM) and the Goddard Institute for Space Studies version II' (GISS II') GCM. Simulations were carried out for "present day" and "preindustrial" emissions using different meteorological fields (i.e. 
DAO, FVGCM, GISS II'); cloud droplet number concentration is computed from the correlations of Boucher and Lohmann [1995], Abdul-Razzak and Ghan [2000], Feingold and Heymsfield [1992], Fountoukis and Nenes [2005] and Segal and Khain [2006]. Computed CDNC is used to calculate the cloud optical depth, the autoconversion rate and the mean top-of-the-atmosphere (TOA) short-wave radiative forcing using a modified FAST-J algorithm [Meshkhidze et al., 2006]. Autoconversion of cloud water to precipitation is parameterized following the formulation of Khairoutdinov and Kogan [2000]. References: Abdul-Razzak, H., and S. J. Ghan (2000), J. Geophys. Res., 105, 6837-6844. Boucher, O., and U. Lohmann (1995), Tellus, Ser. B, 47, 281-300. Feingold, G. and A. Heymsfield (1992), J. Atmos. Sci., 49, 2325-2342. Fountoukis, C., and A. Nenes (2005), J. Geophys. Res., 110, D11212, doi:10.1029/2004JD005591. Intergovernmental Panel on Climate Change - IPCC (2001), Climate Change, The Scientific Basis, Cambridge University Press, UK. Khairoutdinov, M. and Y. Kogan (2000), Mon. Weather Rev., 128 (1), 229-243. Meshkhidze, N., A. Nenes, J. Kouatchou, B. Das and J. Rodriguez, 7th International Aerosol Conference, American Association for Aerosol Research (IAC 2006), St. Paul, Minnesota, October 2006. Nenes, A., and J. H. Seinfeld (2003), J. Geophys. Res., 108, 4415, doi:10.1029/2002JD002911. Segal, Y., and A. Khain (2006), J. Geophys. Res., 111, D15204, doi:10.1029/2005JD006561.
Petraitiene, Ruta; Petraitis, Vidmantas; Groll, Andreas H.; Sein, Tin; Schaufele, Robert L.; Francesconi, Andrea; Bacher, John; Avila, Nilo A.; Walsh, Thomas J.
2002-01-01
The antifungal efficacy, pharmacokinetics, and safety of caspofungin (CAS) were investigated in the treatment and prophylaxis of invasive pulmonary aspergillosis due to Aspergillus fumigatus in persistently neutropenic rabbits. Antifungal therapy consisted of 1, 3, or 6 mg of CAS/kg of body weight/day (CAS1, CAS3, and CAS6, respectively) or 1 mg of deoxycholate amphotericin B (AMB)/kg/day intravenously for 12 days starting 24 h after endotracheal inoculation. Prophylaxis (CAS1) was initiated 4 days before endotracheal inoculation. Rabbits treated with CAS had significant improvement in survival and reduction in organism-mediated pulmonary injury (OMPI) measured by pulmonary infarct score and total lung weight (P < 0.01). However, animals treated with CAS demonstrated a paradoxical trend toward increased residual fungal burden (log CFU per gram) and increased serum galactomannan antigen index (GMI) despite improved survival. Rabbits receiving prophylactic CAS1 also showed significant improvement in survival and reduction in OMPI (P < 0.01), but there was no effect on residual fungal burden. In vitro tetrazolium salt hyphal damage assays and histologic studies demonstrated that CAS had concentration- and dose-dependent effects on hyphal structural integrity. In parallel with a decline in GMI, AMB significantly reduced the pulmonary tissue burden of A. fumigatus (P ≤ 0.01). The CAS1, CAS3, and CAS6 dose regimens demonstrated dose-proportional exposure and maintained drug levels in plasma above the MIC for the entire 24-h dosing interval at doses that were ≥3 mg/kg/day. As serial galactomannan antigen levels may be used for therapeutic monitoring, one should be aware that profoundly neutropenic patients receiving echinocandins for aspergillosis might have persistent galactomannan antigenemia despite clinical improvement. 
CAS improved survival, reduced pulmonary injury, and caused dose-dependent hyphal damage but with no reduction in residual fungal burden or galactomannan antigenemia in persistently neutropenic rabbits with invasive pulmonary aspergillosis. PMID:11751105
Global Precipitation Measurement (GPM) launch, commissioning, and early operations
NASA Astrophysics Data System (ADS)
Neeck, Steven P.; Kakar, Ramesh K.; Azarbarzin, Ardeshir A.; Hou, Arthur Y.
2014-10-01
The Global Precipitation Measurement (GPM) mission is an international partnership co-led by NASA and the Japan Aerospace Exploration Agency (JAXA). The mission centers on the GPM Core Observatory and consists of an international network, or constellation, of additional satellites that together will provide next-generation global observations of precipitation from space. The GPM constellation will provide measurements of the intensity and variability of precipitation, three-dimensional structure of cloud and storm systems, the microphysics of ice and liquid particles within clouds, and the amount of water falling to Earth's surface. Observations from the GPM constellation, combined with land surface data, will improve weather forecast models; climate models; integrated hydrologic models of watersheds; and forecasts of hurricanes/typhoons/cyclones, landslides, floods and droughts. The GPM Core Observatory carries an advanced radar/radiometer system and serves as a reference standard to unify precipitation measurements from all satellites that fly within the constellation. The GPM Core Observatory improves upon the capabilities of its predecessor, the NASA-JAXA Tropical Rainfall Measuring Mission (TRMM), with advanced science instruments and expanded coverage of Earth's surface. The GPM Core Observatory carries two instruments, the NASA-supplied GPM Microwave Imager (GMI) and the JAXA-supplied Dual-frequency Precipitation Radar (DPR). The GMI measures the amount, size, intensity and type of precipitation, from heavy-to-moderate rain to light rain and snowfall. The DPR provides three-dimensional profiles and intensities of liquid and solid precipitation. The French Centre National d'Études Spatiales (CNES), the Indian Space Research Organisation (ISRO), the U.S. National Oceanic and Atmospheric Administration (NOAA), the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), and the U.S. 
Department of Defense are partners with NASA and JAXA. The GPM Core Observatory was launched from JAXA's Tanegashima Space Center on an H-IIA launch vehicle on February 28, 2014 Japan Standard Time (JST). The mission has completed its checkout and commissioning phase and is in Operations Phase. The current status and early results will be discussed.
NASA Astrophysics Data System (ADS)
Quiros, David C.; Smith, Jeremy; Thiruvengadam, Arvind; Huai, Tao; Hu, Shaohua
2017-11-01
Heavy-duty on-road vehicles account for 70% of all freight transport and 20% of transportation-sector greenhouse gas (GHG) emissions in the United States. This study measured three prevalent GHG emissions - carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) - from seven heavy-duty vehicles, fueled by diesel and compressed natural gas (CNG), and compliant to the MY 2007 or 2010 U.S. EPA emission standards, while operated over six routes used for freight movement in California. Total combined (tractor, trailer, and payload) weights were 68,000 ± 1000 lbs. for the seven vehicles. Using the Intergovernmental Panel on Climate Change (IPCC) radiative forcing values for a 100-year time horizon, N2O emissions accounted for 2.6-8.3% of total tailpipe CO2 equivalent emissions (CO2-eq) for diesel vehicles equipped with Diesel Oxidation Catalyst, Diesel Particulate Filter, and Selective Catalytic Reduction system (DOC + DPF + SCR), and CH4 emissions accounted for 1.4-5.9% of CO2-eq emissions from the CNG-powered vehicle with a three-way catalyst (TWC). N2O emissions from diesel vehicles equipped with SCR (0.17-0.30 g/mi) were an order of magnitude higher than from diesel vehicles without SCR (0.013-0.023 g/mi) during highway operation. For the vehicles selected in this test program, we measured 11-22% lower CO2-eq emissions from a hybrid compared to conventional diesel vehicles during transport over lower-speed routes of the freight transport system, but 20-27% higher CO2-eq emissions during higher-speed routes. Similarly, a CNG vehicle emitted up to 15% lower CO2-eq compared to conventional diesel vehicles over more neutral-grade highway routes, but emitted up to 12% greater CO2-eq emissions over routes with higher engine loads.
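The CO2-eq weighting described above sums each gas's per-mile emission rate multiplied by its 100-year global warming potential (GWP). A minimal sketch, assuming illustrative IPCC-style GWP values (28 for CH4, 265 for N2O; the study's exact values may differ):

```python
# Illustrative 100-year GWPs (assumed values, not taken from the study).
GWP_100 = {"co2": 1.0, "ch4": 28.0, "n2o": 265.0}

def co2_equivalent_g_per_mi(co2: float, ch4: float, n2o: float) -> float:
    """Weight each tailpipe GHG emission rate (g/mi) by its GWP and sum."""
    return (co2 * GWP_100["co2"]
            + ch4 * GWP_100["ch4"]
            + n2o * GWP_100["n2o"])

# A diesel vehicle with SCR emitting 0.25 g/mi N2O contributes
# 0.25 * 265 = 66.25 g/mi CO2-eq from N2O alone (emission rates here
# are hypothetical examples).
total = co2_equivalent_g_per_mi(co2=1500.0, ch4=0.0, n2o=0.25)
```

This weighting is why a few tenths of a g/mi of N2O can amount to several percent of a vehicle's total CO2-eq emissions.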
NASA Technical Reports Server (NTRS)
Lammers, Matt
2017-01-01
Geospatial weather visualization remains predominantly a two-dimensional endeavor. Even popular advanced tools like the Nullschool Earth display two-dimensional fields on a three-dimensional globe. Yet much of the observational data and model output contains detailed three-dimensional fields. In 2014, NASA and JAXA (the Japan Aerospace Exploration Agency) launched the Global Precipitation Measurement (GPM) satellite. Its two instruments, the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI), observe much of the Earth's atmosphere between 65 degrees North Latitude and 65 degrees South Latitude. As part of the analysis and visualization tools developed by the Precipitation Processing System (PPS) Group at NASA Goddard, a series of CesiumJS [using Cesium Markup Language (CZML), JavaScript (JS) and JavaScript Object Notation (JSON)] -based globe viewers have been developed to improve data acquisition decision making and to enhance scientific investigation of the satellite data. Other demos have also been built to illustrate the capabilities of CesiumJS in presenting atmospheric data, including model forecasts of hurricanes, observed surface radar data, and gridded analyses of global precipitation. This talk will present these websites and the various workflows used to convert binary satellite and model data into a form easily integrated with CesiumJS.
Model Assessment of the Impact on Ozone of Subsonic and Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Ko, Malcolm; Weisenstein, Debra; Danilin, Michael; Scott, Courtney; Shia, Run-Lie
2000-01-01
This is the final report for work performed between June 1999 through May 2000. The work represents continuation of the previous contract which encompasses five areas: (1) continued refinements and applications of the 2-D chemistry-transport model (CTM) to assess the ozone effects from aircraft operation in the stratosphere; (2) studying the mechanisms that determine the evolution of the sulfur species in the aircraft plume and how such mechanisms affect the way aircraft sulfur emissions should be introduced into global models; (3) the development of diagnostics in the AER 3-wave interactive model to assess the importance of the dynamics feedback and zonal asymmetry in model prediction of ozone response to aircraft operation; (4) the development of a chemistry parameterization scheme in support of the global modeling initiative (GMI); and (5) providing assessment results for preparation of national and international reports which include the "Aviation and the Global Atmosphere" prepared by the Intergovernmental Panel on Climate Change, "Assessment of the effects of high-speed aircraft in the stratosphere: 1998" by NASA, and the "Model and Measurements Intercomparison II" by NASA. Part of the work was reported in the final report. We participated in the SAGE III Ozone Loss and Validation Experiment (SOLVE) campaign and we continue with our analyses of the data.
Precipitation Estimation Using Combined Radar/Radiometer Measurements Within the GPM Framework
NASA Technical Reports Server (NTRS)
Hou, Arthur
2012-01-01
The Global Precipitation Measurement (GPM) Mission is an international satellite mission specifically designed to unify and advance precipitation measurements from a constellation of research and operational microwave sensors. The GPM mission centers upon the deployment of a Core Observatory in a 65° non-Sun-synchronous orbit to serve as a physics observatory and a transfer standard for intersatellite calibration of constellation radiometers. The GPM Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a conical-scanning multi-channel (10-183 GHz) GPM Microwave Imager (GMI). The DPR will be the first dual-frequency radar in space to provide not only measurements of 3-D precipitation structures but also quantitative information on microphysical properties of precipitating particles needed for improving precipitation retrievals from microwave sensors. The DPR and GMI measurements will together provide a database that relates vertical hydrometeor profiles to multi-frequency microwave radiances over a variety of environmental conditions across the globe. This combined database will be used as a common transfer standard for improving the accuracy and consistency of precipitation retrievals from all constellation radiometers. For global coverage, GPM relies on existing satellite programs and new mission opportunities from a consortium of partners through bilateral agreements with either NASA or JAXA. Each constellation member may have its unique scientific or operational objectives but contributes microwave observations to GPM for the generation and dissemination of unified global precipitation data products. In addition to the DPR and GMI on the Core Observatory, the baseline GPM constellation consists of the following sensors: (1) Special Sensor Microwave Imager/Sounder (SSMIS) instruments on the U.S. 
Defense Meteorological Satellite Program (DMSP) satellites, (2) the Advanced Microwave Scanning Radiometer-2 (AMSR-2) on the GCOM-W1 satellite of JAXA, (3) the Multi-Frequency Microwave Scanning Radiometer (MADRAS) and the multi-channel microwave humidity sounder (SAPHIR) on the French-Indian Megha-Tropiques satellite, (4) the Microwave Humidity Sounder (MHS) on the National Oceanic and Atmospheric Administration (NOAA)-19, (5) MHS instruments on MetOp satellites launched by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), (6) the Advanced Technology Microwave Sounder (ATMS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), and (7) ATMS instruments on the NOAA-NASA Joint Polar Satellite System (JPSS) satellites. Data from Chinese and Russian microwave radiometers may also become available through international collaboration under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). The current generation of global rainfall products combines observations from a network of uncoordinated satellite missions using a variety of merging techniques. GPM will provide next-generation precipitation products characterized by: (1) more accurate instantaneous precipitation estimates (especially for light rain and cold-season solid precipitation), (2) intercalibrated microwave brightness temperatures from constellation radiometers within a consistent framework, and (3) unified precipitation retrievals from constellation radiometers using a common a priori hydrometeor database constrained by combined radar/radiometer measurements provided by the GPM Core Observatory.
TRMM Microwave Imager (TMI) Updates for Final Data Version Release
NASA Technical Reports Server (NTRS)
Kroodsma, Rachael A; Bilanow, Stephen; Ji, Yimin; McKague, Darren
2017-01-01
The Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) dataset released by the Precipitation Processing System (PPS) will be updated to a final version within the next year. These updates are based on increased knowledge in recent years of radiometer calibration and sensor performance issues. In particular, the Global Precipitation Measurement (GPM) Microwave Imager (GMI) is used as a model for many of the TMI version updates. This paper discusses four aspects of the TMI data product that will be improved: spacecraft attitude, calibration and quality control, along-scan bias corrections, and sensor pointing accuracy. These updates will be incorporated into the final TMI data version, improving the quality of the data product and ensuring accurate geophysical parameters can be derived from TMI.
NASA Technical Reports Server (NTRS)
Choi, S.; Joiner, J.; Choi, Y.; Duncan, B. N.; Bucsela, E.
2014-01-01
We derive free-tropospheric NO2 volume mixing ratios (VMRs) and stratospheric column amounts of NO2 by applying a cloud slicing technique to data from the Ozone Monitoring Instrument (OMI) on the Aura satellite. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud scene pressure is proportional to the NO2 VMR. In this work, we use a sample of nearby OMI pixel data from a single orbit for the linear fit. The OMI data include cloud scene pressures from the rotational-Raman algorithm and above-cloud NO2 vertical column density (VCD) (defined as the NO2 column from the cloud scene pressure to the top-of-the-atmosphere) from a differential optical absorption spectroscopy (DOAS) algorithm. Estimates of stratospheric column NO2 are obtained by extrapolating the linear fits to the tropopause. We compare OMI-derived NO2 VMRs with in situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is generally within the estimated uncertainties when appropriate data screening is applied. We then derive a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in the free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the boundary layer and then horizontally away from the source. Signatures of lightning NO2 are also shown throughout low and middle latitude regions in summer months. A profile analysis of our cloud slicing data indicates signatures of uplifted and transported anthropogenic NO2 in the middle troposphere as well as lightning-generated NO2 in the upper troposphere. Comparison of the climatology with simulations from the Global Modeling Initiative (GMI) for cloudy conditions (cloud optical thicknesses > 10) shows similarities in the spatial patterns of continental pollution outflow. 
However, there are also some differences in the seasonal variation of free-tropospheric NO2 VMRs near highly populated regions and in areas affected by lightning-generated NOx. Stratospheric column NO2 obtained from cloud slicing agrees well with other independently-generated estimates, providing further confidence in the free-tropospheric results.
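The core of the cloud-slicing retrieval described above is a linear fit of above-cloud NO2 column against cloud scene pressure, with the slope proportional to the NO2 volume mixing ratio. A minimal sketch of that fitting step, assuming hydrostatic balance and our own unit conventions (the operational algorithm's pixel screening and error treatment are omitted):

```python
import numpy as np

G = 9.80665       # gravitational acceleration, m s^-2
M_AIR = 4.81e-26  # mean mass of an air molecule, kg

def cloud_slice_vmr(cloud_pressure_hpa, above_cloud_vcd):
    """Estimate the NO2 volume mixing ratio from the slope of above-cloud
    vertical column density (molecules cm^-2) versus cloud scene pressure
    (hPa). Hydrostatic balance gives dN/dp = vmr / (g * m_air) in SI units,
    so vmr = slope * g * m_air after unit conversion."""
    slope, _intercept = np.polyfit(cloud_pressure_hpa, above_cloud_vcd, 1)
    # cm^-2 -> m^-2 is a factor 1e4; hPa -> Pa is a factor 1/100,
    # so the combined conversion multiplies the slope by 100.
    return slope * 100.0 * G * M_AIR
```

Nearby pixels whose clouds sit at a range of pressures supply the scatter needed for the fit; extrapolating the fitted line to the tropopause pressure gives the stratospheric column estimate mentioned above.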
Njenga, S M; Wamae, C N
2001-10-01
An immunochromatographic card test (ICT) that uses fingerprick whole blood instead of serum for diagnosis of bancroftian filariasis has recently been developed. The card test was validated in the field in Kenya by comparing its sensitivity to the combined sensitivity of Knott's concentration and counting chamber methods. A total of 102 (14.6%) and 117 (16.7%) persons was found to be microfilaremic by Knott's concentration and counting chamber methods, respectively. The geometric mean intensities (GMI) were 74.6 microfilariae (mf)/ml and 256.5 mf/ml by Knott's concentration and counting chamber methods, respectively. All infected individuals detected by both Knott's concentration and counting chamber methods were also antigen positive by the ICT filariasis card test (100% sensitivity). Further, of 97 parasitologically amicrofilaremic persons, 24 (24.7%) were antigen positive by the ICT. The overall prevalence of antigenemia was 37.3%. Of 100 nonendemic area control persons, none was found to be filarial antigen positive (100% specificity). The results show that the new version of the ICT filariasis card test is a simple, sensitive, specific, and rapid test that is convenient in field settings.
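The sensitivity and specificity figures quoted above follow directly from the 2x2 counts against the parasitological reference; a small sketch (counts taken from the abstract, function names our own):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of reference-positive persons the card test detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of reference-negative persons the card test clears."""
    return true_neg / (true_neg + false_pos)

# All 117 microfilaremic persons were ICT positive -> sensitivity 117/117;
# all 100 nonendemic controls were ICT negative -> specificity 100/100.
sens = sensitivity(117, 0)  # 1.0
spec = specificity(100, 0)  # 1.0
```

Note that the 24 amicrofilaremic but antigen-positive persons do not enter these figures; against an imperfect parasitological reference they may represent true infections the blood-film methods missed rather than false positives.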
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montalvo, D.A.; Hare, C.T.
1985-03-01
The report describes the laboratory testing of nine in-use light-duty gasoline passenger cars using up to four PCV disablement configurations. The nine vehicles included 1975 to 1983 model years, with odometer readings generally between 20,000 and 60,000 miles. No two vehicles were identical in make and engine type, and engine displacements ranged from 89 to 403 cu in. The vehicles were tested over the 1975 Federal Test Procedure, with sampling for crankcase HC conducted during each individual cycle of the 3-bag FTP and during the 10-minute hot soak. Emissions of crankcase HC are provided in g/mi for the 3-bag FTP, and in g/min for the 10-minute soak.
Ground Validation Assessments of GPM Core Observatory Science Requirements
NASA Astrophysics Data System (ADS)
Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail
2017-04-01
NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV] to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence, assumptions related to physics) and on the quality of ground-validation (GV) data being used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physical considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two-minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) Polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) Multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes.
For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random error requirements associated with retrievals of rain rate and required +/- 0.5 millimeter error bounds for mass-weighted mean drop diameter. Version-04 (V4) GMI GPROF radiometer-based rain rate products exhibit reasonable agreement with GV, but do not completely meet mission science requirements over the continental U.S. for lighter rain rates (e.g., 1 mm/hr) due to excessive random error (approximately 75%). Importantly, substantial corrections were made to the V4 GPROF algorithm and preliminary analysis of Version 5 (V5) rain products indicates more robust performance relative to GV. For the frozen phase and a modest GPM requirement to "demonstrate detection of snowfall", DPR products do successfully identify snowfall within the sensitivity and beam sampling limits of the DPR instrument (approximately 12 dBZ lower limit; lowest clutter-free bins). Similarly, the GPROF algorithm successfully "detects" falling snow and delineates it from liquid precipitation. However, the GV approach to computing falling-snow "detection" statistics is intrinsically tied to GPROF Bayesian algorithm-based thresholds of precipitation "detection" and model analysis temperature, and is not sufficiently tied to SWER. Hence we will also discuss ongoing work to establish the lower threshold SWER for "detection" using combined GV radar, gauge and disdrometer-based case studies.
Recent Progress on the Second Generation CMORPH: A Prototype Operational Processing System
NASA Astrophysics Data System (ADS)
Xie, Pingping; Joyce, Robert; Wu, Shaorong
2016-04-01
As reported at the EGU General Assembly of 2015, a conceptual test system was developed for the second generation CMORPH to produce global analyses of 30-min precipitation on a 0.05deg lat/lon grid over the entire globe from pole to pole through integration of information from satellite observations as well as numerical model simulations. The second generation CMORPH is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include both rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) as well as LEO platforms, and precipitation simulations from numerical global models. Sub-systems were developed and refined to derive precipitation estimates from the GEO and LEO IR observations and to compute precipitating cloud motion vectors. The results were reported at the EGU of 2014 and the AGU 2015 Fall Meetings. In this presentation, we report our recent work on the construction of a prototype operational processing system for the second generation CMORPH. The second generation CMORPH prototype operational processing system takes in the passive microwave (PMW) retrievals of instantaneous precipitation rates from all available sensors, the full-resolution GEO and LEO IR data, as well as the hourly precipitation fields generated by the NOAA/NCEP Climate Forecast System Reanalysis (CFSR). First, a combined field of PMW based precipitation retrievals (MWCOMB) is created on a 0.05deg lat/lon grid over the entire globe through inter-calibrating retrievals from various sensors against a common reference. For this experiment, the reference field is the GMI based retrievals with climatological adjustment against the TMI retrievals using data over the overlapping period.
Precipitation estimation is then derived from the GEO and LEO IR data through calibration against the global MWCOMB and the CloudSat CPR based estimates. Meanwhile, precipitating cloud motion vectors are derived through the combination of vectors computed from the GEO IR based precipitation estimates and the CFSR precipitation with a 2DVAR technique. A prototype system is applied to generate integrated global precipitation estimates over the entire globe for a three-month period from June 1 to August 31 of 2015. Preliminary tests are conducted to optimize the performance of the system. Specific efforts are made to improve the computational efficiency of the system. The second generation CMORPH test products are compared to the first generation CMORPH and ground observations. Detailed results will be reported at the EGU.
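The abstract does not spell out the inter-calibration scheme used to build MWCOMB; one common approach for adjusting a sensor's retrievals against a common reference is rank-based quantile (histogram) matching, sketched below as an illustration rather than the actual CMORPH procedure:

```python
import numpy as np

def quantile_match(source, reference):
    """Map each source-sensor rain rate onto the value at the same
    quantile of the reference distribution (histogram matching)."""
    source = np.asarray(source, dtype=float)
    # mid-ranks in (0, 1) for each source sample
    ranks = (np.argsort(np.argsort(source)) + 0.5) / source.size
    return np.quantile(np.asarray(reference, dtype=float), ranks)
```

Applied sensor by sensor against the common reference, a mapping like this removes systematic distributional biases before the individual estimates are merged.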
NASA Astrophysics Data System (ADS)
Zhang, A.; Chen, S.; Fan, S.; Min, C.
2017-12-01
Precipitation is one of the basic elements of regional and global climate change. Not only does precipitation have a great impact on the earth's hydrosphere, but it also plays a crucial role in the global energy balance. S-band ground-based dual-polarization radar has excellent performance in identifying the different phase states of precipitation, which can dramatically improve the accuracy of hail identification and quantitative precipitation estimation (QPE). However, ground-based radars cannot measure precipitation over mountains, sparsely populated plateaus, deserts, and oceans because of gaps in ground-based radar coverage. The United States National Aeronautics and Space Administration (NASA) and Japan Aerospace Exploration Agency (JAXA) launched the Global Precipitation Measurement (GPM) mission almost three years ago. GPM is equipped with a GPM Microwave Imager (GMI) and a Dual-frequency (Ku- and Ka-band) Precipitation Radar (DPR) that covers the globe between 65°S and 65°N. The main parameters and the detection method of DPR are different from those of ground-based radars; thus, the DPR's reliability and capability need to be investigated and evaluated against the ground-based radar. This study compares precipitation derived from the ground-based radar measurement to that derived from the DPR's observations. The ground-based radar is an S-band dual-polarization radar deployed near an airport in the west of Zhuhai city. The ground-based quantitative precipitation estimates have a high resolution of 1 km × 1 km × 6 min. This radar covers the whole Pearl River Delta of China, including Hong Kong and Macao. In order to quantify the DPR precipitation quantification capabilities relative to the S-band radar, statistical metrics used in this study are as follows: the difference (Dif) between DPR and the S-band radar observation, root-mean-squared error (RMSE) and correlation coefficient (CC).
Additionally, Probability of Detection (POD) and False Alarm Ratio (FAR) are used to further evaluate the rainfall detection capability of the DPR. The comparisons performed between the DPR and the S-band radar are expected to provide a useful reference not only for algorithm developers but also for end users in hydrology, ecology, weather forecast services, and so on.
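The evaluation statistics named above have standard definitions; a compact NumPy sketch (the threshold and input arrays are illustrative):

```python
import numpy as np

def contingency_scores(sat, ref, thresh=0.1):
    """POD and FAR for rain/no-rain detection at a rate threshold (mm/h)."""
    hits = np.sum((sat >= thresh) & (ref >= thresh))
    misses = np.sum((sat < thresh) & (ref >= thresh))
    false_alarms = np.sum((sat >= thresh) & (ref < thresh))
    pod = hits / (hits + misses)                # Probability of Detection
    far = false_alarms / (hits + false_alarms)  # False Alarm Ratio
    return pod, far

def error_stats(sat, ref):
    """Mean difference (Dif), RMSE, and correlation coefficient (CC)."""
    dif = np.mean(sat - ref)
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    cc = np.corrcoef(sat, ref)[0, 1]
    return dif, rmse, cc
```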
The Global Precipitation Measurement (GPM) Mission: Overview and U.S. Status
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Azarbarzin, Ardeshir A.; Kakar, Ramesh K.; Neeck, Steven
2011-01-01
The Global Precipitation Measurement (GPM) Mission is an international satellite mission specifically designed to unify and advance precipitation measurements from a constellation of research and operational microwave sensors. Building upon the success of the U.S.-Japan Tropical Rainfall Measuring Mission (TRMM), the National Aeronautics and Space Administration (NASA) of the United States and the Japan Aerospace Exploration Agency (JAXA) will deploy in 2013 a GPM "Core" satellite carrying a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a conical-scanning multi-channel (10-183 GHz) GPM Microwave Imager (GMI) to establish a new reference standard for precipitation measurements from space. The combined active/passive sensor measurements will also be used to provide a common database for precipitation retrievals from constellation sensors. For global coverage, GPM relies on existing satellite programs and new mission opportunities from a consortium of partners through bilateral agreements with either NASA or JAXA. Each constellation member may have its unique scientific or operational objectives but contributes microwave observations to GPM for the generation and dissemination of unified global precipitation data products. In addition to the DPR and GMI on the Core Observatory, the baseline GPM constellation consists of the following sensors: (1) Special Sensor Microwave Imager/Sounder (SSMIS) instruments on the U.S.
Defense Meteorological Satellite Program (DMSP) satellites, (2) the Advanced Microwave Scanning Radiometer-2 (AMSR-2) on the GCOM-W1 satellite of JAXA, (3) the Multi-Frequency Microwave Scanning Radiometer (MADRAS) and the multi-channel microwave humidity sounder (SAPHIR) on the French-Indian Megha-Tropiques satellite, (4) the Microwave Humidity Sounder (MHS) on the National Oceanic and Atmospheric Administration (NOAA)-19, (5) MHS instruments on MetOp satellites launched by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), (6) the Advanced Technology Microwave Sounder (ATMS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), (7) ATMS instruments on the NOAA-NASA Joint Polar Satellite System (JPSS) satellites, and (8) a microwave imager under planning for the Defense Weather Satellite System (DWSS).
NASA Astrophysics Data System (ADS)
Gong, J.; Zeng, X.; Wu, D. L.; Li, X.
2017-12-01
Diurnal variation of tropical ice cloud has been well observed and examined in terms of the area of coverage, occurring frequency, and total mass, but rarely in terms of ice microphysical parameters (habit, size, orientation, etc.) because of the lack of direct measurements of ice microphysics at high temporal and spatial resolution. This accounts for a great portion of the uncertainty in evaluating ice clouds' role in global radiation and hydrological budgets. The design of the Global Precipitation Measurement (GPM) mission's precessing orbit gives us an unprecedented opportunity to study the diurnal variation of ice microphysics on the global scale for the first time. Dominated by cloud ice scattering, high-frequency microwave polarimetric difference (PD, namely the brightness temperature difference between vertically- and horizontally-polarized paired channel measurements) from the GPM Microwave Imager (GMI) has been proven by our previous study to be very valuable for inferring cloud ice microphysical properties. Using one year of PD measurements at 166 GHz, we found that cloud PD exhibits a strong diurnal cycle in the tropics (25°S-25°N). The peak PD amplitude varies as much as 35% over land, compared to only 6% over ocean. The diurnal cycle of the peak PD value is strongly anti-correlated with local ice cloud occurring frequency and the total ice mass with a leading period of 3 hours for the maximum correlation. The observed PD diurnal cycle can be explained by the change of ice crystal axial ratio. Using a radiative transfer model, we can simulate the observed 166 GHz PD-brightness temperature curve as well as its diurnal variation using different axial ratio values, which can be caused by the diurnal variation of ice microphysical properties including particle size, percentage of horizontally-aligned non-spherical particles, and ice habit.
The fact that changes in PD lead changes in ice cloud mass and occurrence frequency implies the important role microphysics plays in the formation and dissipation of ice clouds and frozen precipitation.
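One simple way to quantify the amplitude and peak hour of such a diurnal cycle is a least-squares fit of a 24-hour harmonic to the PD time series. The sketch below is illustrative and is not the authors' analysis code:

```python
import numpy as np

def diurnal_harmonic(local_hour, pd_series):
    """Fit pd(t) = a0 + a1*cos(w t) + b1*sin(w t) with w = 2*pi/24 h^-1.
    Returns the mean, the diurnal amplitude, and the local hour of the peak."""
    w = 2.0 * np.pi / 24.0
    A = np.column_stack([np.ones_like(local_hour),
                         np.cos(w * local_hour),
                         np.sin(w * local_hour)])
    a0, a1, b1 = np.linalg.lstsq(A, pd_series, rcond=None)[0]
    amplitude = np.hypot(a1, b1)
    peak_hour = (np.arctan2(b1, a1) / w) % 24.0
    return a0, amplitude, peak_hour
```

Comparing the fitted peak hour of PD against that of ice cloud occurrence gives the lead time discussed above.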
Construction and analysis of gene-gene dynamics influence networks based on a Boolean model.
Mazaya, Maulida; Trinh, Hung-Cuong; Kwon, Yung-Keun
2017-12-21
Identification of novel gene-gene relations is a crucial issue to understand system-level biological phenomena. To this end, many methods based on a correlation analysis of gene expressions or structural analysis of molecular interaction networks have been proposed. However, they are limited in identifying more complicated gene-gene dynamical relations. To overcome this limitation, we proposed a measure to quantify a gene-gene dynamical influence (GDI) using a Boolean network model and constructed a GDI network to indicate existence of a dynamical influence for every ordered pair of genes. It represents how much a state trajectory of a target gene is changed by a knockout mutation subject to a source gene in a gene-gene molecular interaction (GMI) network. Through a topological comparison between GDI and GMI networks, we observed that the former network is denser than the latter network, which implies that many gene pairs are dynamically influencing but molecularly non-interacting. In addition, a larger number of hub genes were generated in the GDI network. On the other hand, the networks were correlated: the degree values of a node in the two networks were positively correlated. We further investigated the relationships of the GDI value with structural properties and found that there are negative and positive correlations with the length of a shortest path and the number of paths, respectively. In addition, a GDI network could predict a set of genes whose steady-state expression is affected in E. coli gene-knockout experiments. More interestingly, we found that drug targets with side effects have a larger number of outgoing links than the other genes in the GDI network, which implies that they are more likely to influence the dynamics of other genes.
Finally, we found biological evidences showing that the gene pairs which are not molecularly interacting but dynamically influential can be considered for novel gene-gene relationships. Taken together, construction and analysis of the GDI network can be a useful approach to identify novel gene-gene relationships in terms of the dynamical influence.
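The GDI measure can be illustrated on a toy Boolean network: clamp a source gene to 0 (knockout), re-simulate, and count how often the target gene's trajectory differs from the wild type. A minimal sketch; the gene names, update rules, and the exact trajectory-difference score are invented for illustration and are not the paper's formulation:

```python
import random

def simulate(funcs, state, steps, knockout=None):
    """Synchronously update a Boolean network; a knocked-out gene stays 0."""
    s = dict(state)
    if knockout is not None:
        s[knockout] = 0
    traj = []
    for _ in range(steps):
        s = {g: (0 if g == knockout else f(s)) for g, f in funcs.items()}
        traj.append(dict(s))
    return traj

def gdi(funcs, source, target, n_init=50, steps=20, seed=0):
    """Fraction of time points where the target's trajectory changes after
    knocking out the source, averaged over random initial states."""
    rng = random.Random(seed)
    genes = list(funcs)
    diffs = 0
    for _ in range(n_init):
        init = {g: rng.randint(0, 1) for g in genes}
        wt = simulate(funcs, init, steps)
        ko = simulate(funcs, init, steps, knockout=source)
        diffs += sum(w[target] != k[target] for w, k in zip(wt, ko))
    return diffs / (n_init * steps)

# Toy network: B copies A; C oscillates independently of A and B.
funcs = {'A': lambda s: s['A'],
         'B': lambda s: s['A'],
         'C': lambda s: 1 - s['C']}
```

In this toy network, knocking out A changes B's trajectory (positive GDI) while knocking out C does not, even though a purely structural view of C's connectivity would not reveal that.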
Srinivasa, Rajiv N; Chick, Jeffrey Forris Beecham; Hage, Anthony N; Gemmete, Joseph J; Murrey, Douglas C; Srinivasa, Ravi N
2018-04-01
To report approach, technical success, safety, and short-term outcomes of thoracic duct stent-graft reconstruction for the treatment of chylothorax. Two patients, 1 (50%) male and 1 (50%) female, with mean age of 38 years (range: 16-59 years) underwent endolymphatic thoracic duct stent-graft reconstruction between September 2016 and July 2017. Patients had radiographic left-sided chylothoraces (n = 2) from idiopathic causes (n = 1) and heart transplantation (n = 1). In both (100%) patients, antegrade lymphatic access was used to opacify the thoracic duct after which retrograde access was used for thoracic duct stent-graft placement. Pelvic lymphangiography technical success, antegrade cisterna chyli cannulation technical success, thoracic duct opacification technical success, retrograde thoracic duct access technical success, thoracic duct stent-graft reconstruction technical success, ethiodized oil volume, contrast volume, estimated blood loss, procedure time, fluoroscopy time, radiation dose, clinical success, complications, deaths, and follow-up were recorded. Pelvic lymphangiography, antegrade cisterna chyli cannulation, thoracic duct opacification, retrograde thoracic duct access, and thoracic duct stent-graft reconstruction were technically successful in both (100%) patients. Mean ethiodized oil volume was 8 mL (range: 5-10 mL). Mean contrast volume was 13 mL (range: 5-20 mL). Mean estimated blood loss was 13 mL (range: 10-15 mL). Mean fluoroscopy time was 50.4 min (range: 31.2-69.7 min). Mean dose area product and reference air kerma were 954.4 μGy·m² (range: 701-1,208 μGy·m²) and 83.5 mGy (range: 59-108 mGy), respectively. Chylothorax resolved in both (100%) patients. There were no minor or major complications directly related to the procedure. Thoracic duct stent-graft reconstruction may be a technically successful and safe alternative to thoracic duct embolization, disruption, and surgical ligation for the treatment of chylothorax.
Additional studies are warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tatur, M.; Tyrer, H.; Tomazic, D.
2005-01-01
Due to its high efficiency and superior durability the diesel engine is again becoming a prime candidate for future light-duty vehicle applications within the United States. While in Europe the overall diesel share exceeds 40%, the current diesel share in the U.S. is 1%. Despite the current situation and the very stringent Tier 2 emission standards, efforts are being made to introduce the diesel engine back into the U.S. market. In order to succeed, these vehicles have to comply with emissions standards over a 120,000 miles distance while maintaining their excellent fuel economy. The availability of technologies such as high-pressure, common-rail fuel systems, low-sulfur diesel fuel, NO{sub x} adsorber catalysts (NAC), and diesel particle filters (DPFs) allows the development of powertrain systems that have the potential to comply with the light-duty Tier 2 emission requirements. In support of this, the U.S. Department of Energy (DOE) has engaged in several test projects under the Advanced Petroleum Based Fuels - Diesel Emission Controls (APBF-DEC) activity. The primary technology being addressed by these projects are the sulfur tolerance and durability of the NAC/DPF system. The project investigated the performance of the emission control system and system desulphurization effects on regulated and unregulated emissions. Emissions measurements were conducted over the Federal Test Procedure (FTP), Supplemental Federal Test Procedure (SFTP), and the Highway Fuel Economy Test (HFET). Testing was conducted after the accumulation of 150 hours of engine operation calculated to be the equivalent of approximately 8,200 miles. For these evaluations three out of six of the FTP test cycles were within the 50,000-mile Tier 2 bin 5 emission standards (0.05 g/mi NO{sub x} and 0.01 g/mi PM). Emissions over the SC03 portion of the SFTP were within the 4,000-mile SFTP standards. The emission of NO{sub x}+NMHC exceeded the 4,000-mile standard over the US06 portion of the SFTP.
Testing was also conducted after the accumulation of 1,000 hours of engine operation calculated to be the equivalent of approximately 50,000 miles. Recalibrated driveability maps resulted in more repeatable NO{sub x} emissions from cycle to cycle. The NO{sub x} level was below the Tier 2 emission limits for 50,000 and 120,000 miles. NMHC emissions were found at a level outside the limit for 120,000 miles.
Takach, Edward; O'Shea, Thomas; Liu, Hanlan
2014-08-01
Quantifying amino acids in biological matrices is typically performed using liquid chromatography (LC) coupled with fluorescent detection (FLD), requiring both derivatization and complete baseline separation of all amino acids. Due to its high specificity and sensitivity, the use of UPLC-MS/MS eliminates the derivatization step and allows for overlapping amino acid retention times thereby shortening the analysis time. Furthermore, combining UPLC-MS/MS with stable isotope labeling (e.g., isobaric tag for relative and absolute quantitation, i.e., iTRAQ) of amino acids enables quantitation while maintaining sensitivity, selectivity and speed of analysis. In this study, we report combining UPLC-MS/MS analysis with iTRAQ labeling of amino acids resulting in the elution and quantitation of 44 amino acids within 5 min demonstrating the speed and convenience of this assay over established approaches. This chromatographic analysis time represented a 5-fold improvement over the conventional HPLC-MS/MS method developed in our laboratory. In addition, the UPLC-MS/MS method demonstrated improvements in both specificity and sensitivity without loss of precision. In comparing UPLC-MS/MS and HPLC-MS/MS results of 32 detected amino acids, only 2 amino acids exhibited imprecision (RSD) >15% using UPLC-MS/MS, while 9 amino acids exhibited RSD >15% using HPLC-MS/MS. Evaluating intra- and inter-assay precision over 3 days, the quantitation range for 32 detected amino acids in rat plasma was 0.90-497 μM, with overall mean intra-day precision of less than 15% and mean inter-day precision of 12%. 
This UPLC-MS/MS assay was successfully implemented for the quantitative analysis of amino acids in rat and mouse plasma, along with mouse urine and tissue samples, resulting in the following concentration ranges: 0.98-431 μM in mouse plasma for 32 detected amino acids; 0.62-443 μM in rat plasma for 32 detected amino acids; 0.44-8,590 μM in mouse liver for 33 detected amino acids; 0.61-1,241 μM in mouse kidney for 37 detected amino acids; and 1.39-1,681 μM in rat urine for 34 detected amino acids. The utility of the assay was further demonstrated by measuring and comparing plasma amino acid levels between pre-diabetic Zucker diabetic fatty rats (ZDF/Gmi fa/fa) and their lean littermates (ZDF/Gmi fa/?). Significant differences (P<0.001) in 9 amino acid concentrations were observed, with the majority ranging from a 2- to 5-fold increase in pre-diabetic ZDF rats on comparison with ZDF lean rats, consistent with previous literature reports. Copyright © 2014 Elsevier B.V. All rights reserved.
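The intra- and inter-day precision figures quoted above are relative standard deviations; for reference, the computation is simply (Python sketch, illustrative):

```python
import numpy as np

def rsd_percent(replicates):
    """Relative standard deviation (%RSD) of replicate measurements,
    the intra-/inter-day precision metric quoted above."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()
```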
NASA Astrophysics Data System (ADS)
Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung
2018-03-01
When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using the deep learning method has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate clinical applications by comparing existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model with CDAE, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structure similarity index measure of CDAE were at least 10% higher compared to conventional denoising algorithms. In conclusion, the image denoising algorithm developed using CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm developed using CDAE will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
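Of the two image-quality metrics used above, PSNR is simple enough to state inline (NumPy sketch; SSIM requires a windowed computation, e.g. as provided by scikit-image, and is omitted here):

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    denoised/noisy image, both scaled to [0, max_val]."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0.0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```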
Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces
2011-02-28
Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30... focused on developing high-level general-purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical
Analysis of Ozone in Cloudy Versus Clear Sky Conditions
NASA Technical Reports Server (NTRS)
Strode, Sarah; Douglass, Anne; Ziemke, Jerald
2016-01-01
Convection impacts ozone concentrations by transporting ozone vertically and by lofting ozone precursors from the surface, while the clouds and lightning associated with convection affect ozone chemistry. Observations of the above-cloud ozone column (Ziemke et al., 2009) derived from the OMI instrument show geographic variability, and comparison of the above-cloud ozone with all-sky tropospheric ozone columns from OMI indicates important regional differences. We use two global models of atmospheric chemistry, the GMI chemical transport model (CTM) and the GEOS-5 chemistry climate model, to diagnose the contributions of transport and chemistry to observed differences in ozone between areas with and without deep convection, as well as differences in clean versus polluted convective regions. We also investigate how the above-cloud tropospheric ozone from OMI can provide constraints on the relationship between ozone and convection in a free-running climate simulation as well as a CTM.
Costs of using motivational interviewing for problem drinking in the U.S. Air Force.
Cowell, Alexander J; Brown, Janice M; Wedehase, Brendan J; Masuda, Yuta J
2010-12-01
Despite the popularity of motivational interviewing (MI) for addressing heavy drinking, limited evidence exists on its costs. This study examines the costs of using MI to address heavy drinking at four U.S. Air Force (USAF) bases. Clients were referred to and assessed at a base program to address their drinking as a result of an incident; those who were not alcohol dependent were invited to participate in the study. Participants consented and were randomly assigned to one of three intervention arms: individual MI (IMI), group MI (GMI), and Substance Abuse Awareness Seminar (SAAS). Three cost perspectives were taken: USAF, client, and the two combined. Data were collected from bases and public sources. The start-up cost per base ranged from $1340 to $2400 per provider staff member. Average implementation costs across bases were highest for the SAAS intervention ($148 per client).
Reconciling CloudSat and GPM Estimates of Falling Snow
NASA Technical Reports Server (NTRS)
Munchak, S. Joseph; Jackson, Gail Skofronick; Kulie, Mark; Wood, Norm; Miliani, Lisa
2017-01-01
Satellite-based estimates of falling snow have been provided by CloudSat (launched in 2006) and the Global Precipitation Measurement (GPM) core satellite (launched in 2014). The CloudSat estimates are derived from W-band radar measurements whereas the GPM estimates are derived from its scanning Ku- and Ka-band Dual-Frequency Precipitation Radar (DPR) and 13-channel microwave imager (GMI). Each platform has advantages and disadvantages: CloudSat has higher resolution (approximately 1.5 km) and much better sensitivity (-28 dBZ), but poorer sampling (nadir-only and daytime-only since 2011) and the reflectivity-snowfall (Z-S) relationship is poorly constrained with single-frequency measurements. Meanwhile, DPR suffers from relatively poor resolution (5 km) and sensitivity (approximately 13 dBZ), but has cross-track scanning capability to cover a 245-km swath. Additionally, where Ku and Ka measurements are available, the conversion of reflectivity to snowfall rate is better-constrained than with a single frequency.
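The sensitivity gap matters because snowfall rate is inferred from reflectivity through a power-law Z-S relation, Z = a·S^b, whose coefficients vary by snow regime, which is why the relation is poorly constrained with a single frequency. The sketch below inverts an assumed coefficient pair; the values of a and b are illustrative, not CloudSat's or DPR's operational coefficients:

```python
def zs_snow_rate(dbz, a=56.0, b=1.2):
    """Invert Z = a * S**b for snowfall rate S (mm/h liquid equivalent),
    with Z in mm^6 m^-3 and illustrative coefficients a, b."""
    z = 10.0 ** (dbz / 10.0)  # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)
```

Evaluating such a relation at CloudSat's -28 dBZ floor versus DPR's roughly 13 dBZ floor makes concrete how much light snowfall the less sensitive radar cannot see.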
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering, are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
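A linear stationary iteration of the kind studied here is exemplified by the Jacobi method, in which every component update depends only on the previous iterate and can therefore be computed concurrently across processors. A serial NumPy sketch of the iteration (illustrative, not the report's code):

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=500):
    """Jacobi iteration for A x = b: x_new = D^-1 (b - R x), where D is the
    diagonal of A and R = A - D. Each component of x_new is independent,
    which is what makes the method attractive on parallel/array machines."""
    D = np.diag(A)            # diagonal entries
    R = A - np.diag(D)        # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

Convergence is guaranteed for the diagonally dominant systems that arise from the discretizations discussed above.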
2014-09-01
to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system that...research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging ...i) developed time-of- flight extraction algorithms to perform USCT, (ii) developing image reconstruction algorithms for USCT, (iii) developed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
APBF-DEC NOx Adsorber/DPF Project: SUV / Pick-up Truck Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb, C; Weber, P; Thornton,M
2003-08-24
The objective of this project is to determine the influence of diesel fuel composition on the ability of NOx adsorber catalyst (NAC) technology, in conjunction with diesel particulate filters (DPFs), to achieve stringent emissions levels with a minimal fuel economy impact. The test bed for this project was intended to be a light-duty sport utility vehicle (SUV) with a goal of achieving light-duty Tier 2 Bin 5 tailpipe emission levels (0.07 g/mi NOx and 0.01 g/mi PM). However, with the current US market share of light-duty diesel applications being so low, no US 2002 model year (MY) light-duty truck (LDT) or SUV platforms equipped with a diesel engine and having a gross vehicle weight rating (GVWR) less than 8500 lb exist. While the current level of diesel engine use is relatively small in the light-duty class, there is considerable potential for the diesel engine to gain a much larger market share in the future as manufacturers of heavy light-duty trucks (HLDTs) attempt to offset the negative impact on corporate average fuel economy (CAFE) caused by the recent rise in market share of SUVs and LDTs. The US EPA Tier 2 emission standards also contain regulation to prevent the migration of heavy light-duty trucks and SUVs to the medium-duty class. This preventive measure requires that all medium-duty trucks, SUVs, and vans in the 8,500 to 10,000 lb GVWR range that are used as passenger vehicles meet light-duty Tier 2 standards. In meeting the Tier 2 emission standards, the HLDTs and medium-duty passenger vehicles (MDPVs) will face the greatest technological challenges. Because the MDPV is the closest weight class and application relative to the potential upcoming HLDTs and SUVs, a weight-class compromise was made in this program to allow examination of a diesel engine with a NAC-DPF system on a 2002 production vehicle.
The test bed for this project is a 2500-series Chevrolet Silverado equipped with a 6.6L Duramax diesel engine certified to 2002 MY Federal heavy-duty and 2002 MY California medium-duty emission standards. The stock vehicle included cooled air charge (CAC), a turbocharger (TC), direct fuel injection (DFI), an oxidation catalyst (OC), and exhaust gas recirculation (EGR).
Lima, Sergio M. Q.; Berbel-Filho, Waldir M.; Araújo, Thais F. P.; Lazzarotto, Henrique; Tatarenkov, Andrey; Avise, John C.
2017-01-01
Paleo-drainage connections and headwater stream-captures are two main historical processes shaping the distribution of strictly freshwater fishes. Recently, bathymetric-based methods of paleo-drainage reconstruction have opened new possibilities to investigate how these processes have shaped the genetic structure of freshwater organisms. In this context, the present study used paleo-drainage reconstructions and single-locus cluster delimitation analyses to examine genetic structure on the whole distribution of Pareiorhaphis garbei, a ‘near threatened’ armored catfish from the Fluminense freshwater ecoregion in Southeastern Brazil. Sequences of two mitochondrial genes (cytochrome b and cytochrome c oxidase subunit 1) were obtained from five sampling sites in four coastal drainages: Macaé (KAE), São João (SJO), Guapi-Macacu [sub-basins Guapiaçu (GAC) and Guapimirim (GMI)], and Santo Aleixo (SAL). Pronounced genetic structure was found, involving 10 haplotypes for cytB and 6 for coi, with no haplotypes shared between localities. Coalescent-based delineation methods as well as distance-based methods revealed genetic clusters corresponding to each sample site. Paleo-drainage reconstructions showed two putative paleo-rivers: an eastern one connecting KAE and SJO; and a western one merging in the Guanabara Bay (GAC, GMI, and SAL). A disagreement was uncovered between the inferred past riverine connections and current population genetic structure. Although KAE and SJO belong to the same paleo-river, the latter is more closely related to specimens from the Guanabara paleo-river. This discordance between paleo-drainage connections and phylogenetic structure may indicate an ancient stream-capture event in headwaters of this region. Furthermore, all analyses showed high divergence between KAE and the other lineages, suggesting at least one cryptic species in the latter, and that the nominal species should be restricted to the Macaé river basin, its type locality. 
In this drainage, impacts such as invasive species and habitat loss can be especially threatening for species with such a narrow range. Our results also suggest that freshwater fishes from headwaters in the Serra do Mar mountains might have different biogeographical patterns than those from the lowlands, indicating a complex and dynamic climatic and geomorphological history. PMID:29259623
NASA Astrophysics Data System (ADS)
Liang, Q.; Douglass, A. R.; Duncan, B. N.; Stolarski, R. S.; Witte, J. C.
2007-12-01
In this study, we use CFC-12 and hydrochloric acid (HCl) to quantify the annual cycle of stratosphere-to-troposphere transport of O3 to the Arctic troposphere. To do so, we analyze results from a 5-year stratosphere and troposphere simulation from the Global Modeling Initiative (GMI) Chemical Transport Model (CTM) for 1994-1998 and a 10-year simulation using the GEOS Chemistry Climate Model (GEOS CCM) for 1995-2004. The latter includes a tagged CFC-12 tracer to track the transport of aged stratospheric air into the troposphere. We compare the simulated CFC-12 with 10 years of surface CFC-12 measurements at two NOAA-GMD sites, Alert and Barrow. We compare O3 with 10 years of ozonesondes at Alert, Eureka, and Resolute. CFC-12, HCl, and O3 are all compared with satellite observations from the Advanced Composition Explorer (ACE) and several MkIV balloon measurements in the Arctic. The GEOS CCM and GMI CTM simulations capture well the observed magnitude and annual cycle of CFC-12, HCl, and O3 in the stratosphere and troposphere. Since CFC-12 is emitted at the surface and destroyed in the stratosphere while HCl and O3 are produced in the stratosphere, stratospheric air shows a strong correlation between HCl and O3 and an anti-correlation between CFC-12 and O3. We use the tagged CFC-12 tracer to track transport from the stratosphere to the troposphere and the subsequent transport into the lower troposphere in the Arctic. HCl is paired with O3 to quantify the stratospheric contribution to O3 in the troposphere by applying a scaling factor to the simulated HCl using the HCl-O3 regression ratio. O3 and its annual cycle in the upper troposphere are dominated by stratospheric influence, which peaks in spring. The stratospheric contribution decreases with decreasing altitude, accompanied by a delay in the phase of the maximum. In the middle troposphere (2-6 km), the stratospheric contribution peaks during the summer and is comparable to that of net photochemistry.
Due to inefficient transport into the lower Arctic surface, the stratospheric contribution of O3 at the surface accounts for only a few (<5) ppbv.
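The HCl-based scaling described in this abstract can be sketched generically: fit the O3-vs-HCl slope where the two are tightly correlated (stratospheric air), then apply it to simulated HCl elsewhere. The numbers below are synthetic, not GMI CTM output:

```python
import numpy as np

def stratospheric_o3(hcl_trop, hcl_strat, o3_strat):
    """Fit the O3-vs-HCl regression slope in stratospheric air
    (where the two species are strongly correlated), then apply
    that slope to tropospheric HCl to estimate the stratospheric
    contribution to tropospheric O3."""
    slope = np.polyfit(hcl_strat, o3_strat, 1)[0]  # ppbv O3 per ppbv HCl
    return slope * np.asarray(hcl_trop)

# Synthetic, perfectly correlated stratospheric samples (illustrative):
hcl_strat = np.array([1.0, 2.0, 3.0, 4.0])
o3_strat = 400.0 * hcl_strat
est = stratospheric_o3([0.05, 0.10], hcl_strat, o3_strat)
```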
Magnetic sensor technology based on giant magneto-impedance effect in amorphous wires
NASA Astrophysics Data System (ADS)
Wang, X.; Teng, Y.; Wang, C.; Li, Q.
2012-12-01
This project focuses on the giant magneto-impedance (GMI) effect found in soft magnetic amorphous wires: when an AC current flows through the amorphous wire, the voltage induced in the wire changes sensitively with small changes in an external magnetic field applied along the wire axis. A GMI magnetic sensor can compensate for the shortcomings of traditional magnetic sensors and detect weak magnetic fields, while offering high stability, high sensitivity, high resolution, fast response, and low power consumption; these characteristics have made it the focus of extensive research worldwide and a candidate for the next generation of geophysical observation instruments. The emphasis of the project is research on a high-sensitivity amorphous-wire detector and low-noise circuit design. In this paper, the theory of the amorphous-wire giant magneto-impedance (AWGMI) effect and its influencing factors are analyzed in detail, and the principle of a sensor based on AWGMI is explained. On this basis, an experimental micro-magnetic sensor system is designed, composed of signal detection, data processing and collection, data display and transmission circuits, and corresponding software. The properties of this micro-magnetic sensor, such as linearity, sensitivity, frequency response, noise, stability, and temperature behavior, are studied experimentally, with particular attention to how the drive signal affects each characteristic. The results show no direct relationship between the drive-signal frequency and the linearity of the sensor, although some fluctuation appears on the characteristic curves as the frequency increases. A direct relation is found between drive-signal frequency and sensitivity: as the frequency increases, the AWGMI effect increases monotonically.
This causes the amplitude of the output voltage to increase with changes in the external magnetic field, raising the sensor sensitivity; the sensor's response to low-frequency magnetic fields can likewise be enhanced by increasing the drive-signal frequency. In experiments, the best sensitivity and noise values achieved were 0.5225 mV/nT and 1.566 nT, respectively.
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
New development of the image matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Feng, Zhao
2018-04-01
To study image matching algorithms, their four elements are described: similarity measure, feature space, search space, and search strategy. Four common indexes for evaluating an image matching algorithm are also described: matching accuracy, matching efficiency, robustness, and universality. The paper then describes the principles of image matching based on gray values, on features, on frequency-domain analysis, on neural networks, and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design, and algorithm selection in practice.
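Gray-value matching, the first principle listed in this abstract, can be illustrated with a brute-force normalized cross-correlation search. This is a generic sketch, not code from the paper, and it makes the four elements concrete: the similarity measure is NCC, the feature space is raw gray values, the search space is all integer offsets, and the search strategy is exhaustive scanning:

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over the image and score each offset with
    normalized cross-correlation (NCC); return the best-scoring
    position and its score. NCC is invariant to local brightness
    offset and gain, which is why it is the classic gray-value
    similarity measure."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):          # exhaustive search strategy
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tpl = img[5:9, 7:11].copy()               # cut a patch out of the image
pos, score = ncc_match(img, tpl)          # should recover (5, 7)
```

The faster methods the paper surveys (frequency-domain, feature-based, learned) mostly attack the cost of this exhaustive scan.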
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
NASA Astrophysics Data System (ADS)
Furukawa, K.; Nio, T.; Oki, R.; Kubota, T.; Iguchi, T.
2017-09-01
The Dual-frequency Precipitation Radar (DPR) on the Global Precipitation Measurement (GPM) core satellite was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT). The objective of the GPM mission is to observe global precipitation more frequently and accurately. The GPM core satellite is a joint product of the National Aeronautics and Space Administration (NASA), JAXA, and NICT. NASA developed the satellite bus and the GPM Microwave Imager (GMI), and JAXA and NICT developed the DPR. The inclination of the GPM core satellite is 65 degrees, and the nominal flight altitude is 407 km. The non-sun-synchronous circular orbit is necessary for measuring the diurnal change of rainfall. The DPR consists of two radars: the Ku-band precipitation radar (KuPR) and the Ka-band precipitation radar (KaPR). The GPM core observatory was successfully launched by an H2A launch vehicle on Feb. 28, 2014. DPR orbital checkout was completed in May 2014. DPR products were released to the public on Sep. 2, 2014, and the Normal Observation Operation period started. JAXA is continuing DPR trend monitoring, calibration, and validation operations to confirm that the DPR keeps its function and performance on orbit. The results of DPR trend monitoring, calibration, and validation show that the DPR kept its function and performance on orbit during the 3-year-and-2-month prime mission period. The DPR prime mission period was completed in May 2017. The version 5 GPM products were released to the public in 2017. JAXA confirmed that the GPM/DPR total system performance and the GPM version 5 products achieved the success criteria and the performance indicators defined for the JAXA GPM/DPR mission.
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (the GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, will be developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, HDF5 I/F, and minimum error handling. Pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm is being developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "baseline code", was completed in January 2012. The baseline code includes the main module and eight basic sub-modules (Preparation, Vertical Profile, Classification, SRT, DSD, Solver, Input, and Output modules). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height.
It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. Pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007, and the near-real-time version operating at JAXA since 2007. The "baseline code" used the current operational GSMaP code (V5.222), and its development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the databases for rain-type classification and rain/no-rain classification, and introduction of rain-gauge correction.
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a C++ environment that is truly object-oriented and specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an nCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes, results show that the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
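The third-order polynomial scaling described above can be sketched as follows. The coefficient choice here (pinning both the value and the slope of the cubic at the input limit) is an illustrative assumption, not the paper's actual design:

```python
def cubic_gain(u, u_max, y_max):
    """Third-order polynomial input scaling y = a*u + b*u**3, with
    a and b chosen so that y(u_max) = y_max and y'(u_max) = 0:
    small cues pass with near-linear gain, large cues roll off
    smoothly and never exceed the motion-system limit for
    |u| <= u_max (illustrative coefficient choice)."""
    a = 1.5 * y_max / u_max
    b = -a / (3.0 * u_max ** 2)
    return a * u + b * u ** 3

# A 10-unit aircraft input compressed into a 5-unit motion envelope:
small = cubic_gain(0.1, u_max=10.0, y_max=5.0)   # nearly linear region
full = cubic_gain(10.0, u_max=10.0, y_max=5.0)   # lands exactly on y_max
```

Because y'(u) = a(1 - u^2/u_max^2) is non-negative on [0, u_max], the mapping is monotone and the limit is never overshot, which is the property the abstract's "remaining within the operational limits" requires.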
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work covered development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual Level 1 instrument data for specific occultation events.
NASA Technical Reports Server (NTRS)
Chen, C. P.; Wu, S. T.
1992-01-01
The objective of this investigation has been to develop an algorithm (or algorithms) to improve the accuracy and efficiency of computational fluid dynamics (CFD) models used to study the fundamental physics of combustion chamber flows, which is ultimately necessary for the design of propulsion systems such as the SSME and STME. During this three year study (May 19, 1978 - May 18, 1992), a unique algorithm was developed for all-speed flows. The newly developed algorithm combines two pressure-based algorithms, PISOC and MFICE. PISOC is a non-iterative scheme with characteristic advantages for low- and high-speed flows, while the modified FICE is an iterative scheme that has shown its efficiency and accuracy in computing flows in the transonic region. The new algorithm born from this combination applies to both time-accurate and steady-state flows, and was tested extensively for various flow conditions, such as turbulent flows, chemically reacting flows, and multiphase flows.
CAD system for footwear design based on whole real 3D data of last surface
NASA Astrophysics Data System (ADS)
Song, Wanzhong; Su, Xianyu
2000-10-01
Two major parts of the application of CAD in footwear design are studied: the development of the last surface and the computer-aided design of the planar shoe-template. A new quasi-experiential development algorithm for the last surface, based on triangulation approximation, is presented. Compared with other development algorithms for last surfaces, this algorithm consumes less time and needs no interactive operation to achieve precise development. Based on this algorithm, a software package, SHOEMAKER(TM), has been developed that contains computer-aided automatic measurement, automatic development of the last surface, and computer-aided design of the shoe-template.
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments have been conducted in the Great Lakes designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm developed was designed to extract needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
Development of an algorithm for controlling a multilevel three-phase converter
NASA Astrophysics Data System (ADS)
Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat
2017-08-01
This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of each transistor at every moment in time when constructing a computer model of a three-phase multilevel converter. The developed transistor switching algorithm ensures in-phase operation of the three-phase converter and produces a sinusoidal voltage curve at the converter output.
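One common per-instant decision in multilevel converter control can be sketched with nearest-level quantization of the reference waveform. This is a generic illustration of how a sinusoidal reference maps onto discrete output levels, not a reproduction of the paper's transistor-state algorithm:

```python
def nearest_level(reference, n_levels):
    """Quantize a reference voltage (normalized to [-1, 1]) to the
    closest of n_levels equally spaced output steps -- the level a
    multilevel converter's switching logic must then realize by
    selecting the appropriate transistor states."""
    step = 2.0 / (n_levels - 1)
    return round(reference / step) * step

# Five-level converter: available outputs are -1, -0.5, 0, 0.5, 1
lvl = nearest_level(0.3, 5)   # snaps to 0.5
```

Stepping through a full sine period this way produces the staircase waveform that approximates the sinusoidal output curve the abstract describes.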
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
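The rational kernel at the heart of RHMC is applied in partial-fraction form, turning one fractional matrix power into a sum of shifted linear solves. Below is a dense-matrix sketch of that evaluation; real implementations use multi-shift Krylov solvers on the fermion matrix, and the coefficients come from a Remez fit rather than the illustrative ones in the test:

```python
import numpy as np

def apply_rational(A, b, a0, alphas, betas):
    """Apply r(A) b, where r(x) = a0 + sum_i alphas[i] / (x + betas[i])
    is a rational approximation in partial-fraction form. Each term
    costs one shifted linear solve (A + beta_i I) x = b; in RHMC
    these stand in for fractional powers of the fermion matrix."""
    b = np.asarray(b, dtype=float)
    out = a0 * b
    eye = np.eye(len(b))
    for al, be in zip(alphas, betas):
        out = out + al * np.linalg.solve(A + be * eye, b)
    return out

# Sanity check with the trivial rational function r(x) = 1/(x + 1)
# applied to a diagonal matrix:
A = np.diag([1.0, 3.0])
y = apply_rational(A, [1.0, 1.0], 0.0, [1.0], [1.0])   # [1/2, 1/4]
```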
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, offering efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required by each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.
2005-01-01
A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
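As a point of comparison, Clenshaw's algorithm (named in the abstract) can be sketched for a cosine series; this generic single-point evaluation is an illustration only, not the paper's Bakhvalov-based DCT-I routine:

```python
import math

def clenshaw_cosine_sum(a, theta):
    """Evaluate S(theta) = sum_k a[k] * cos(k * theta) with Clenshaw's
    recurrence, using cos(k*theta) = T_k(cos theta) for Chebyshev T_k."""
    x = math.cos(theta)
    b1 = b2 = 0.0
    for ak in a[:0:-1]:                     # k = N, N-1, ..., 1
        b1, b2 = ak + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2
```

The recurrence touches each coefficient once, so the cost is linear in the series length, the same property that makes such schemes attractive for recursive DCT evaluation.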
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
NASA Technical Reports Server (NTRS)
Redemann, J.; Shinozuka, Y.; Kacenelenbogen, M.; Segal-Rozenhaimer, M.; LeBlanc, S.; Vaughan, M.; Stier, P.; Schutgens, N.
2017-01-01
We describe a technique for combining multiple A-Train aerosol data sets, namely MODIS spectral AOD (aerosol optical depth), OMI AAOD (absorption aerosol optical depth) and CALIOP aerosol backscatter retrievals (hereafter referred to as MOC retrievals), to estimate full spectral sets of aerosol radiative properties, and ultimately to calculate the 3-D distribution of direct aerosol radiative effects (DARE). We present MOC results using almost two years of data collected in 2007 and 2008, and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the MODIS Collection 6 AOD data derived with the dark target and deep blue algorithms has extended the coverage of the MOC retrievals towards higher latitudes. The MOC aerosol retrievals agree better with AERONET in terms of the single scattering albedo (SSA) at 441 nm than SSA calculated from OMI and MODIS data alone, indicating that CALIOP aerosol backscatter data contain information on aerosol absorption. We compare the spatio-temporal distribution of the MOC retrievals and MOC-based calculations of seasonal clear-sky DARE to values derived from four models that participated in the Phase II AeroCom model intercomparison initiative. Overall, the MOC-based calculations of clear-sky DARE at TOA over land are smaller (less negative) than previous model or observational estimates, owing to the inclusion of more absorbing aerosol retrievals over brighter surfaces that were not previously available for observationally based estimates of DARE. MOC-based DARE estimates at the surface over land and total (land and ocean) DARE estimates at TOA lie between previous model and observational results. Comparisons of seasonal aerosol properties to AeroCom Phase II results show generally good agreement; the best agreement with forcing results at TOA is found with GMI-MerraV3.
We discuss sampling issues that affect the comparisons and the major challenges in extending our clear-sky DARE results to all-sky conditions. We present estimates of clear-sky and all-sky DARE and show uncertainties that stem from the assumptions in the spatial extrapolation and accuracy of aerosol and cloud properties, in the diurnal evolution of these properties, and in the radiative transfer calculations.
Semantic super networks: A case analysis of Wikipedia papers
NASA Astrophysics Data System (ADS)
Kostyuchenko, Evgeny; Lebedeva, Taisiya; Goritov, Alexander
2017-11-01
An algorithm for constructing super-large semantic networks is developed in the current work. The algorithm was tested using the "Cosmos" category of the Internet encyclopedia Wikipedia as an example. During the implementation, a parser for the syntax analysis of Wikipedia pages was developed, and a graph based on the list of articles and categories was formed. On the basis of analysis of the obtained graph, algorithms for finding domains of high connectivity in the graph were proposed and tested. Algorithms for constructing a domain based on the number of links and on the number of articles in the current subject area are considered. The shortcomings of these algorithms are shown and explained, and an algorithm based on their joint use is developed. The possibility of applying the combined algorithm for obtaining the final domain is shown. The problem of instability of the resulting domain was discovered when starting the algorithm from two neighboring vertices belonging to the domain.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
Development of model reference adaptive control theory for electric power plant control applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabius, L.E.
1982-09-15
The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., model-following control law) and Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics, and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature, and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.
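As a hedged, scalar illustration of model-reference adaptive control (the report's design is MIMO and built on Command Generator Tracker methods, which are not reproduced here), a first-order MRAC loop with the classic MIT adaptation rule might look like:

```python
def mrac_mit_rule(r, a=1.0, b=2.0, am=3.0, gamma=0.5, dt=0.01):
    """First-order MRAC sketch with the MIT rule (illustrative values only):
    plant   y'  = -a*y + b*u
    model   ym' = -am*ym + am*r(t)
    control u   = theta * r(t),  adaptation  theta' = -gamma * e * ym,
    where e = y - ym is the model-following error."""
    y = ym = theta = 0.0
    errors = []
    for rk in r:
        u = theta * rk
        e = y - ym
        dy = -a * y + b * u                 # plant dynamics
        dym = -am * ym + am * rk            # reference model dynamics
        dtheta = -gamma * e * ym            # MIT-rule gain adaptation
        y += dt * dy
        ym += dt * dym
        theta += dt * dtheta
        errors.append(abs(y - ym))
    return errors
```

For a constant reference the adapted gain settles near theta = a/b, at which point the plant output tracks the reference model and the error decays to zero.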
Fast and stable algorithms for computing the principal square root of a complex matrix
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.
1987-01-01
This note presents recursive algorithms that are rapidly convergent and stable for finding the principal square root of a complex matrix. The developed algorithms are also used to derive fast and stable matrix sign algorithms, which are useful in control system applications.
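The note's own recursions are not reproduced in the abstract; as a stand-in, the well-known Denman-Beavers iteration illustrates a stable, rapidly convergent recursion for the principal matrix square root:

```python
import numpy as np

def sqrtm_db(A, iters=50, tol=1e-12):
    """Denman-Beavers iteration, a classic stable recursion for the principal
    matrix square root (the note's own algorithms differ in detail).
    Y converges to sqrt(A) and Z to inv(sqrt(A))."""
    Y = np.asarray(A, dtype=complex)
    Z = np.eye(Y.shape[0], dtype=complex)
    for _ in range(iters):
        Y_next = 0.5 * (Y + np.linalg.inv(Z))
        Z_next = 0.5 * (Z + np.linalg.inv(Y))
        if np.linalg.norm(Y_next - Y) <= tol * np.linalg.norm(Y_next):
            return Y_next
        Y, Z = Y_next, Z_next
    return Y
```

The coupled updates average each iterate with the inverse of its partner, which is what gives the scheme its numerical stability compared with the naive Newton recursion.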
Development and validation of an algorithm for laser application in wound treatment 1
da Cunha, Diequison Rite; Salomé, Geraldo Magela; Massahud, Marcelo Renato; Mendes, Bruno; Ferreira, Lydia Masako
2017-01-01
ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review was performed in the Health Sciences databases of the past ten years. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach’s alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals’ responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants’ opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability using the Cronbach’s alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds. PMID:29211197
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory, and created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening, and validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm, and human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
Poussier, Stéphane; Thoquet, Philippe; Trigalet-Demery, Danièle; Barthet, Séverine; Meyer, Damien; Arlat, Matthieu; Trigalet, André
2003-08-01
Ralstonia solanacearum is a plant pathogenic bacterium that undergoes a spontaneous phenotypic conversion (PC) from a wild-type pathogenic to a non-pathogenic form. PC is often associated with mutations in phcA, which is a key virulence regulatory gene. Until now, reversion to the wild-type pathogenic form has not been observed for PC variants and the biological significance of PC has been questioned. In this study, we characterized various alterations in phcA (eight IS element insertions, three tandem duplications, seven deletions and a base substitution) in 19 PC mutants from the model strain GMI1000. In five of these variants, reversion to the pathogenic form was observed in planta, while no reversion was ever noticed in vitro, whatever culture medium was used. However, reversion was observed for a 64 bp tandem duplication in vitro in the presence of tomato root exudate. This is the first report showing a complete cycle of phenotypic conversion/reversion in a plant pathogenic bacterium.
Castillo Diaz, Jean Manuel; Delgado-Moreno, Laura; Núñez, Rafael; Nogales, Rogelio; Romero, Esperanza
2016-08-01
In biobed bioremediation systems (BBSs) with vermicomposts exposed to a high load of pesticides, 6 bacterial and 4 fungal strains were isolated, identified, and investigated to enhance the removal of pesticides. Three different mixtures of BBSs composed of vermicomposts made from greenhouse (GM), olive-mill (OM) and winery (WM) wastes were contaminated, inoculated, and incubated for one month (GMI, OMI and WMI). Maintenance of the inocula was evaluated by DGGE and Q-PCR. Pesticides were monitored by HPLC-DAD. The highest bacterial and fungal abundance was observed in WMI and OMI, respectively. In WMI, the consortia improved the removal of tebuconazole, metalaxyl, and oxyfluorfen by 1.6-, 3.8-, and 7.7-fold, respectively. The dissipation of oxyfluorfen was also accelerated in OMI, with less than 30% remaining after 30 d. One metabolite for metalaxyl and 4 for oxyfluorfen were identified by GC-MS. The isolates could be suitable to improve the efficiency of bioremediation systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jia, Dongming; Manz, Jörn; Yang, Yonggang
2018-04-01
The planar boron cluster B13+ provides a model to investigate the microscopic origin of the second law of thermodynamics in a small system. It is a molecular rotor with an inner wheel that rotates in an outer bearing. The cyclic reaction path of B13+ passes along thirty equivalent global minimum structures (GMi, i = 1, 2, ..., 30). The GMs are embedded in a cyclic thirty-well potential. They are separated by thirty equivalent transition states with potential barrier Vb. If the boron rotor B13+ is prepared initially in one of the thirty GMs, with energy below Vb, then it tunnels sequentially to its nearest, next-nearest, etc. neighbors (520 fs per step) such that all the other GMs get populated. As a consequence, the entropy of occupying the GMs takes about 6 ps to increase from zero to a value close to the maximum value for equidistribution. Perfect recurrences are practically not observable.
Development of Educational Support System for Algorithm using Flowchart
NASA Astrophysics Data System (ADS)
Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta
Recently, information technology has become indispensable for business and industrial development. However, the insufficient number of software developers has become a social problem. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers based on flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easier for a programmer to understand the algorithms in programs.
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
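A minimal sketch of the simulated annealing component, with an assumed simple objective and neighbor function standing in for the paper's Monte Carlo reliability simulations:

```python
import math
import random

def simulated_annealing(objective, x0, neighbor,
                        t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing sketch: always accept improvements and
    accept worse solutions with probability exp(-delta/T), cooling T
    geometrically. (GP-B design variables and the Monte Carlo objective
    from the paper are not reproduced here.)"""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

In the paper's setting the objective evaluation itself would be a Monte Carlo simulation of spacecraft performance, which is what makes random-search methods attractive: they need only function values, not gradients.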
Algorithmic formulation of control problems in manipulation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1975-01-01
The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or better than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and accuracy comparable to or slightly below that of memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
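The merging step can be sketched schematically; the substring-based redundancy filter below is an assumed simplification, not the paper's actual quality criteria:

```python
def merge_assemblies(assemblies, min_len=50):
    """Schematic merge step for a divide-and-conquer assembly: pool the
    per-library transcripts, drop short ones, and keep only transcripts
    that are not contained in an already-kept longer transcript.
    (The paper's merging algorithm uses richer quality criteria.)"""
    pooled = sorted({t for asm in assemblies for t in asm},
                    key=len, reverse=True)
    kept = []
    for t in pooled:
        if len(t) >= min_len and not any(t in longer for longer in kept):
            kept.append(t)
    return kept
```

The point of the divide-and-conquer design is that each per-library assembly fits in memory on its own; only this lightweight merge ever sees all libraries at once.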
HOLA: Human-like Orthogonal Network Layout.
Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael
2016-01-01
Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
The threshold algorithm: Description of the methodology and new developments
NASA Astrophysics Data System (ADS)
Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian
2017-10-01
Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach starting from the fundamental concepts in complex energy landscapes and present recent new developments, the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.
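A one-dimensional sketch of the basic threshold idea, with an illustrative energy function (the actual algorithm explores high-dimensional configuration spaces):

```python
import random

def threshold_run(energy, x0, lid, step=0.1, n=10000, seed=1):
    """Threshold algorithm sketch: a random walk that accepts any move whose
    energy lies below a prescribed lid, thereby exploring exactly the pocket
    of the landscape reachable without crossing barriers higher than the lid."""
    rng = random.Random(seed)
    x = x0
    visited = [x]
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        if energy(y) < lid:     # accept anything below the lid, reject above
            x = y
            visited.append(x)
    return visited
```

Repeating such runs for a sequence of lids maps out which minima become mutually accessible at which energies, which is how the generalized barriers between stable states are estimated.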
Developing an Enhanced Lightning Jump Algorithm for Operational Use
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.
2009-01-01
Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). Four lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10, and Threshold 8). Five algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59%, and an HSS of 75%.
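A minimal sketch of a 2σ-style jump test, with an assumed five-sample rate-change window (the operational algorithm's exact windowing and thresholds are not reproduced here):

```python
import statistics

def lightning_jump_2sigma(flash_rates):
    """2-sigma lightning jump sketch (assumed simplification): flag a jump
    when the latest change in total flash rate exceeds twice the standard
    deviation of the preceding rate changes in a short trailing window."""
    jumps = []
    for i in range(5, len(flash_rates)):
        dfrdt = [flash_rates[j] - flash_rates[j - 1]
                 for j in range(i - 4, i + 1)]      # five recent rate changes
        sigma = statistics.pstdev(dfrdt[:-1])       # spread of prior changes
        if sigma > 0 and dfrdt[-1] > 2.0 * sigma:
            jumps.append(i)
    return jumps
```

Framing the test in units of the storm's own recent variability is what lets the same threshold separate severe jumps from the ordinary flash-rate fluctuations of nonsevere convection.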
Hull Form Design and Optimization Tool Development
2012-07-01
global minimum. The algorithm accomplishes this by using a method known as metaheuristics, which allows the algorithm to examine a large area by... further development of these tools including the implementation and testing of a new optimization algorithm, the improvement of a rapid hull form... under the 2012 Naval Research Enterprise Intern Program.
NASA Astrophysics Data System (ADS)
Gong, Weiwei; Zhou, Xu
2017-06-01
In computer science, the Boolean Satisfiability Problem (SAT) is the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. SAT was one of the first problems proven to be NP-complete, and it is fundamental to artificial intelligence, algorithm design, and hardware design. This paper reviews the main SAT solver algorithms of recent years, including serial SAT algorithms, parallel SAT algorithms, SAT algorithms based on GPUs, and SAT algorithms based on FPGAs. The development of SAT solving is analyzed comprehensively in this paper. Finally, several possible directions for the development of the SAT problem are proposed.
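Among the serial algorithms reviewed, the classic DPLL procedure underlies most modern SAT solvers; a minimal sketch:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL sketch: simplification, unit propagation, and branching.
    Clauses are lists of nonzero ints; a negative literal means NOT.
    Returns a satisfying {var: bool} assignment, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                           # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                        # clause falsified: conflict
        simplified.append(rest)
    if not simplified:
        return assignment                      # every clause satisfied
    for clause in simplified:                  # unit propagation
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    v = abs(simplified[0][0])                  # branch on first free variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None
```

Modern CDCL solvers extend this skeleton with learned clauses, non-chronological backtracking, and decision heuristics, but the propagate-then-branch structure is the same.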
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.
Esteban, Santiago; Rodríguez Tablado, Manuel; Peper, Francisco; Mahumud, Yamila S; Ricci, Ricardo I; Kopitowski, Karin; Terrasa, Sergio
2017-01-01
Precision medicine requires extremely large samples. Electronic health records (EHR) are thought to be a cost-effective source of data for that purpose. Phenotyping algorithms help reduce classification errors, making EHR a more reliable source of information for research. Four algorithm development strategies for classifying patients according to their diabetes status (diabetic; non-diabetic; inconclusive) were tested (one codes-only algorithm, one Boolean algorithm, four statistical learning algorithms, and six stacked generalization meta-learners). The best-performing algorithms within each strategy were tested on the validation set. The stacked generalization algorithm yielded the highest Kappa coefficient value in the validation set (0.95, 95% CI 0.91, 0.98). The implementation of these algorithms allows for the accurate exploitation of data from thousands of patients, greatly reducing the costs of constructing retrospective cohorts for research.
The advanced progress of precoding technology in 5g system
NASA Astrophysics Data System (ADS)
An, Chenyi
2017-09-01
With the development of technology, users have begun to place higher demands on mobile systems, and the emergence of 5G has changed the trajectory of mobile communication technology. In research on the core technologies of 5G mobile communication, large-scale MIMO and precoding are research hotspots. Current research on precoding in 5G systems analyzes the main linear precoding methods: the maximum ratio transmission (MRT) precoding algorithm, the zero-forcing (ZF) precoding algorithm, the minimum mean square error (MMSE) precoding algorithm, and precoding based on the maximum signal-to-leakage-and-noise ratio (SLNR). These precoding algorithms are analyzed and summarized in detail. Nonlinear precoding methods, such as dirty paper precoding and the THP precoding algorithm, are also examined. Through this analysis, we identify the advantages and disadvantages of each algorithm as well as its development trend, and survey the current state of precoding technology in 5G systems. The results and data of this paper can therefore serve as a reference for the development of precoding technology in 5G systems.
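A minimal sketch of zero-forcing (ZF) precoding, one of the linear methods surveyed; the unit-power normalization convention is an assumption:

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoder sketch: W = H^H (H H^H)^{-1}, normalized here to
    unit total transmit power. Then H @ W is a scaled identity, so each
    user's stream arrives free of inter-user interference."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W)
```

ZF trades noise enhancement for interference cancellation, which is the basic tension the MMSE and SLNR criteria mentioned above are designed to balance.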
Pike, Nancy A; Poulsen, Marie K; Woo, Mary A
Cognitive deficits are common, long-term sequelae in children and adolescents with congenital heart disease (CHD) who have undergone surgical palliation. However, there is a lack of a validated brief cognitive screening tool appropriate for the outpatient setting for adolescents with CHD. One candidate instrument is the Montreal Cognitive Assessment (MoCA) questionnaire. The purpose of the research was to validate scores from the MoCA against the General Memory Index (GMI) of the Wide Range Assessment of Memory and Learning, 2nd Edition (WRAML2), a widely accepted measure of cognition/memory, in adolescents and young adults with CHD. We administered the MoCA and the WRAML2 to 156 adolescents and young adults ages 14-21 (80 youth with CHD and 76 healthy controls who were gender and age matched). Spearman's rank order correlations were used to assess concurrent validity. To assess construct validity, the Mann-Whitney U test was used to compare differences in scores in youth with CHD and the healthy control group. Receiver operating characteristic curves were created and area under the curve, sensitivity, specificity, positive predictive value, and negative predictive value were also calculated. The MoCA median scores in the CHD versus healthy controls were (23, range 15-29 vs. 28, range 22-30; p < .001), respectively. With the screening cutoff scores at <26 points for the MoCA and 85 for GMI (<1 SD, M = 100, SD = 15), the CHD versus healthy control groups showed sensitivity of .96 and specificity of .67 versus sensitivity of .75 and specificity of .90, respectively, in the detection of cognitive deficits. A cutoff score of 26 on the MoCA was optimal in the CHD group; a cutoff of 25 had similar properties except for a lower negative predictive value. The area under the receiver operating characteristic curve (95% CI) for the MoCA was 0.84 (95% CI [0.75, 0.93], p < .001) and 0.84 (95% CI [0.62, 1.00], p = .02) for the CHD and controls, respectively. 
Scores on the MoCA were valid for screening to detect cognitive deficits in adolescents and young adults aged 14-21 with CHD when a cutoff score of 26 is used to differentiate youth with and without significant cognitive impairment. Future studies are needed in other adolescent disease groups with known cognitive deficits and healthy populations to explore the generalizability of validity of MoCA scores in adolescents and young adults.
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200× speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs, where our algorithms show good scaling on up to 16,384 cores.
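For contrast with the parallel matrix-algebraic formulation, the standard sequential greedy baseline for maximal matching can be sketched as:

```python
def greedy_maximal_matching(edges):
    """Sequential greedy baseline (not the paper's parallel matrix-algebraic
    algorithms): scan the edge list once, adding any edge whose endpoints are
    both still free. The result is maximal (no edge can be added) though not
    necessarily maximum cardinality."""
    matched_left, matched_right, matching = set(), set(), []
    for u, v in edges:
        if u not in matched_left and v not in matched_right:
            matched_left.add(u)
            matched_right.add(v)
            matching.append((u, v))
    return matching
```

The inherently one-edge-at-a-time nature of this scan is exactly what the matrix-algebraic formulation avoids by matching many free vertices per bulk-synchronous step.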
HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING
A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If that algorithm is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms
Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon
2011-01-01
Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532
Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all of the state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion on the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
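The full-space update described above can be made concrete in a tabular sketch: every state-action pair is refreshed in each sweep, rather than a single visited pair. This is a hedged illustration only; the paper uses neural-network approximation, and the cost-minimizing (ADP-style) convention, array layout, and function name are assumptions.

```python
import numpy as np

def q_learning_full_sweep(next_state, cost, gamma=0.9, iters=300):
    """Deterministic Q-learning in which EVERY state-action pair is
    updated per iteration.

    next_state[s, a] : index of the successor state (deterministic).
    cost[s, a]       : immediate cost of taking action a in state s.
    Update: Q_{i+1}(s, a) = cost(s, a) + gamma * min_{a'} Q_i(s', a').
    """
    Q = np.zeros_like(cost, dtype=float)
    for _ in range(iters):
        V = Q.min(axis=1)                  # greedy value of each state
        Q = cost + gamma * V[next_state]   # synchronous full-space sweep
    return Q
```

Because the update is a contraction with modulus gamma, the iterate approaches the Bellman fixed point geometrically, which is the behavior the paper's convergence criteria formalize.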
The Global Precipitation Measurement (GPM) Mission: Overview and U.S. Status
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Azarbarzin, Ardeshir A.; Kakar, Ramesh K.; Neeck, Steven
2011-01-01
The Global Precipitation Measurement (GPM) Mission is an international satellite mission specifically designed to unify and advance precipitation measurements from a constellation of research and operational microwave sensors. The cornerstone of the GPM mission is the deployment of a Core Observatory in a 65 deg non-Sun-synchronous orbit to serve as a physics observatory and a transfer standard for inter-calibration of constellation radiometers. The GPM Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a conical-scanning multi-channel (10-183 GHz) GPM Microwave Imager (GMI). The first space-borne dual-frequency radar will provide not only measurements of 3-D precipitation structures but also quantitative information on microphysical properties of precipitating particles needed for improving precipitation retrievals from passive microwave sensors. The combined use of DPR and GMI measurements will place greater constraints on radiometer retrievals to improve the accuracy and consistency of precipitation estimates from all constellation radiometers. The GPM constellation is envisioned to comprise five or more conical-scanning microwave radiometers and four or more cross-track microwave sounders on operational satellites. NASA and the Japan Aerospace Exploration Agency (JAXA) plan to launch the GPM Core in July 2013. NASA will provide a second radiometer to be flown on a partner-provided GPM Low-Inclination Observatory (LIO) to improve near real-time monitoring of hurricanes and mid-latitude storms. NASA and the Brazilian Space Program (AEB/INPE) are currently engaged in a one-year study on a potential LIO partnership. JAXA will contribute to GPM data from the Global Change Observation Mission-Water (GCOM-W) satellite. Additional partnerships are under development to include microwave radiometers on the French-Indian Megha-Tropiques satellite and U.S. 
Defense Meteorological Satellite Program (DMSP) satellites, as well as cross-track scanning humidity sounders on operational satellites such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), POES, the NASA/NOAA Joint Polar Satellite System (JPSS), and EUMETSAT MetOp satellites. Data from Chinese and Russian microwave radiometers may also become available through international collaboration under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). The current generation of global rainfall products combines observations from a network of uncoordinated satellite missions using a variety of merging techniques. Relative to current data products, GPM's "next-generation" precipitation products will be characterized by: (1) more accurate instantaneous precipitation estimates (especially for light rain and cold-season solid precipitation), (2) more frequent sampling by an expanded constellation of microwave radiometers including operational humidity sounders over land, (3) intercalibrated microwave brightness temperatures from constellation radiometers within a unified framework, and (4) physically-based precipitation retrievals from constellation radiometers using a common a priori hydrometeor database constrained by combined radar/radiometer measurements provided by the GPM Core Observatory. An overview of the GPM mission concept, the U.S. GPM program status and updates on international science collaborations on GPM will be presented.
An ATR architecture for algorithm development and testing
NASA Astrophysics Data System (ADS)
Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym
2013-05-01
A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
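The exchangeable-module idea can be sketched with a shared interface: two segmentation algorithms implement the same signature, so either can be swapped in without touching the rest of the pipeline. All names below are illustrative assumptions; FFI's actual framework is a C++ pipeline, not this Python toy.

```python
from typing import Callable
import numpy as np

# One shared interface for segmentation modules.
Segmenter = Callable[[np.ndarray], np.ndarray]

def mean_threshold_segmenter(img: np.ndarray) -> np.ndarray:
    """Label pixels brighter than the image mean."""
    return (img > img.mean()).astype(np.uint8)

def percentile_segmenter(img: np.ndarray) -> np.ndarray:
    """Label only the brightest 10% of pixels."""
    return (img > np.percentile(img, 90)).astype(np.uint8)

def run_atr(img: np.ndarray, segmenter: Segmenter) -> np.ndarray:
    """Minimal pipeline stage: downstream modules (tracking,
    classification) would consume the mask unchanged regardless of
    which segmenter produced it."""
    return segmenter(img)
```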
NASA Technical Reports Server (NTRS)
Phinney, D. E. (Principal Investigator)
1980-01-01
An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
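The core operation, maximizing the cross-correlation between a reference greenness profile and an observed one, can be sketched as follows. This is a hedged reconstruction of the idea, not the operational AgRISTARS code; the function name and the mean-removal step are assumptions.

```python
import numpy as np

def peak_shift(reference, observed):
    """Estimate the shift (in samples, e.g. days) that best aligns an
    observed greenness profile with a reference profile by maximizing
    their cross-correlation. A positive result means the observed peak
    occurs later than the reference peak.
    """
    ref = reference - reference.mean()
    obs = observed - observed.mean()
    corr = np.correlate(obs, ref, mode="full")
    lags = np.arange(-len(reference) + 1, len(observed))
    return int(lags[np.argmax(corr)])
```

Once the date of peak spectral response is estimated this way, the abstract's second step maps that date to a crop development stage through a separately fitted model.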
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
Seizures in the elderly: development and validation of a diagnostic algorithm.
Dupont, Sophie; Verny, Marc; Harston, Sandrine; Cartz-Piver, Leslie; Schück, Stéphane; Martin, Jennifer; Puisieux, François; Alecu, Cosmin; Vespignani, Hervé; Marchal, Cécile; Derambure, Philippe
2010-05-01
Seizures are frequent in the elderly, but their diagnosis can be challenging. The objective of this work was to develop and validate an expert-based algorithm for the diagnosis of seizures in elderly people. A multidisciplinary group of neurologists and geriatricians developed a diagnostic algorithm using a combination of selected clinical, electroencephalographical and radiological criteria. The algorithm was validated by multicentre retrospective analysis of data of patients referred for specific symptoms and classified by the experts as epileptic patients or not. The algorithm was applied to all the patients, and the diagnosis provided by the algorithm was compared to the clinical diagnosis of the experts. Twenty-nine clinical, electroencephalographical and radiological criteria were selected for the algorithm. According to the criteria combination, seizures were classified into four levels of diagnosis: certain, highly probable, possible or improbable. To validate the algorithm, the medical records of 269 elderly patients were analyzed (138 with epileptic seizures, 131 with non-epileptic manifestations). Patients were mainly referred for a transient focal deficit (40%), confusion (38%), or unconsciousness (27%). The algorithm best classified certain and probable seizures versus possible and improbable seizures, with 86.2% sensitivity and 67.2% specificity. Using logistical regression, 2 simplified models were developed, the first with 13 criteria (Se 85.5%, Sp 90.1%), and the second with 7 criteria only (Se 84.8%, Sp 88.6%). In conclusion, the present study validated the use of a revised diagnostic algorithm to help diagnose epileptic seizures in the elderly. A prospective study is planned to further validate this algorithm. Copyright 2010 Elsevier B.V. All rights reserved.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Real-time Monitoring of 2017 Hurricanes and Typhoons with Lightning
NASA Astrophysics Data System (ADS)
Solorzano, N. N.; Thomas, J. N.; Bracy, C.; Holzworth, R. H., II
2017-12-01
The 2017 Atlantic season had the highest number of major hurricanes since 2005. To meet the demand for real-time tropical cyclone (TC) monitoring, our group has developed a unique "storm-following" satellite and ground-based lightning product known as WWLLN-TC (World Wide Lightning Location Network - Tropical Cyclones; http://wwlln.net/storms/). In the present study, we explore this tool and other datasets, combining lightning and microwave data to quantify areas of intense convection in 2017 TCs Harvey, Hato, Irma, Maria, Nate, Ophelia and others. For each storm, the temporal distribution of discharges outside and within the inner core is compared to the changes in TC intensity. The intensification processes, monitored in near real-time by WWLLN-TC, are quantified in terms of pressure and/or wind speed changes. A peak in lightning activity is often observed in the inner core of TCs before and during rapid weakening, such as in Hurricanes Irma and Maria and Typhoon Hato. The microwave frequencies investigated include the 37 to 183 GHz channels of the satellite sensors DMSP/SSMIS and GPM/GMI. We reconstruct brightness temperatures from lightning data, providing more detailed pictures of the evolution of TCs at moments when satellite passes are missing or incomplete. This study also compares lightning activity in the inner core with convective and environmental parameters. Examples of environmental parameters discussed are sea surface temperature, wind shear, and sea surface height anomalies. We conclude by considering possible implications of WWLLN-TC for forecasts of rapid intensity change and rainfall.
DOT National Transportation Integrated Search
1976-04-01
The development and testing of incident detection algorithms were based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on time series and pattern recognition techniques. Attention was given to the effects o...
A new algorithm for attitude-independent magnetometer calibration
NASA Technical Reports Server (NTRS)
Alonso, Roberto; Shuster, Malcolm D.
1994-01-01
A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
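The "attitude-independent" property rests on comparing only magnitudes: |B_meas - b| should equal |B_model| regardless of spacecraft orientation. A first-cut linearized least-squares step in that spirit is sketched below; it drops the quadratic |b|^2 term, so it is a hedged illustration of the heuristic-style approach, not Alonso and Shuster's full statistically correct estimator.

```python
import numpy as np

def magnetometer_bias(B_meas, B_model_mag):
    """Linearized attitude-independent magnetometer bias estimate.

    From |B_meas - b|^2 = |B_model|^2, expanding and dropping the small
    quadratic term |b|^2 gives the linear least-squares problem
        2 * B_meas . b  =  |B_meas|^2 - |B_model|^2.

    B_meas      : (N, 3) measured field vectors in the body frame.
    B_model_mag : (N,) reference field magnitudes from a geomagnetic model.
    """
    y = (B_meas ** 2).sum(axis=1) - B_model_mag ** 2
    H = 2.0 * B_meas
    b, *_ = np.linalg.lstsq(H, y, rcond=None)
    return b
```

For small biases the neglected |b|^2 term introduces only a second-order error, which is why such a linear step converges quickly and is a common initializer for the exact treatment.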
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
NASA Astrophysics Data System (ADS)
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
2018-02-01
In order to meet the growing mobility requirements for wheeled vehicles on all types of terrain, engineers have to develop a large number of specialized control algorithms for the multi-axle wheeled vehicle (MWV) suspension, improving such qualities as ride comfort, handling and stability. The authors have developed an adaptive algorithm for the dynamic damping of the MWV body oscillations. The algorithm provides high ride comfort and high mobility of the vehicle. The article discloses a method for the synthesis of an adaptive dynamic continuous algorithm of MWV body oscillation damping and provides simulation results proving the high efficiency of the developed control algorithm.
Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data.
Horwitz, Leora I; Grady, Jacqueline N; Cohen, Dorothy B; Lin, Zhenqiu; Volpe, Mark; Ngo, Chi K; Masica, Andrew L; Long, Theodore; Wang, Jessica; Keenan, Megan; Montague, Julia; Suter, Lisa G; Ross, Joseph S; Drye, Elizabeth E; Krumholz, Harlan M; Bernheim, Susannah M
2015-10-01
It is desirable not to include planned readmissions in readmission measures because they represent deliberate, scheduled care. To develop an algorithm to identify planned readmissions, describe its performance characteristics, and identify improvements. Consensus-driven algorithm development and chart review validation study at 7 acute-care hospitals in 2 health systems. For development, all discharges qualifying for the publicly reported hospital-wide readmission measure. For validation, all qualifying same-hospital readmissions that were characterized by the algorithm as planned, and a random sampling of same-hospital readmissions that were characterized as unplanned. We calculated weighted sensitivity and specificity, and positive and negative predictive values of the algorithm (version 2.1), compared to gold standard chart review. In consultation with 27 experts, we developed an algorithm that characterizes 7.8% of readmissions as planned. For validation we reviewed 634 readmissions. The weighted sensitivity of the algorithm was 45.1% overall, 50.9% in large teaching centers and 40.2% in smaller community hospitals. The weighted specificity was 95.9%, positive predictive value was 51.6%, and negative predictive value was 94.7%. We identified 4 minor changes to improve algorithm performance. The revised algorithm had a weighted sensitivity 49.8% (57.1% at large hospitals), weighted specificity 96.5%, positive predictive value 58.7%, and negative predictive value 94.5%. Positive predictive value was poor for the 2 most common potentially planned procedures: diagnostic cardiac catheterization (25%) and procedures involving cardiac devices (33%). An administrative claims-based algorithm to identify planned readmissions is feasible and can facilitate public reporting of primarily unplanned readmissions. © 2015 Society of Hospital Medicine.
3D Protein structure prediction with genetic tabu search algorithm
2010-01-01
Background Protein structure prediction (PSP) has important applications in different fields, such as drug design, disease prediction, and so on. In protein structure prediction, there are two important issues. The first one is the design of the structure model and the second one is the design of the optimization technology. Because of the complexity of the realistic protein structure, the structure model adopted in this paper is a simplified model, which is called off-lattice AB model. After the structure model is assumed, optimization technology is needed for searching the best conformation of a protein sequence based on the assumed structure model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve the global optimization problem. In this paper, a hybrid algorithm, which combines genetic algorithm (GA) and tabu search (TS) algorithm, is developed to complete this task. Results In order to develop an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm. The combined use of these strategies can improve the efficiency of the algorithm. In these strategies, tabu search introduced into the crossover and mutation operators can improve the local search capability, the adoption of variable population size strategy can maintain the diversity of the population, and the ranking selection strategy can improve the possibility of an individual with low energy value entering into next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. Conclusions The hybrid algorithm has the advantages from both genetic algorithm and tabu search algorithm. 
It makes use of the advantage of multiple search points in genetic algorithm, and can overcome poor hill-climbing capability in the conventional genetic algorithm by using the flexible memory functions of TS. Compared with some previous algorithms, GATS algorithm has better performance in global optimization and can predict 3D protein structure more effectively. PMID:20522256
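The tabu search component that the hybrid leans on, a short-term memory of recent moves plus an aspiration rule that overrides it for new best solutions, can be sketched on bit strings with a single-bit-flip neighborhood. This toy is a hedged illustration of the TS mechanics only, not the off-lattice AB-model implementation; all names and parameters are assumptions.

```python
import numpy as np

def tabu_search(energy, n_bits=12, iters=100, tenure=5, seed=0):
    """Minimize `energy` over bit strings with steepest-descent tabu search.

    A flipped bit becomes tabu for `tenure` iterations, discouraging
    cycling; the aspiration criterion still allows a tabu flip whenever
    it improves on the best energy found so far.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n_bits)
    best, best_e = x.copy(), energy(x)
    tabu = {}  # bit index -> iteration until which flipping it is tabu
    for it in range(iters):
        cand_e, cand_bit = None, None
        for i in range(n_bits):
            y = x.copy()
            y[i] ^= 1
            e = energy(y)
            # skip tabu moves unless they beat the best (aspiration)
            if tabu.get(i, -1) >= it and e >= best_e:
                continue
            if cand_e is None or e < cand_e:
                cand_e, cand_bit = e, i
        if cand_bit is None:
            continue  # every move tabu this iteration
        x[cand_bit] ^= 1
        tabu[cand_bit] = it + tenure
        if cand_e < best_e:
            best, best_e = x.copy(), cand_e
    return best, best_e
```

The memory structure is what lets TS climb out of local minima that trap a plain hill-climber, which is the "flexible memory" advantage the abstract credits to the GA+TS hybrid.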
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1989-01-01
The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
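The preconditioner proposed in the abstract, a diagonal matrix built from the diagonal of the inertia matrix, is the classic Jacobi preconditioner. A dense sketch of preconditioned conjugate gradient with that choice follows; it is a generic SPD solver for illustration, not the paper's version, which additionally exploits the structure of the manipulator inertia computation.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD A with M = diag(A).

    Applying the preconditioner is just an elementwise division, so the
    per-iteration overhead over plain CG is negligible.
    """
    d = np.diag(A)                 # Jacobi preconditioner M = diag(A)
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # residual
    z = r / d                      # preconditioned residual M^{-1} r
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```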
Petri nets SM-cover based on heuristic coloring algorithm
NASA Astrophysics Data System (ADS)
Tkacz, Jacek; Doligalski, Michał
2015-09-01
In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The algorithm reduces the Petri net in order to lower the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.
Experimental testing of four correction algorithms for the forward scattering spectrometer probe
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.
1992-01-01
Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
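The basic concepts named in the abstract, fitness-proportionate selection, crossover, and mutation, fit in a short generational loop. The sketch below is a toy illustration of those concepts under the assumption of positive fitness values; it is not the Splicer tool itself, and all names and parameters are invented for illustration.

```python
import numpy as np

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      p_mut=0.02, seed=1):
    """Minimal generational GA over bit strings.

    `fitness` must return a positive number (required by the
    roulette-wheel selection below). Returns the best individual in the
    final population and its fitness.
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # fitness-proportionate ("roulette wheel") selection
        parents = pop[rng.choice(pop_size, size=pop_size, p=fit / fit.sum())]
        # one-point crossover on consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_bits)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # independent bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], int(fit.max())
```

On a toy problem such as OneMax (maximize the number of 1 bits), the loop above rapidly concentrates the population near the optimum, which is the Darwinian-selection intuition the abstract describes.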
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well-suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves performance comparable to the standard linear quadratic regulator.
Time-saving impact of an algorithm to identify potential surgical site infections.
Knepper, B C; Young, H; Jenkins, T C; Price, C S
2013-10-01
To develop and validate a partially automated algorithm to identify surgical site infections (SSIs) using commonly available electronic data to reduce manual chart review. Retrospective cohort study of patients undergoing specific surgical procedures over a 4-year period from 2007 through 2010 (algorithm development cohort) or over a 3-month period from January 2011 through March 2011 (algorithm validation cohort). A single academic safety-net hospital in a major metropolitan area. Patients undergoing at least 1 included surgical procedure during the study period. Procedures were identified in the National Healthcare Safety Network; SSIs were identified by manual chart review. Commonly available electronic data, including microbiologic, laboratory, and administrative data, were identified via a clinical data warehouse. Algorithms using combinations of these electronic variables were constructed and assessed for their ability to identify SSIs and reduce chart review. The most efficient algorithm identified in the development cohort combined microbiologic data with postoperative procedure and diagnosis codes. This algorithm resulted in 100% sensitivity and 85% specificity. The algorithm saved almost 600 person-hours of chart review. The algorithm demonstrated similar sensitivity on application to the validation cohort. A partially automated algorithm to identify potential SSIs was highly sensitive and dramatically reduced the amount of manual chart review required of infection control personnel during SSI surveillance.
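The screening rule described above combines microbiologic results with postoperative procedure and diagnosis codes to decide which charts need manual review. A minimal sketch of that kind of flagging rule follows; the field names and codes are illustrative stand-ins, not the study's actual schema:

```python
def flag_possible_ssi(record):
    """Flag a surgical case for manual chart review if any electronic
    signal of a surgical site infection is present. Field names and
    codes are hypothetical, for illustration only."""
    positive_culture = bool(record.get("postop_wound_cultures"))
    infection_dx = any(code.startswith("998.5")   # ICD-9 postoperative infection
                       for code in record.get("postop_diagnosis_codes", []))
    reoperation = record.get("postop_return_to_or", False)
    return positive_culture or infection_dx or reoperation

cases = [
    {"postop_wound_cultures": ["S. aureus"], "postop_diagnosis_codes": []},
    {"postop_wound_cultures": [], "postop_diagnosis_codes": ["998.59"]},
    {"postop_wound_cultures": [], "postop_diagnosis_codes": ["V45.89"]},
]
flags = [flag_possible_ssi(c) for c in cases]
print(flags)  # chart review is only needed for flagged cases
```

Tuning which signals are combined is what trades sensitivity against the volume of charts left to review.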
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omet, M.; Michizono, S.; Matsumoto, T.
We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate at a power 7% below saturation. The work presented was performed in international collaboration at the Fermi National Accelerator Laboratory (FNAL), USA, and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in simulation.
Karayiannis, N B
2000-01-01
This paper presents the development of ordered weighted learning vector quantization (LVQ) and clustering algorithms and investigates their properties. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
Derivation of a regional active-optical reflectance sensor corn algorithm
USDA-ARS?s Scientific Manuscript database
Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...
Implementing a self-structuring data learning algorithm
NASA Astrophysics Data System (ADS)
Graham, James; Carson, Daniel; Ternovskiy, Igor
2016-05-01
In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create said implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early-stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use the existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be easily plugged into this algorithm for further gains in efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the problem IMBSES that we are currently handling.
Development of an Algorithm for Satellite Remote Sensing of Sea and Lake Ice
NASA Astrophysics Data System (ADS)
Dorofy, Peter T.
Satellite remote sensing of snow and ice has a long history. The traditional method for many snow and ice detection algorithms has been the use of the Normalized Difference Snow Index (NDSI). This manuscript is composed of two parts. Chapter 1, Development of a Mid-Infrared Sea and Lake Ice Index (MISI) using the GOES Imager, discusses the desirability, development, and implementation of an alternative index for an ice detection algorithm, application of the algorithm to the detection of lake ice, and qualitative validation against other ice mapping products, such as the Ice Mapping System (IMS). Chapter 2, Application of Dynamic Threshold in a Lake Ice Detection Algorithm, continues with a discussion of the development of a method that considers the variable viewing and illumination geometry of observations throughout the day. The method is an alternative to Bidirectional Reflectance Distribution Function (BRDF) models. The performance of the algorithm is evaluated by aggregating classified pixels within geometrical boundaries designated by IMS and obtaining sensitivity and specificity statistical measures.
Battery algorithm verification and development using hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
He, Yongsheng; Liu, Wei; Koch, Brian J.
Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.
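As an illustration of the kind of embedded SOC algorithm an HIL rig would exercise against measured cell data, here is a coulomb-counting baseline estimator. The interface and numbers are illustrative only; production automotive algorithms add voltage- or model-based correction on top of this integration:

```python
def coulomb_count_soc(soc0, currents_a, dt_s, capacity_ah):
    """Propagate state of charge by integrating current over time
    (discharge positive). This is the baseline estimator an HIL
    test would compare against reference cell measurements."""
    soc = soc0
    trace = []
    for i in currents_a:
        soc -= i * dt_s / 3600.0 / capacity_ah   # amp-seconds -> amp-hours
        soc = min(1.0, max(0.0, soc))            # clamp to physical range
        trace.append(soc)
    return trace

# One hour of constant 5 A discharge from a full 10 Ah cell -> 50% SOC.
trace = coulomb_count_soc(1.0, [5.0] * 3600, 1.0, 10.0)
print(round(trace[-1], 3))
```

Pure coulomb counting drifts with current-sensor bias, which is exactly the kind of error cell-level HIL testing is designed to expose.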
A software framework for pipelined arithmetic algorithms in field programmable gate arrays
NASA Astrophysics Data System (ADS)
Kim, J. B.; Won, E.
2018-03-01
Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
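The bit-level firmware simulation described above amounts to clocking pipeline registers cycle by cycle. A toy model of a two-stage pipelined multiply-accumulate (not the authors' framework, and in Python rather than their C++) shows why results lag their inputs by the pipeline depth:

```python
class PipelinedMAC:
    """Cycle-accurate model of a two-stage pipelined multiply-
    accumulate: stage 1 multiplies, stage 2 accumulates. A result
    becomes visible two clocks after its inputs, as in firmware."""
    def __init__(self):
        self.mul_reg = 0   # pipeline register between the two stages
        self.acc = 0

    def clock(self, a, b):
        out = self.acc             # value visible on the output this cycle
        self.acc += self.mul_reg   # stage 2 consumes last cycle's product
        self.mul_reg = a * b       # stage 1 registers a new product
        return out

mac = PipelinedMAC()
outputs = [mac.clock(a, b) for a, b in [(1, 2), (3, 4), (0, 0), (0, 0)]]
print(outputs)
```

Keeping one software model like this as the reference for both firmware generation and bit-level simulation is the consistency problem such frameworks set out to solve.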
NOSS Altimeter Detailed Algorithm specifications
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Mcmillan, J. D.
1982-01-01
The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowaleskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
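The exact-propagator idea, specialized to the scalar convection equation u_t + a u_x = 0, reduces to propagation along characteristics: u(x, t+dt) = u(x - a dt, t). A sketch for the special case where the CFL number a dt/dx is an integer, so a single step is a pure grid shift and the update is exact in space and time (illustrative only, not the paper's general multidimensional construction):

```python
def advect_exact(u, a, dx, dt):
    """Exact propagator for u_t + a*u_x = 0 on a periodic grid when
    a*dt/dx is an integer: the solution is a pure shift along
    characteristics, so the single-step update has no error."""
    shift = a * dt / dx
    assert abs(shift - round(shift)) < 1e-12, "exactness needs integer CFL"
    s = int(round(shift)) % len(u)
    return u[-s:] + u[:-s] if s else u[:]   # circular shift to the right

u0 = [0.0] * 8
u0[2] = 1.0            # a unit spike at grid index 2
u = u0
for _ in range(3):     # three steps with CFL = a*dt/dx = 1
    u = advect_exact(u, a=1.0, dx=0.1, dt=0.1)
print(u.index(1.0))
```

For non-integer CFL the paper's approach replaces the pure shift with a local polynomial reconstruction evaluated at the foot of the characteristic, which is where the high-order accuracy comes from.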
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study on image reconstruction on chords of two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
Development of a Novel Locomotion Algorithm for Snake Robot
NASA Astrophysics Data System (ADS)
Khan, Raisuddin; Masum Billah, Md; Watanabe, Mitsuru; Shafie, A. A.
2013-12-01
A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine motion is one of the best-known gaits for snake robots in disaster-recovery missions, where narrow spaces must be negotiated. Other gaits, such as concertina or rectilinear locomotion, may be suitable for narrow spaces but are highly inefficient if used unchanged in open spaces, where the reduced friction makes movement difficult for the snake. The proposed locomotion algorithm is based on a modification of a multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using the developed algorithm, the snake robot is able to navigate narrow spaces, overcoming the limitations of the other gaits in narrow-space navigation.
Overby, Casey Lynnette; Pathak, Jyotishman; Gottesman, Omri; Haerian, Krystl; Perotte, Adler; Murphy, Sean; Bruce, Kevin; Johnson, Stephanie; Talwalkar, Jayant; Shen, Yufeng; Ellis, Steve; Kullo, Iftikhar; Chute, Christopher; Friedman, Carol; Bottinger, Erwin; Hripcsak, George; Weng, Chunhua
2013-01-01
Objective To describe a collaborative approach for developing an electronic health record (EHR) phenotyping algorithm for drug-induced liver injury (DILI). Methods We analyzed types and causes of differences in DILI case definitions provided by two institutions—Columbia University and Mayo Clinic; harmonized two EHR phenotyping algorithms; and assessed the performance, measured by sensitivity, specificity, positive predictive value, and negative predictive value, of the resulting algorithm at three institutions except that sensitivity was measured only at Columbia University. Results Although these sites had the same case definition, their phenotyping methods differed by selection of liver injury diagnoses, inclusion of drugs cited in DILI cases, laboratory tests assessed, laboratory thresholds for liver injury, exclusion criteria, and approaches to validating phenotypes. We reached consensus on a DILI phenotyping algorithm and implemented it at three institutions. The algorithm was adapted locally to account for differences in populations and data access. Implementations collectively yielded 117 algorithm-selected cases and 23 confirmed true positive cases. Discussion Phenotyping for rare conditions benefits significantly from pooling data across institutions. Despite the heterogeneity of EHRs and varied algorithm implementations, we demonstrated the portability of this algorithm across three institutions. The performance of this algorithm for identifying DILI was comparable with other computerized approaches to identify adverse drug events. Conclusions Phenotyping algorithms developed for rare and complex conditions are likely to require adaptive implementation at multiple institutions. Better approaches are also needed to share algorithms. Early agreement on goals, data sources, and validation methods may improve the portability of the algorithms. PMID:23837993
NASA Astrophysics Data System (ADS)
Strippoli, L. S.; Gonzalez-Arjona, D. G.
2018-04-01
GMV has worked extensively on developing, validating, and verifying, up to TRL-6, advanced GNC and image processing (IP) algorithms for Mars Sample Return rendezvous, under several ESA contracts on the development of advanced algorithms for the VBN sensor.
Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden
2014-05-01
Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). The outcome was first onset of major depression since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on DSM-IV criteria. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
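The development-and-validation pattern in this abstract, fitting a logistic regression on a development sample and then checking discrimination with the C statistic, can be sketched on synthetic data. The data, single feature, and learning rate below are illustrative; the study used 17 survey-derived risk factors and weighted survey data:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression; w[0] is the
    intercept, w[1:] the coefficients."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - yi                      # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def c_statistic(scores, labels):
    """C statistic (AUC): the probability that a randomly chosen
    case scores higher than a randomly chosen control."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: one risk factor shifted between cases and controls.
rng = random.Random(1)
labels = [i % 2 for i in range(100)]
X = [[rng.gauss(1.0 if l else -1.0, 1.0)] for l in labels]
w = fit_logistic(X, labels)
scores = [w[0] + w[1] * x[0] for x in X]    # monotone in predicted risk
print(round(c_statistic(scores, labels), 2))
```

In the study's terms, a C statistic around 0.75 on a held-back region, as reported, indicates good discrimination; calibration would additionally be checked with a Hosmer-Lemeshow-type test.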
Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko
2017-07-01
Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
Li, Ye; Whelan, Michael; Hobbs, Leigh; Fan, Wen Qi; Fung, Cecilia; Wong, Kenny; Marchand-Austin, Alex; Badiani, Tina; Johnson, Ian
2016-06-27
In 2014/2015, Public Health Ontario developed disease-specific, cumulative sum (CUSUM)-based statistical algorithms for detecting aberrant increases in reportable infectious disease incidence in Ontario. The objective of this study was to determine whether the prospective application of these CUSUM algorithms, based on historical patterns, had improved specificity and sensitivity compared to the currently used Early Aberration Reporting System (EARS) algorithm, developed by the US Centers for Disease Control and Prevention. A total of seven algorithms were developed for the following diseases: cyclosporiasis, giardiasis, influenza (one each for type A and type B), mumps, pertussis, and invasive pneumococcal disease. Historical data were used as the baseline to assess known outbreaks. Regression models were used to model seasonality, and CUSUM was applied to the difference between observed and expected counts. An interactive web application was developed allowing program staff to directly interact with data and tune the parameters of the CUSUM algorithms using their expertise on the epidemiology of each disease. Using these parameters, a CUSUM detection system was applied prospectively and the results were compared to the outputs generated by EARS. The outcomes were the detection of outbreaks, detection of the start of a known seasonal increase, and prediction of the peak in activity. The CUSUM algorithms detected provincial outbreaks earlier than the EARS algorithm, identified the start of the influenza season in advance of traditional methods, and had fewer false positive alerts. Additionally, having staff involved in the creation of the algorithms improved their understanding of the algorithms and improved use in practice. Using interactive web-based technology to tune CUSUM improved the sensitivity and specificity of detection algorithms.
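The core of the approach, applying CUSUM to the difference between observed and expected counts, can be sketched as follows. The slack k and decision limit h are the tunable parameters program staff would adjust per disease, and the flat baseline here is a stand-in for the regression model's seasonal expectation:

```python
def cusum_alerts(observed, expected, k=0.5, h=4.0):
    """One-sided CUSUM on observed-minus-expected counts: the
    statistic accumulates excesses beyond a slack k and alarms
    when it crosses the decision limit h (both expressed here in
    raw count units, for simplicity)."""
    s, alerts = 0.0, []
    for t, (o, e) in enumerate(zip(observed, expected)):
        s = max(0.0, s + (o - e) - k)   # only sustained excesses accumulate
        if s > h:
            alerts.append(t)
            s = 0.0                      # reset after signalling
    return alerts

expected = [10.0] * 12                   # stand-in for the seasonal baseline
observed = [10, 11, 9, 10, 13, 14, 15, 10, 9, 10, 11, 10]
print(cusum_alerts(observed, expected))
```

Because the statistic accumulates over time, CUSUM can alarm on a sustained modest excess that a single-day threshold (the EARS style of check) would miss, which is consistent with the earlier detection reported above.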
PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability.
Kirby, Jacqueline C; Speltz, Peter; Rasmussen, Luke V; Basford, Melissa; Gottesman, Omri; Peissig, Peggy L; Pacheco, Jennifer A; Tromp, Gerard; Pathak, Jyotishman; Carrell, David S; Ellis, Stephen B; Lingren, Todd; Thompson, Will K; Savova, Guergana; Haines, Jonathan; Roden, Dan M; Harris, Paul A; Denny, Joshua C
2016-11-01
Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data.
PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability
Kirby, Jacqueline C; Speltz, Peter; Rasmussen, Luke V; Basford, Melissa; Gottesman, Omri; Peissig, Peggy L; Pacheco, Jennifer A; Tromp, Gerard; Pathak, Jyotishman; Carrell, David S; Ellis, Stephen B; Lingren, Todd; Thompson, Will K; Savova, Guergana; Haines, Jonathan; Roden, Dan M; Harris, Paul A
2016-01-01
Objective Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. Materials and Methods We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. Results As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). Discussion These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. Conclusion By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data. PMID:27026615
NASA Technical Reports Server (NTRS)
Strong, James P.
1987-01-01
A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.
Flexible methods for segmentation evaluation: Results from CT-based luggage screening
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2017-01-01
BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
ERIC Educational Resources Information Center
Leite, Walter L.; Huang, I-Chan; Marcoulides, George A.
2008-01-01
This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and…
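An ACO item-selection loop of the kind used for short-form development can be sketched as follows. Each ant samples k items with probability proportional to pheromone times a heuristic item value, and the best selection found so far reinforces its items' pheromone. The item "values" below are a made-up stand-in for the model-fit criterion the article optimizes:

```python
import random

def aco_select(values, k, n_ants=20, iters=40, rho=0.3, seed=0):
    """Ant colony optimization for choosing a k-item short form.
    'values' are per-item heuristic scores (illustrative only)."""
    rng = random.Random(seed)
    n = len(values)
    tau = [1.0] * n                       # pheromone per item
    best, best_score = None, float("-inf")
    for _ in range(iters):
        for _ant in range(n_ants):
            weights = [tau[i] * values[i] for i in range(n)]
            chosen = set()
            while len(chosen) < k:        # roulette-wheel pick without replacement
                total = sum(w for i, w in enumerate(weights) if i not in chosen)
                r = rng.uniform(0, total)
                acc = 0.0
                for i, w in enumerate(weights):
                    if i in chosen:
                        continue
                    acc += w
                    if acc >= r:
                        chosen.add(i)
                        break
            score = sum(values[i] for i in chosen)
            if score > best_score:
                best, best_score = chosen, score
        tau = [(1 - rho) * t for t in tau]     # evaporation
        for i in best:
            tau[i] += best_score / k           # reinforce the best selection
    return sorted(best)

# Items 4-7 carry the most information; the colony should find them.
selected = aco_select([0.1, 0.2, 0.1, 0.3, 0.9, 0.8, 0.95, 0.85], k=4)
print(selected)
```

In the actual application the score of a candidate short form would come from refitting the measurement model on the selected items rather than from fixed per-item values.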
Development of a Dependency Theory Toolbox for Database Design.
1987-12-01
Much of the theory needed to design and study relational databases exists in the form of published algorithms and theorems. However, hand simulating these algorithms can be a tedious and error prone chore. Therefore, a toolbox of algorithms and theorems for database design was developed.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to compute tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.
Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deveci, Mehmet; Trott, Christian Robert; Rajamanickam, Sivasankaran
Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
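The row-wise (Gustavson) formulation with a hash-map accumulator, one of the accumulator data structures such comparisons cover, can be sketched in a few lines. This is an illustrative serial sketch over CSR arrays, not the kkSpGEMM code, and the function name is hypothetical:

```python
def spgemm(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val, n_rows):
    """Compute C = A * B for CSR matrices; returns CSR arrays of C.

    For each row i of A, partial products with the matching rows of B
    are merged in a hash-map accumulator keyed by column index.
    """
    c_ptr, c_idx, c_val = [0], [], []
    for i in range(n_rows):
        acc = {}  # hash-map accumulator: column -> partial sum
        for k in range(a_ptr[i], a_ptr[i + 1]):
            col_a, v_a = a_idx[k], a_val[k]
            for j in range(b_ptr[col_a], b_ptr[col_a + 1]):
                acc[b_idx[j]] = acc.get(b_idx[j], 0.0) + v_a * b_val[j]
        for col in sorted(acc):  # emit row i of C in column order
            c_idx.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val
```

A dense-array accumulator would replace the dictionary with a length-n scratch vector; the trade-off between the two is exactly the kind of data-structure comparison the abstract describes.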
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
DOT National Transportation Integrated Search
1994-12-01
This report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness. These algorithms are based on driving performance measures that can potentially be ...
A Locomotion Control Algorithm for Robotic Linkage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dohner, Jeffrey L.
This dissertation describes the development of a control algorithm that transitions a robotic linkage system between stabilized states producing responsive locomotion. The developed algorithm is demonstrated using a simple robotic construction consisting of a few links with actuation and sensing at each joint. Numerical and experimental validation is presented.
FPGA-based Klystron linearization implementations in scope of ILC
Omet, M.; Michizono, S.; Matsumoto, T.; ...
2015-01-23
We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate at 7% below their saturation power. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.
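The predistortion idea can be illustrated with a toy model. The sketch below assumes a simple tanh saturation curve for the klystron, which is an assumption for illustration and not the papers' model, and iteratively solves for the corrected drive level; on an FPGA the correction factor would be generated in fixed point rather than by floating-point iteration.

```python
import math

def predistort(target, sat, n_iter=30):
    """Predistortion sketch: the klystron is modeled as y = sat * tanh(x / sat)
    (assumed saturation curve). Iterate a fixed-point correction so that the
    drive level x produces the desired linear output `target` (target < sat).
    """
    x = target
    for _ in range(n_iter):
        # Add the residual between desired and actual output to the drive.
        x = x + (target - sat * math.tanh(x / sat))
    return x
```

Driving the modeled klystron with the predistorted level recovers the requested output, which is the linearization property the algorithms aim for.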
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. The performance of the MICCG and SICCG algorithms is compared with that of state-of-the-art approaches.
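The underlying idea of avoiding matrix inversion can be sketched numerically: solve R x = s by plain conjugate gradient, then scale to meet the distortionless constraint w^H s = 1, so the autocorrelation matrix R is never inverted explicitly. This is a generic CG illustration, not the MICCG/SICCG algorithms themselves:

```python
import numpy as np

def mvdr_weights_cg(R, s, n_iter=50, tol=1e-10):
    """Approximate the MVDR weights w = R^{-1} s / (s^H R^{-1} s) by solving
    R x = s with conjugate gradient (R Hermitian positive definite), then
    normalizing so that w^H s = 1. No explicit matrix inverse is formed."""
    x = np.zeros_like(s, dtype=complex)
    r = s - R @ x          # residual
    p = r.copy()           # search direction
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs_old / np.vdot(p, Rp).real
        x = x + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x / np.vdot(s, x)  # enforce the distortionless constraint
```

For a 2x2 diagonal R the exact weights are easy to verify by hand, and CG converges in at most two iterations.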
Strategic Control Algorithm Development : Volume 3. Strategic Algorithm Report.
DOT National Transportation Integrated Search
1974-08-01
The strategic algorithm report presents a detailed description of the functional basic strategic control arrival algorithm. This description is independent of a particular computer or language. Contained in this discussion are the geometrical and env...
Finite pure integer programming algorithms employing only hyperspherically deduced cuts
NASA Technical Reports Server (NTRS)
Young, R. D.
1971-01-01
Three algorithms are developed that may be based exclusively on hyperspherically deduced cuts. The algorithms only apply, therefore, to problems structured so that these cuts are valid. The algorithms are shown to be finite.
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
NASA Astrophysics Data System (ADS)
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
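The elementary building block of such proximity algorithms is the proximity operator of the l1 norm, which reduces to componentwise soft thresholding. A minimal illustrative sketch follows (the paper's algorithm composes such operators in fixed-point iterations accelerated by Gauss-Seidel sweeps; the function name is hypothetical):

```python
def prox_l1(v, lam):
    """Proximity operator of lam * ||.||_1: componentwise soft thresholding.
    Each entry is shrunk toward zero by lam; entries of magnitude <= lam vanish.
    """
    return [max(abs(x) - lam, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]
```

Applying `prox_l1([3.0, -1.0, 0.5], 1.0)` shrinks 3.0 to 2.0 and zeroes out the two entries of magnitude at most 1, which is the denoising effect the L1 term produces.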
Adaptively resizing populations: Algorithm, analysis, and first results
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Smuda, Ellen
1993-01-01
Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
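A coarse illustration of the resizing idea: monitor the sampling error of fitness estimates and grow or shrink the population accordingly. The paper's rule operates on schema fitness variance; the sketch below uses the standard error of the mean population fitness as a stand-in, and the function name, growth factors, and thresholds are all hypothetical:

```python
import random

def resize_population(pop, fitness, target_se, min_size=10, max_size=500):
    """Grow the population when sampling error in the mean-fitness estimate
    exceeds a target, shrink it when the error is comfortably below target.
    (A coarse proxy for the schema-variance rule described in the abstract.)"""
    n = len(pop)
    mean = sum(fitness) / n
    var = sum((f - mean) ** 2 for f in fitness) / max(n - 1, 1)
    se = (var / n) ** 0.5  # standard error of the mean fitness
    if se > target_se and n < max_size:
        # High sampling error: grow by 25%, cloning random members.
        pop = pop + [random.choice(pop) for _ in range(n // 4)]
    elif se < 0.5 * target_se and n > min_size:
        # Error well under target: shrink by 25% to save evaluations.
        pop = pop[: max(min_size, (3 * n) // 4)]
    return pop
```

A high-variance fitness sample triggers growth, while a uniform one triggers shrinkage, mirroring the trade-off between sampling error and wasted computation described above.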
Liao, Katherine P.; Ananthakrishnan, Ashwin N.; Kumar, Vishesh; Xia, Zongqi; Cagan, Andrew; Gainer, Vivian S.; Goryachev, Sergey; Chen, Pei; Savova, Guergana K.; Agniel, Denis; Churchill, Susanne; Lee, Jaeyoung; Murphy, Shawn N.; Plenge, Robert M.; Szolovits, Peter; Kohane, Isaac; Shaw, Stanley Y.; Karlson, Elizabeth W.; Cai, Tianxi
2015-01-01
Background Typically, algorithms to classify phenotypes using electronic medical record (EMR) data were developed to perform well in a specific patient population. There is increasing interest in analyses which can allow study of a specific outcome across different diseases. Such a study in the EMR would require an algorithm that can be applied across different patient populations. Our objectives were: (1) to develop an algorithm that would enable the study of coronary artery disease (CAD) across diverse patient populations; (2) to study the impact of adding narrative data extracted using natural language processing (NLP) in the algorithm. Additionally, we demonstrate how to implement CAD algorithm to compare risk across 3 chronic diseases in a preliminary study. Methods and Results We studied 3 established EMR based patient cohorts: diabetes mellitus (DM, n = 65,099), inflammatory bowel disease (IBD, n = 10,974), and rheumatoid arthritis (RA, n = 4,453) from two large academic centers. We developed a CAD algorithm using NLP in addition to structured data (e.g. ICD9 codes) in the RA cohort and validated it in the DM and IBD cohorts. The CAD algorithm using NLP in addition to structured data achieved specificity >95% with a positive predictive value (PPV) 90% in the training (RA) and validation sets (IBD and DM). The addition of NLP data improved the sensitivity for all cohorts, classifying an additional 17% of CAD subjects in IBD and 10% in DM while maintaining PPV of 90%. The algorithm classified 16,488 DM (26.1%), 457 IBD (4.2%), and 245 RA (5.0%) with CAD. In a cross-sectional analysis, CAD risk was 63% lower in RA and 68% lower in IBD compared to DM (p<0.0001) after adjusting for traditional cardiovascular risk factors. Conclusions We developed and validated a CAD algorithm that performed well across diverse patient populations. 
The addition of NLP into the CAD algorithm improved the sensitivity of the algorithm, particularly in cohorts where the prevalence of CAD was low. Preliminary data suggest that CAD risk was significantly lower in RA and IBD compared to DM. PMID:26301417
Application of ant colony Algorithm and particle swarm optimization in architectural design
NASA Astrophysics Data System (ADS)
Song, Ziyi; Wu, Yunfa; Song, Jianhua
2018-02-01
By studying the development of the ant colony algorithm and the particle swarm algorithm, this paper expounds the core ideas of the two algorithms, explores their combination with architectural design, and summarizes the rules for applying intelligent algorithms in architectural design. Drawing on the characteristics of the two algorithms, it derives a research route and a means of realizing intelligent algorithms in architectural design, and establishes algorithm rules to assist the design process. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the use of the ant colony algorithm and the particle swarm algorithm in architectural design, broaden the range of application of intelligent algorithms in the field, and offer a new set of ideas to architects.
NASA Technical Reports Server (NTRS)
Nyangweso, Emmanuel; Bole, Brian
2014-01-01
Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
[GNU Pattern: open source pattern hunter for biological sequences based on SPLASH algorithm].
Xu, Ying; Li, Yi-xue; Kong, Xiang-yin
2005-06-01
To support later research on pattern discovery, a high-performance open source software engine based on the IBM SPLASH algorithm was constructed. GNU Pattern (Gpat) was developed using open source software and efficiently implements the core part of the SPLASH algorithm. The full source code of Gpat is available for other researchers to modify under the GNU license. Gpat is a successful implementation of the SPLASH algorithm and can be used as a basic framework for later research on pattern recognition in biological sequences.
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.
1986-01-01
The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.
Crisis management during anaesthesia: the development of an anaesthetic crisis management manual
Runciman, W; Kluger, M; Morris, R; Paix, A; Watterson, L; Webb, R
2005-01-01
Background: All anaesthetists have to handle life threatening crises with little or no warning. However, some cognitive strategies and work practices that are appropriate for speed and efficiency under normal circumstances may become maladaptive in a crisis. It was judged in a previous study that the use of a structured "core" algorithm (based on the mnemonic COVER ABCD–A SWIFT CHECK) would diagnose and correct the problem in 60% of cases and provide a functional diagnosis in virtually all of the remaining 40%. It was recommended that specific sub-algorithms be developed for managing the problems underlying the remaining 40% of crises and assembled in an easy-to-use manual. Sub-algorithms were therefore developed for these problems so that they could be checked for applicability and validity against the first 4000 anaesthesia incidents reported to the Australian Incident Monitoring Study (AIMS). Methods: The need for 24 specific sub-algorithms was identified. Teams of practising anaesthetists were assembled and sets of incidents relevant to each sub-algorithm were identified from the first 4000 reported to AIMS. Based largely on successful strategies identified in these reports, a set of 24 specific sub-algorithms was developed for trial against the 4000 AIMS reports and assembled into an easy-to-use manual. A process was developed for applying each component of the core algorithm COVER at one of four levels (scan-check-alert/ready-emergency) according to the degree of perceived urgency, and incorporated into the manual. The manual was disseminated at a World Congress and feedback was obtained. Results: Each of the 24 specific crisis management sub-algorithms was tested against the relevant incidents among the first 4000 reported to AIMS and compared with the actual management by the anaesthetist at the time. 
It was judged that, if the core algorithm had been correctly applied, the appropriate sub-algorithm would have been resolved better and/or faster in one in eight of all incidents, and would have been unlikely to have caused harm to any patient. The descriptions of the validation of each of the 24 sub-algorithms constitute the remaining 24 papers in this set. Feedback from five meetings each attended by 60–100 anaesthetists was then collated and is included. Conclusion: The 24 sub-algorithms developed form the basis for developing a rational evidence-based approach to crisis management during anaesthesia. The COVER component has been found to be satisfactory in real life resuscitation situations and the sub-algorithms have been used successfully for several years. It would now be desirable for carefully designed simulator based studies, using naive trainees at the start of their training, to systematically examine the merits and demerits of various aspects of the sub-algorithms. It would seem prudent that these sub-algorithms be regarded, for the moment, as decision aids to support and back up clinicians' natural responses to a crisis when all is not progressing as expected. PMID:15933282
NASA Technical Reports Server (NTRS)
Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen
2015-01-01
The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further insure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. 
As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. 
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without inherent hindrances such as meeting FSW processor scheduling constraints imposed by the target platform (an ARINC 653 partitioned OS), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes.
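The state-machine style of failure detection and response described above can be illustrated with a toy persistence-filtered monitor. The class name, limit, persistence count, and mode names below are hypothetical illustrations, not taken from SLS flight software, and the sketch is in Python rather than the C++ used by FSW:

```python
from enum import Enum, auto

# Illustrative fault-management modes (not actual SLS M&FM states)
class Mode(Enum):
    NOMINAL = auto()
    FAULT_SUSPECTED = auto()
    SAFING = auto()

class PressureMonitor:
    """Declare a fault only after N consecutive out-of-limit samples,
    a simple persistence filter that guards against sensor noise."""
    def __init__(self, limit, persistence=3):
        self.limit, self.persistence = limit, persistence
        self.count, self.mode = 0, Mode.NOMINAL

    def step(self, pressure):
        if self.mode is Mode.SAFING:
            return self.mode                  # latched safing response
        if pressure > self.limit:
            self.count += 1
            self.mode = (Mode.SAFING if self.count >= self.persistence
                         else Mode.FAULT_SUSPECTED)
        else:
            self.count, self.mode = 0, Mode.NOMINAL
        return self.mode

mon = PressureMonitor(limit=100.0)
trace = [mon.step(p) for p in [90, 105, 106, 90, 105, 106, 107, 90]]
```

A transient two-sample excursion resets to nominal, while three consecutive out-of-limit samples latch the safing mode, mirroring the detection-versus-response separation the testbed exercises.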
A Robustly Stabilizing Model Predictive Control Algorithm
NASA Technical Reports Server (NTRS)
Acikmese, A. Behcet; Carson, John M., III
2007-01-01
A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
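The receding-horizon idea behind MPC, re-solving a finite-horizon optimal-control problem at each step and applying only the first control, can be sketched on a simple linear system. The double-integrator dynamics, cost weights, and horizon below are illustrative assumptions, not the paper's uncertain nonlinear formulation or its resolvability guarantee:

```python
import numpy as np
from scipy.optimize import minimize

# Discrete-time double integrator: x = [position, velocity], dt = 0.1 s
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])

def rollout(x0, u_seq):
    """Simulate the horizon and accumulate a quadratic cost."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += x @ x + 0.1 * u * u       # state + control penalty
        x = A @ x + B.flatten() * u
    return cost + 10.0 * (x @ x)          # terminal penalty

def mpc_step(x0, horizon=10):
    """Solve the finite-horizon problem; return only the first control."""
    res = minimize(lambda u: rollout(x0, u), np.zeros(horizon))
    return res.x[0]

# Receding-horizon loop: re-solve at every step, apply the first input
x = np.array([1.0, 0.0])
for _ in range(30):
    u = np.clip(mpc_step(x), -1.0, 1.0)   # simple input constraint
    x = A @ x + B.flatten() * u
```

The closed loop drives the state toward the origin while only ever committing to the first control of each horizon, which is the mechanism whose resolvability the paper analyzes.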
Computations involving differential operators and their actions on functions
NASA Technical Reports Server (NTRS)
Crouch, Peter E.; Grossman, Robert; Larson, Richard
1991-01-01
The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups, and data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
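The conversion of a boundary value problem into algebraic equations can be seen in miniature with single shooting on a smooth linear BVP; the paper's jump-discontinuous, multiple-shooting setting is more general. The equation and boundary values below are an illustrative example, not one of the paper's problems:

```python
import numpy as np
from scipy.integrate import solve_ivp

# BVP: x'' = -x,  x(0) = 0,  x(pi/2) = 1  (exact solution: x = sin t)
def propagate(slope):
    """Integrate the ODE from t=0 with initial state [0, slope]."""
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0, np.pi / 2),
                    [0.0, slope], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1]              # x at the right endpoint

# For a linear problem the boundary residual is linear in the unknown
# initial slope, so two trial integrations determine the algebraic
# equation exactly: r0 + (r1 - r0) * s = 1.
r0, r1 = propagate(0.0), propagate(1.0)
slope = (1.0 - r0) / (r1 - r0)
```

For linear problems this algebraic system yields the exact solution, which is the property the abstract notes; nonlinear or discontinuous problems require iterating on segment-wise versions of the same residual equations.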
Algorithms for monitoring warfarin use: Results from Delphi Method.
Kano, Eunice Kazue; Borges, Jessica Bassani; Scomparini, Erika Burim; Curi, Ana Paula; Ribeiro, Eliane
2017-10-01
Warfarin stands as the most prescribed oral anticoagulant. New oral anticoagulants have been approved recently; however, their use is limited and techniques for reversing their anticoagulation effect are little known. Thus, our study's purpose was to develop algorithms for the therapeutic monitoring of patients taking warfarin based on the opinion of physicians who prescribe this medicine in their clinical practice. The development of the algorithms was performed in two stages, namely: (i) literature review and (ii) algorithm evaluation by physicians using a Delphi Method. Based on the articles analyzed, two algorithms were developed: "Recommendations for the use of warfarin in anticoagulation therapy" and "Recommendations for the use of warfarin in anticoagulation therapy: dose adjustment and bleeding control." Later, these algorithms were analyzed by 19 medical doctors who responded to the invitation and agreed to participate in the study. Of these, 16 responded to the first round, 11 to the second and eight to the third round. A consensus of 70% or higher was reached for most issues and of at least 50% for six questions. We were able to develop algorithms to monitor the use of warfarin by physicians using a Delphi Method. The proposed method is inexpensive, involves the participation of specialists, and has proved adequate for the intended purpose. Further studies are needed to validate these algorithms, enabling them to be used in clinical practice.
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
Parallel conjugate gradient algorithms for manipulator dynamic simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheld, Robert E.
1989-01-01
Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. Conjugate gradient algorithms are presented that provide greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
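The diagonal (Jacobi) preconditioning variant can be sketched serially as follows; the tridiagonal test matrix stands in for a mass matrix and is an illustrative assumption, and the serial loop omits the paper's parallel O(log2 n) evaluation:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner,
    a serial sketch of the paper's first PCG variant."""
    n = len(b)
    M_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    x = np.zeros(n)
    r = b.copy()
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Small SPD test system with a mass-matrix-like tridiagonal structure
A = (np.diag(np.full(6, 4.0)) + np.diag(np.full(5, 1.0), 1)
     + np.diag(np.full(5, 1.0), -1))
b = np.ones(6)
x = pcg(A, b)
```

Each iteration is dominated by one matrix-vector product and a cheap diagonal solve, which is what makes the per-iteration work parallelizable in the paper's setting.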
Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.
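The random-key encoding mentioned above can be sketched in a few lines; the six-job instance and the swap move are illustrative assumptions, not the paper's GAspLA operators:

```python
import numpy as np

# Random-key encoding: each gene is a real number in [0, 1); sorting the
# keys yields a job permutation, so any real-valued crossover or mutation
# always decodes to a feasible schedule.
rng = np.random.default_rng(0)
chromosome = rng.random(6)                 # one key per job (illustrative)
permutation = np.argsort(chromosome)       # decoded job order

# A local-search move on the decoded permutation (here, swapping the two
# first-ranked jobs) is written back into the chromosome by swapping the
# corresponding keys, keeping genotype and phenotype consistent with a
# minimal relocation of the genes.
first, second = permutation[0], permutation[1]
chromosome[[first, second]] = chromosome[[second, first]]
```

Re-decoding the chromosome after the key swap reproduces the locally improved permutation, which is the consistency property the hybrid scheme relies on.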
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed for the solution of computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new algorithms of low-level image processing that, to a great extent, determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of visual information presented in the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into the foveal active vision model MARR. It is supposed that the proposed algorithms may significantly improve model performance in real-world image processing during memorizing, search, and recognition.
Developing an eco-routing application.
DOT National Transportation Integrated Search
2014-01-01
The study develops eco-routing algorithms and investigates and quantifies the system-wide impacts of implementing an eco-routing system. Two eco-routing algorithms are developed: one based on vehicle sub-populations (ECO-Subpopulation Feedback Assign...
NASA Astrophysics Data System (ADS)
Morita, Yoshifumi; Hirose, Akinori; Uno, Takashi; Uchida, Masaki; Ukai, Hiroyuki; Matsui, Nobuyuki
2007-12-01
In this paper we propose a new rehabilitation training support system for upper limbs. The proposed system enables therapists to quantitatively evaluate the therapeutic effect on upper limb motor function during training, to easily change the resistance load of the training, and to easily develop a new training program suitable for the subjects. For this purpose we develop control algorithms for training programs on the 3D force display robot. The 3D force display robot has a parallel link mechanism with three motors. A control algorithm simulating sanding training is developed for the 3D force display robot. Moreover, a teaching/training function algorithm is developed, which enables therapists to easily create a training trajectory suitable for the subject's condition. The effectiveness of the developed control algorithms is verified by experiments.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
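The selection, crossover, and mutation loop described above can be sketched on the standard OneMax toy problem; the population size, rates, and fitness function are illustrative assumptions, not part of the NASA tool:

```python
import random

# Minimal genetic algorithm on OneMax: evolve a bit string toward all
# ones using tournament selection, one-point crossover, and mutation.
random.seed(1)
N_BITS, POP, GENS = 20, 40, 60

def fitness(ind):
    return sum(ind)                            # number of 1 bits

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    # Tournament selection: the fitter of two random individuals survives
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(POP)]
    nxt = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, N_BITS)      # one-point crossover
        for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
            # Bit-flip mutation with a small per-gene probability
            child = [bit ^ (random.random() < 0.01) for bit in child]
            nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
```

The population converges toward the all-ones string, illustrating how the "survival of the fittest" pressure and recombination drive the search.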
A Science Data System Approach for the SMAP Mission
NASA Technical Reports Server (NTRS)
Woollard, David; Kwoun, Oh-ig; Bicknell, Tom; West, Richard; Leung, Kon
2009-01-01
Though Science Data System (SDS) development has not traditionally been part of the mission concept phase, lessons learned and study of past Earth science missions indicate that SDS functionality can greatly benefit algorithm developers in all mission phases. We have proposed a SDS approach for the SMAP Mission that incorporates early support for an algorithm testbed, allowing scientists to develop codes and seamlessly integrate them into the operational SDS. This approach will greatly reduce both the costs and risks involved in algorithm transitioning and SDS development.
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul
2012-01-01
This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…
USDA-ARS?s Scientific Manuscript database
In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...
Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie
2009-01-01
Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.
Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu
2017-05-23
This paper presents an investigation of nature-inspired intelligent computing and its application to visual information processing systems for viticulture. The paper has three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of nature-inspired computing algorithms based on artificial immune system (AIS) techniques for grape berry detection; and (3) the application of the developed algorithms to real-world grape berry images captured in natural conditions from vineyards in Australia. The AIS algorithms in (2) were developed based on a nature-inspired clonal selection algorithm (CSA) which is able to detect the arcs in the berry images with precision, based on a fitness model. The detected arcs are then extended to perform multiple-arc and ring detection for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms like the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71, compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD) respectively.
Signal and image processing algorithm performance in a virtual and elastic computing environment
NASA Astrophysics Data System (ADS)
Bennett, Kelly W.; Robertson, James
2013-05-01
The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development, and the resulting data volume, with its associated high-performance computing needs, strains existing computing infrastructures. Purchasing computer power as a commodity from a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers best security practices that exist within cloud services, such as AWS.
NASA Astrophysics Data System (ADS)
Knypiński, Łukasz
2017-12-01
In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. As the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with a self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor was developed in an Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculations are presented and compared with results for the particle swarm optimization algorithm.
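The bat algorithm's core loop, frequency-tuned velocity updates toward the best solution plus occasional local random walks, can be sketched on a simple test function; the sphere objective, parameter values, and acceptance rule below are illustrative stand-ins for the motor objective, not the paper's solver:

```python
import numpy as np

# Compact bat-algorithm sketch minimizing a 2-D sphere function.
rng = np.random.default_rng(2)
n, dim, iters = 20, 2, 200
f = lambda x: float(np.sum(x**2))          # objective to minimize
pos = rng.uniform(-5, 5, (n, dim))         # bat positions
vel = np.zeros((n, dim))
best = min(pos, key=f).copy()

for _ in range(iters):
    for i in range(n):
        freq = rng.uniform(0, 2)           # random pulse frequency
        vel[i] += (pos[i] - best) * freq   # pull toward the global best
        cand = pos[i] + vel[i]
        if rng.random() < 0.5:             # local random walk near best
            cand = best + 0.01 * rng.standard_normal(dim)
        if f(cand) < f(pos[i]):            # greedy acceptance
            pos[i] = cand
        if f(pos[i]) < f(best):
            best = pos[i].copy()
```

In the paper's setting, f would be replaced by a call into the finite-element motor model, which is why the per-evaluation cost, rather than the optimizer itself, dominates the run time.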
Adaptive Control Strategies for Flexible Robotic Arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1996-01-01
The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have wide application to the general control problem of flexible space structures. The following control strategies have been developed and evaluated: a neural self-tuning control algorithm, a neural-network-based fuzzy logic control algorithm, and an adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, the hardware implementation of a computer control system that controls the tip position of a flexible arm clamped on a rigid hub mounted directly on the vertical shaft of a dc motor has been developed. An adaptive pole assignment algorithm has been applied to suppress vibrations of the described physical model of the flexible robotic arm and has been successfully tested using this testbed.
A Computational Algorithm for Functional Clustering of Proteome Dynamics During Development
Wang, Yaqun; Wang, Ningtao; Hao, Han; Guo, Yunqian; Zhen, Yan; Shi, Jisen; Wu, Rongling
2014-01-01
Phenotypic traits, such as seed development, are a consequence of complex biochemical interactions among genes, proteins and metabolites, but the underlying mechanisms that operate in a coordinated and sequential manner remain elusive. Here, we address this issue by developing a computational algorithm to monitor proteome changes during the course of trait development. The algorithm is built within the mixture-model framework, in which each mixture component is modeled by a specific group of proteins that display a similar temporal pattern of expression in trait development. A nonparametric approach based on Legendre orthogonal polynomials was used to fit dynamic changes of protein expression, increasing the power and flexibility of protein clustering. By analyzing a dataset of proteomic dynamics during early embryogenesis of the Chinese fir, the algorithm has successfully identified several distinct types of proteins that coordinate with each other to determine seed development in this forest tree, which is commercially and environmentally important to China. The algorithm will find immediate applications in the characterization of mechanistic underpinnings of any other biological processes in which protein abundance plays a key role. PMID:24955031
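The Legendre-polynomial curve fit underlying the clustering can be sketched for a single trajectory; the time grid and the synthetic expression curve below are illustrative assumptions, not data from the Chinese fir study:

```python
import numpy as np

# Fit one temporal expression profile with a low-order Legendre
# orthogonal polynomial basis, the nonparametric curve model used
# to describe each mixture component's mean trajectory.
t = np.linspace(0, 1, 12)                  # 12 developmental time points
y = 1.0 + 2.0 * t - 1.5 * t**2             # synthetic expression curve

# Map time onto [-1, 1], the natural domain of Legendre polynomials
u = 2 * t - 1
coeffs = np.polynomial.legendre.legfit(u, y, deg=3)
y_hat = np.polynomial.legendre.legval(u, coeffs)
```

Because the basis is orthogonal, low-order coefficients summarize each protein's temporal shape compactly, and proteins with similar coefficient vectors fall into the same mixture component.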
Development and Application of a Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane
2007-01-01
This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai
2009-01-01
Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.
Update on Development of Mesh Generation Algorithms in MeshKit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay
2015-09-30
MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. The RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
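The full-replication technique named above can be sketched with a frequency-counting reduction; the Counter-based reduction object and the four-way split are illustrative assumptions, not the paper's runtime interface:

```python
import threading
from collections import Counter

# "Full replication": each thread updates a private reduction object
# (here a Counter of item frequencies) and the replicas are merged at
# the end, avoiding any locking on a shared structure. The locking
# variants would instead guard one shared Counter with mutexes.
data = [i % 5 for i in range(10000)]
n_threads = 4
chunks = [data[i::n_threads] for i in range(n_threads)]
replicas = [Counter() for _ in range(n_threads)]

def count(chunk, local):
    for item in chunk:
        local[item] += 1          # no lock needed: the object is private

threads = [threading.Thread(target=count, args=(c, r))
           for c, r in zip(chunks, replicas)]
for th in threads:
    th.start()
for th in threads:
    th.join()

totals = sum(replicas, Counter())  # merge step
```

The trade-off the paper studies is visible here: replication trades extra memory (one reduction object per thread) for the elimination of synchronization on every update.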
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2018-01-01
The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.
JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)
NASA Astrophysics Data System (ADS)
Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.
2014-12-01
JPSS is a next-generation satellite system planned for launch in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set, the NOAA Global Multisensor Automated Snow/Ice data (GMASI), together with the associated ADL code modifications, is described. The preliminary impact of GMASI on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas and time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, not merely the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
Robust crop and weed segmentation under uncontrolled outdoor illumination.
Jeon, Hong Y; Tian, Lei F; Zhu, Heping
2011-01-01
An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection process comprised normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation, and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A machine-vision-equipped field robot captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest as weeds. However, the ANN identification rate for crop plants was improved up to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA).
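The normalized excessive-green step that opens the pipeline above can be sketched as follows. This is a generic illustration of the index, not the paper's code; the function names and the fixed 0.1 threshold are hypothetical (the paper estimates the threshold statistically per image):

```python
def excess_green(r, g, b):
    """Normalized excess green index ExG = 2g - r - b on chromaticity coordinates."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def is_plant(pixel, threshold=0.1):
    # Fixed threshold for illustration only; a per-image statistical
    # threshold estimate (as in the paper) is more robust outdoors.
    r, g, b = pixel
    return excess_green(r, g, b) > threshold
```

Vegetation pixels score high because their green chromaticity dominates, while soil pixels hover near zero regardless of absolute brightness, which is what makes the index useful under varying illumination.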
The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.
2001-01-01
An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).
NASA Astrophysics Data System (ADS)
Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.
2018-04-01
A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
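For context, the conventional first-order estimate that such an analytic algorithm improves upon is the difference-over-sum formula. The sketch below shows only this standard approximation, not the paper's exact algorithm; the ideal narrow-electrode sensitivity factor (the pipe radius) is an assumption of ideal geometry:

```python
def bpm_position(right, top, left, bottom, radius=1.0):
    """First-order beam position estimate from four BPM electrode signals.

    Conventional difference-over-sum approximation, valid only for small
    displacements; for ideal infinitesimal-width electrodes the sensitivity
    factor is the pipe radius. The paper's analytically exact algorithm
    removes the small-displacement restriction.
    """
    total = right + top + left + bottom
    x = radius * (right - left) / total
    y = radius * (top - bottom) / total
    return x, y
```

For a beam offset of 0.1 radii along x, an ideal BPM yields signals proportional to 1.2, 1.0, 0.8, 1.0 (right, top, left, bottom), and the formula recovers the offset; at large offsets the linear estimate degrades, which is the deviation the exact algorithm corrects.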
NASA Astrophysics Data System (ADS)
Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro
1995-02-01
We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
Development and Testing of Data Mining Algorithms for Earth Observation
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
The new algorithms developed under this project included a principled procedure for classification of objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables (called the Markov Blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications of TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.
Rocketdyne Safety Algorithm: Space Shuttle Main Engine Fault Detection
NASA Technical Reports Server (NTRS)
Norman, Arnold M., Jr.
1994-01-01
The Rocketdyne Safety Algorithm (RSA) has been developed to the point of use on the TTBE at MSFC on Task 4 of LeRC contract NAS3-25884. This document contains a description of the work performed, the results of the nominal test of the major anomaly test cases and a table of the resulting cutoff times, a plot of the RSA value vs. time for each anomaly case, a logic flow description of the algorithm, the algorithm code, and a development plan for future efforts.
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved novel approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
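A minimal Nelder-Mead simplex minimizer, the local-search component alternated with the imperialist competitive procedure, can be sketched as follows. This is a generic textbook variant with standard reflection/expansion/contraction/shrink coefficients, not the paper's implementation:

```python
def nelder_mead(f, simplex, iters=400, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Nelder-Mead simplex minimization of f over n-dimensional points."""
    pts = [list(p) for p in simplex]
    for _ in range(iters):
        pts.sort(key=f)                      # best vertex first
        best, worst = pts[0], pts[-1]
        centroid = [sum(p[i] for p in pts[:-1]) / (len(pts) - 1)
                    for i in range(len(best))]
        refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):                # try expanding past the reflection
            exp = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
            pts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(pts[-2]):           # accept simple reflection
            pts[-1] = refl
        else:                                # contract toward the worst vertex
            contr = [c + rho * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                pts[-1] = contr
            else:                            # shrink the whole simplex toward best
                pts = [best] + [[b + sigma * (p - b) for b, p in zip(best, q)]
                                for q in pts[1:]]
    pts.sort(key=f)
    return pts[0]
```

In the hybrid scheme the abstract describes, a step like this would refine the best solutions found by the population-based imperialist competitive phase before control alternates back.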
A New Approximate Chimera Donor Cell Search Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Nixon, David (Technical Monitor)
1998-01-01
The objectives of this study were to develop a chimera-based full potential methodology compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver and to develop a fast donor cell search algorithm compatible with the chimera full potential approach. Results of this work include a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple, producing donor cells at rates as high as 60,000 per second.
NASA Technical Reports Server (NTRS)
Dinar, N.
1978-01-01
Several aspects of multigrid methods are briefly described. The main subjects include the development of very efficient multigrid algorithms for systems of elliptic equations (Cauchy-Riemann, Stokes, Navier-Stokes), as well as the development of control and prediction tools (based on local mode Fourier analysis), used to analyze, check and improve these algorithms. Preliminary research on multigrid algorithms for time dependent parabolic equations is also described. Improvements in existing multigrid processes and algorithms for elliptic equations were studied.
An algorithm for calculi segmentation on ureteroscopic images.
Rosa, Benoît; Mozer, Pierre; Szewczyk, Jérôme
2011-03-01
The purpose of the study is to develop an algorithm for the segmentation of renal calculi on ureteroscopic images. Renal calculi are a common source of urological obstruction, and laser lithotripsy during ureteroscopy is a possible therapy. A laser-based system to sweep the calculus surface and vaporize it was developed to automate a very tedious manual task. The distal tip of the ureteroscope is directed using image guidance, and this operation is not possible without an efficient segmentation of renal calculi on the ureteroscopic images. We proposed and developed a region growing algorithm to segment renal calculi on ureteroscopic images. Using real video images to compute ground truth and compare our segmentation with a reference segmentation, we computed statistics on different image metrics, such as Precision, Recall, and the Yasnoff Measure. The algorithm and its parameters were established for the most likely clinical scenarios. The segmentation results are encouraging: the developed algorithm was able to correctly detect more than 90% of the surface of the calculi, according to an expert observer. Implementation of an algorithm for the segmentation of calculi on ureteroscopic images is feasible. The next step is the integration of our algorithm in the command scheme of a motorized system to build a complete operating prototype.
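A generic region-growing segmentation over a 4-connected pixel grid can be sketched as below. The homogeneity criterion (absolute intensity difference from the seed) and the tolerance parameter are illustrative assumptions, not the paper's exact criterion:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    stays within `tol` of the seed intensity; returns a boolean mask."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    mask = [[False] * cols for _ in range(rows)]
    mask[sr][sc] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(image[nr][nc] - seed_val) <= tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask
```

In an image-guided setting like the one described, the seed could come from the targeting cursor, and the grown mask would delimit the calculus surface for the laser sweep.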
Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.
Wang, Tao; Xie, Wenfang; Zhang, Youmin
2012-05-01
In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known. Different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first, developed to accommodate uncertainties and faults without exact bounds information. The stability of the overall control systems is proved using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
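The core idea of sliding mode control, a switching law whose gain dominates a bounded uncertainty, can be illustrated on a scalar system. This toy simulation assumes a known disturbance bound (the first algorithm's setting); the paper's controllers handle a full nonlinear aircraft model with actuator faults:

```python
import math

def simulate_smc(x0=1.0, k=2.0, steps=5000, dt=0.001):
    """Sliding-mode regulation of dx/dt = u + d(t) toward x = 0.

    d(t) is a bounded unknown disturbance with |d| <= 0.5; choosing the
    switching gain k larger than the bound forces the state onto the
    sliding surface s = x and keeps it there despite d.
    """
    x = x0
    for i in range(steps):
        d = 0.5 * math.sin(10 * i * dt)                # disturbance, |d| <= 0.5
        s = x                                          # sliding surface s = x
        u = -k * (1 if s > 0 else -1 if s < 0 else 0)  # switching control, k > |d|_max
        x += (u + d) * dt
    return x
```

After the reaching phase the state chatters in a band around zero whose width scales with the time step, which is why practical designs smooth the sign function or, as here conceptually, adapt the gain.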
On the Hilbert-Huang Transform Theoretical Foundation
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Huang, Norden E.
2004-01-01
The Hilbert-Huang Transform (HHT) is a novel empirical method for spectrum analysis of non-linear and non-stationary signals. The HHT is a recent development, and much remains to be done to establish the theoretical foundation of the HHT algorithms. This paper develops the theoretical foundation for the convergence of the HHT sifting algorithm and proves that the finest spectrum scale will always be the first generated by the HHT Empirical Mode Decomposition (EMD) algorithm. The theoretical foundation for cutting a set of extrema data points into two parts is also developed. This then allows parallel signal processing for the computationally complex HHT sifting algorithm and its optimization in hardware.
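One pass of the EMD sifting algorithm whose convergence the paper analyzes can be sketched as follows. For simplicity this sketch uses piecewise-linear envelopes rather than the cubic splines of standard EMD, and the flat end extension is an assumption:

```python
def local_extrema(sig):
    """Indices of strict interior local maxima and minima."""
    maxima, minima = [], []
    for i in range(1, len(sig) - 1):
        if sig[i] > sig[i - 1] and sig[i] > sig[i + 1]:
            maxima.append(i)
        elif sig[i] < sig[i - 1] and sig[i] < sig[i + 1]:
            minima.append(i)
    return maxima, minima

def envelope(points, sig, n):
    """Piecewise-linear envelope through the extrema, held flat at the ends.
    Standard EMD uses cubic splines here instead."""
    pts = ([(0, sig[points[0]])] + [(i, sig[i]) for i in points]
           + [(n - 1, sig[points[-1]])])
    env, j = [], 0
    for x in range(n):
        while j < len(pts) - 2 and x > pts[j + 1][0]:
            j += 1
        (x0, y0), (x1, y1) = pts[j], pts[j + 1]
        t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
        env.append(y0 + t * (y1 - y0))
    return env

def sift_once(sig):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    maxima, minima = local_extrema(sig)
    if not maxima or not minima:
        return sig[:]          # monotone segment: nothing to sift
    upper = envelope(maxima, sig, len(sig))
    lower = envelope(minima, sig, len(sig))
    return [s - (u + l) / 2 for s, u, l in zip(sig, upper, lower)]
```

Repeated sifting drives the envelope mean toward zero and extracts the fastest oscillation first, which is exactly the "finest scale first" property the paper proves.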
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been fully achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in numerical discretization.
The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
Improved algorithms for estimating Total Alkalinity in Northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Devkota, M.; Dash, P.
2017-12-01
Ocean Acidification (OA) is one of the serious challenges that have significant impacts on the oceans. About 25% of anthropogenically generated CO2 is absorbed by the oceans, which decreases average ocean pH. This change has critical impacts on marine species, ocean ecology, and associated economics. Thirty-five years of observations indicate that the rate of change in OA parameters varies geographically, with higher variability in the northern Gulf of Mexico (N-GoM). Several studies have suggested that the Mississippi River significantly affects the carbon dynamics of the N-GoM coastal ecosystem. Total Alkalinity (TA) algorithms developed for major ocean basins produce inaccurate estimates in this region. Hence, a local algorithm to estimate TA is needed for this region, one that incorporates the local effects of oceanographic processes and complex spatial influences. In situ data collected in the N-GoM region during the GOMECC-I and II cruises and the GISR cruises (G-1, 3, 5) from 2007 to 2013 were assimilated and used to evaluate the performance of the existing TA algorithm, which uses Sea Surface Temperature (SST) and Sea Surface Salinity (SSS) as explanatory variables. To improve this algorithm, statistical analyses were first performed to improve its coefficients and functional form. Then, chlorophyll a (Chl-a) was included as an additional explanatory variable alongside SST and SSS in the multiple linear regression approach. Based on the average concentration of Chl-a over the last 15 years, the N-GoM was divided into two regions, and two separate algorithms were developed, one for each region. Finally, to address spatial non-stationarity, a Geographically Weighted Regression (GWR) algorithm was developed. The existing TA algorithm produced considerable bias, with larger bias in coastal waters. Adding Chl-a as an explanatory variable reduced the bias in the residuals and improved the algorithm's performance.
Chl-a served as a proxy for the pronounced effects of the organic carbon pump in coastal waters. The GWR algorithm provided a raster surface of coefficients, yielding still more reliable TA estimates with the least error, and it addressed the spatial non-stationarity of OA in the N-GoM, which previously developed algorithms did not.
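The core GWR step, a locally weighted least-squares fit at each target location with distance-decay weights, can be sketched for a single predictor. The Gaussian kernel, the bandwidth, and the single-predictor form are simplifying assumptions; the study's algorithms use SST, SSS, and Chl-a:

```python
import math

def gwr_fit(points, target, bandwidth):
    """Weighted least squares y ~ a + b*x at one target location.

    Each observation is (lon, lat, x, y); its weight decays with distance
    from the target via a Gaussian kernel. Repeating this at every grid
    cell yields the raster surface of local coefficients that GWR produces.
    """
    sw = swx = swy = swxx = swxy = 0.0
    tx, ty = target
    for (px, py, x, y) in points:
        dist = math.hypot(px - tx, py - ty)
        w = math.exp(-0.5 * (dist / bandwidth) ** 2)   # distance-decay weight
        sw += w; swx += w * x; swy += w * y
        swxx += w * x * x; swxy += w * x * y
    # closed-form weighted simple linear regression
    b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    a = (swy - b * swx) / sw
    return a, b
```

Because the weights change with the target location, the fitted intercept and slope vary smoothly over space, which is how GWR captures the non-stationarity that a single basin-wide regression cannot.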
ANALYZING ENVIRONMENTAL IMPACTS WITH THE WAR ALGORITHM: REVIEW AND UPDATE
This presentation will review uses of the WAR algorithm and current developments and possible future directions. The WAR algorithm is a methodology for analyzing potential environmental impacts of 1600+ chemicals used in the chemical processing and other industries. The algorithm...
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, addressed in part by focusing on how the specific subsystems handle off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to the flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes.
Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. 
Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.
ERIC Educational Resources Information Center
Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly
2006-01-01
Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…
ERIC Educational Resources Information Center
Végh, Ladislav
2016-01-01
The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs
NASA Astrophysics Data System (ADS)
Schalkoff, Robert J.; Shaaban, Khaled M.
1999-07-01
Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e., algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work on dynamic algorithms. The paper first presents the concept of dynamic algorithms, which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are presented. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.
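The nearest-neighbor decision made at each branching node of a dynamic algorithm graph can be sketched as below. The exemplar table, feature vectors, and branch names are hypothetical illustrations of the mechanism:

```python
def nearest_neighbor_branch(feature, exemplars):
    """Decision node in an algorithm graph: choose the outgoing branch
    whose stored exemplar feature vector is closest (1-NN, squared
    Euclidean distance) to the features of the current input."""
    best_branch, best_d2 = None, float("inf")
    for branch, vec in exemplars.items():
        d2 = sum((f - v) ** 2 for f, v in zip(feature, vec))
        if d2 < best_d2:
            best_branch, best_d2 = branch, d2
    return best_branch
```

During training, each branch accumulates exemplar feature vectors from images for which that processing path worked well; at run time the node routes new data down the branch of its nearest exemplar.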
Substructure System Identification for Finite Element Model Updating
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Blades, Eric L.
1997-01-01
This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.
Boiler-turbine control system design using a genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.; Lee, K.Y.
1995-12-01
This paper discusses the application of a genetic algorithm to control system design for a boiler-turbine plant. In particular the authors study the ability of the genetic algorithm to develop a proportional-integral (PI) controller and a state feedback controller for a non-linear multi-input/multi-output (MIMO) plant model. The plant model is presented along with a discussion of the inherent difficulties in such controller development. A sketch of the genetic algorithm (GA) is presented and its strategy as a method of control system design is discussed. Results are presented for two different control systems that have been designed with the genetic algorithm.
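The flavor of GA-based controller design can be sketched on a toy problem: evolving PI gains that minimize integrated squared error for a linear first-order plant. The plant, fitness function, and GA parameters below are illustrative assumptions; the paper's plant is a nonlinear MIMO boiler-turbine model:

```python
import random

def ise(kp, ki, dt=0.01, steps=500):
    """Integral of squared error for PI control of dy/dt = -y + u, setpoint 1."""
    y = integ = cost = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += (-y + u) * dt          # Euler step of the plant
        cost += e * e * dt
    return cost

def ga_tune_pi(pop_size=30, gens=40, seed=1):
    """Toy GA: elitist selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: ise(*g))            # rank by fitness
        parents = pop[: pop_size // 2]             # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = tuple(x if rng.random() < 0.5 else y for x, y in zip(a, b))
            child = tuple(max(0.0, g + rng.gauss(0, 0.3)) for g in child)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda g: ise(*g))
```

The appeal noted in the abstract carries over even to this sketch: the GA needs only a fitness evaluation of the closed loop, not gradients or a linearized model, which is what makes it attractive for the nonlinear MIMO case.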
NASA Technical Reports Server (NTRS)
Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor);
1998-01-01
Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent) computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit then the step size is reduced by a factor of 2, thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
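The adaptive steepest-ascent component described above (perturb each variable, step along the best coordinate, and halve the step when the preferred direction reverses) can be sketched as follows; the parameter values are illustrative:

```python
def adaptive_ascent(f, x, step=0.5, min_step=1e-6, max_iter=10000):
    """Adaptive steepest-ascent hill climbing (maximizes f).

    Each iteration perturbs every variable in both directions, moves along
    the single best coordinate, and halves the step size whenever the best
    direction is the exact reverse of the previous one, so the step adapts
    to the terrain as the abstract describes.
    """
    last_dir = None
    for _ in range(max_iter):
        if step < min_step:
            break
        base = f(x)
        best_gain, best_move = 0.0, None
        for i in range(len(x)):                 # probe each coordinate
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                gain = f(trial) - base
                if gain > best_gain:
                    best_gain, best_move = gain, (i, sign)
        if best_move is None:                   # no improving move: refine step
            step /= 2.0
            continue
        if (last_dir is not None and best_move[0] == last_dir[0]
                and best_move[1] == -last_dir[1]):
            step /= 2.0                         # direction reversed: halve step
        i, sign = best_move
        x = list(x)
        x[i] += sign * step
        last_dir = best_move
    return x
```

In the hybrid scheme, a deterministic refinement like this would polish the promising candidates produced by the stochastic evolutionary phase.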
Balancing Contention and Synchronization on the Intel Paragon
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Nicol, David M.
1996-01-01
The Intel Paragon is a mesh-connected distributed-memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately, global synchronization has substantial overhead on the Paragon. At the same time, the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmissions from the same node to some extent. We develop a generalization of Scott's algorithm that executes the complete exchange with a prescribed amount of contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to trade off contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take interconnection structure into account. The bounded-contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.
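For readers unfamiliar with the pattern, a complete exchange can be scheduled as a sequence of globally synchronized pairwise phases. The XOR schedule below is the classic generic construction for illustration only; it is neither Scott's mesh-aware schedule nor the bounded-contention generalization:

```python
def complete_exchange_schedule(n):
    """Synchronized complete-exchange schedule for n nodes (n a power of two):
    in phase k, every node p exchanges with partner p XOR k, so each phase is
    a perfect matching and every pair of nodes meets exactly once."""
    assert n > 1 and n & (n - 1) == 0, "n must be a power of two"
    phases = []
    for k in range(1, n):
        # keep each unordered pair once, with the smaller id first
        phases.append([(p, p ^ k) for p in range(n) if p < p ^ k])
    return phases
```

With global synchronization between phases, no link ever carries two messages of the same phase; relaxing that barrier, as in the bounded-contention algorithm, lets phases overlap at the price of link contention.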
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.
2002-01-01
We are developing science analyses algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors. Additional information is contained in the original extended abstract.
Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng
2015-01-01
Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of lacking sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. 
Allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites the progress in the research of computational identification of cooperative TF pairs.
Fault-tolerant clock synchronization in distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.
1990-01-01
Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
Genetic Algorithms and Local Search
NASA Technical Reports Server (NTRS)
Whitley, Darrell
1996-01-01
The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
Identification of genomic islands in six plant pathogens.
Chen, Ling-Ling
2006-06-07
Genomic islands (GIs), which are acquired by horizontal gene transfer, play important roles in microbial evolution. In this paper, the GIs of six completely sequenced plant pathogens are identified using a windowless method based on the Z curve representation of DNA sequences. Consequently, four, eight, four, one, two and four GIs longer than 20 kb are recognized in the plant pathogens Agrobacterium tumefaciens str. C58, Ralstonia solanacearum GMI1000, Xanthomonas axonopodis pv. citri str. 306 (Xac), Xanthomonas campestris pv. campestris str. ATCC33913 (Xcc), Xylella fastidiosa 9a5c and Pseudomonas syringae pv. tomato str. DC3000, respectively. Most of these regions share a set of conserved GI features, including an abrupt change in GC content relative to the rest of the genome, the existence of integrase genes at the junction, the use of tRNAs as integration sites, the presence of genetic mobility genes, and differences in codon usage, codon preference and amino acid usage. The identification of these GIs will benefit research on these six important phytopathogens.
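The windowless analysis rests on the z-component of the Z curve, z_n = (A_n + T_n) − (G_n + C_n), whose local slope tracks GC content without any choice of window size. A minimal sketch of that component only, not the paper's full GI-detection procedure:

```python
def z_curve_gc(seq):
    """Cumulative z-component of the Z curve: z_n = (A_n + T_n) - (G_n + C_n)
    over the first n bases. Ambiguous bases contribute zero."""
    z, curve = 0, []
    for base in seq.upper():
        z += 1 if base in "AT" else -1 if base in "GC" else 0
        curve.append(z)
    return curve
```

Segments where the slope of z_n departs sharply from the genome-wide trend are GC-anomalous and therefore candidate genomic islands.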
The Role of Monsoon-Like Zonally Asymmetric Heating in Interhemispheric Transport
NASA Technical Reports Server (NTRS)
Chen, Gang; Orbe, Clara; Waugh, Darryn
2017-01-01
While the importance of the seasonal migration of the zonally averaged Hadley circulation for interhemispheric transport of trace gases has been recognized, few studies have examined the role of the zonally asymmetric monsoonal circulation. This study investigates the role of monsoon-like zonally asymmetric heating on interhemispheric transport using a dry atmospheric model that is forced by idealized Newtonian relaxation to a prescribed radiative equilibrium temperature. When only the seasonal cycle of zonally symmetric heating is considered, the mean age of air in the Southern Hemisphere since last contact with the Northern Hemisphere midlatitude boundary layer is much larger than observed. The introduction of monsoon-like zonally asymmetric heating not only reduces the mean age of tropospheric air to more realistic values, but also produces an upper-tropospheric cross-equatorial transport pathway in boreal summer that resembles the transport pathway simulated in the NASA Global Modeling Initiative (GMI) Chemistry Transport Model driven with MERRA meteorological fields. These results highlight that the monsoon-induced eddy circulation plays an important role in the interhemispheric transport of long-lived chemical constituents.
Unsupervised, Robust Estimation-based Clustering for Multispectral Images
NASA Technical Reports Server (NTRS)
Netanyahu, Nathan S.
1997-01-01
To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, plus extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.
Real time target allocation in cooperative unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kudleppanavar, Ganesh
The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitation of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest-neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and XBee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
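A greedy, energy-aware nearest-neighbour assignment in the spirit of the abstract might look as follows. The data layout and the one-energy-unit-per-distance-unit cost model are illustrative assumptions, not the thesis's actual algorithm:

```python
import math

def assign_target(new_target, uavs):
    """Greedy nearest-neighbour allocation: give the new target to the UAV
    whose current route endpoint is closest, provided its remaining energy
    covers the extra leg. `uavs` maps id -> {"pos": (x, y), "energy": float}.
    Assumed cost model: 1 energy unit per distance unit."""
    best_id, best_d = None, math.inf
    for uid, u in uavs.items():
        d = math.dist(u["pos"], new_target)
        if d < best_d and u["energy"] >= d:
            best_id, best_d = uid, d
    if best_id is not None:
        uavs[best_id]["pos"] = new_target      # extend this UAV's route
        uavs[best_id]["energy"] -= best_d      # deduct the travel cost
    return best_id  # None if no UAV has enough energy
```

Because each decision uses only local positions and energy levels, this style of rule needs far less inter-vehicle communication than solving a global mixed-integer program.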
Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Peissig, Peggy L; Denny, Joshua C; Kho, Abel N; Miller, Aaron; Pathak, Jyotishman
2012-01-01
The development of Electronic Health Record (EHR)-based phenotype selection algorithms is a non-trivial and highly iterative process involving domain experts and informaticians. To make it easier to port algorithms across institutions, it is desirable to represent them using an unambiguous formal specification language. For this purpose we evaluated the recently developed National Quality Forum (NQF) information model designed for EHR-based quality measures: the Quality Data Model (QDM). We selected 9 phenotyping algorithms that had been previously developed as part of the eMERGE consortium and translated them into QDM format. Our study concluded that the QDM contains several core elements that make it a promising format for EHR-driven phenotyping algorithms for clinical research. However, we also found areas in which the QDM could be usefully extended, such as representing information extracted from clinical text, and the ability to handle algorithms that do not consist of Boolean combinations of criteria.
Computing border bases using mutant strategies
NASA Astrophysics Data System (ADS)
Ullah, E.; Abbas Khan, S.
2014-01-01
Border bases, a generalization of Gröbner bases, have been actively studied in recent years due to their applicability to industrial problems. In cryptography and coding theory, a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates the development of optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree-compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower-degree polynomials in the ideal. The mutant strategies aim to distinguish special lower-degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm called the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space and time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.
Model reference adaptive control of robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo
1991-01-01
This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible-joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT-based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments of the algorithm expanded the number of plants that could be controlled; however, a steady-state error was introduced in the response. These problems led to the introduction of modifications to the algorithms so that they could control a wider class of plants and at the same time asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned above. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.
Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K
2015-01-01
Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60-0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC.
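The c-statistic reported for each model is the probability of concordance between risk scores and outcomes. A small self-contained sketch of the standard pairwise definition (not the authors' code), suitable for small cohorts:

```python
def c_statistic(scores, outcomes):
    """Concordance (c-statistic): the probability that a randomly chosen
    subject who developed the outcome received a higher risk score than
    a randomly chosen subject who did not (ties count as 1/2)."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for n in nonevents:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs
```

A value of 0.5 means the scores discriminate no better than chance; the 0.61 to 0.64 values in the abstract indicate modest discrimination.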
PMID:24169273
Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M
2018-01-01
Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. To develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis code as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
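The four validation metrics used in this study follow directly from the 2x2 table of algorithm result versus diagnosis-code reference standard. A minimal sketch, with the example counts purely illustrative:

```python
def validation_metrics(tp, fp, fn, tn):
    """Standard 2x2 validation metrics for a claims-based algorithm
    scored against a diagnosis-code reference standard."""
    return {
        "ppv": tp / (tp + fp),          # of algorithm-positives, truly positive
        "npv": tn / (tn + fn),          # of algorithm-negatives, truly negative
        "sensitivity": tp / (tp + fn),  # of true cases, flagged by the algorithm
        "specificity": tn / (tn + fp),  # of non-cases, correctly excluded
    }
```

Requiring two or more MCs/PVs raises PPV at the cost of sensitivity, exactly the trade-off the abstract reports: an algorithm can be highly trustworthy when it flags a patient yet still miss most true cases.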
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
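Horner's method, the first of the two topics, evaluates a_n x^n + ... + a_1 x + a_0 with only n multiplications by nesting the terms as ((a_n x + a_(n-1)) x + ...) x + a_0. A minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's method.
    coeffs are listed highest degree first: [a_n, ..., a_1, a_0]."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result
```

For example, horner([2, -3, 0, 5], 2) evaluates 2x^3 - 3x^2 + 5 at x = 2 and returns 9.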
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
Multivariate statistical model for 3D image segmentation with application to medical images.
John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O
2003-12-01
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
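The overlap criterion used for comparison against the IBSR expert segmentations can be illustrated with a Jaccard-style ratio over voxel sets. This sketch assumes that definition, which may differ in detail from the article's exact metric:

```python
def overlap_ratio(seg, ref):
    """Jaccard-style overlap between an algorithmic segmentation and an
    expert reference, each given as a collection of voxel indices."""
    seg, ref = set(seg), set(ref)
    if not seg and not ref:
        return 1.0  # two empty segmentations agree perfectly
    return len(seg & ref) / len(seg | ref)
```

Under this metric an overlap of 1.0 means voxel-perfect agreement, so the roughly 80% figure quoted above sits between the other automated methods (55%) and repeated manual segmentation (85%).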
NASA Astrophysics Data System (ADS)
Nikitin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.; Botygin, I. A.
2017-02-01
The results of research on existing routing protocols in wireless networks and their main features are discussed in this paper. Based on the protocol data, routing protocols for wireless networks, including route search algorithms and phone directory exchange algorithms, were designed using the 'WiFi-Direct' technology. Algorithms that dispense with the IP protocol were designed, which increased their efficiency by working only with the MAC addresses of the devices. The developed algorithms are expected to be used in mobile software engineering with the Android platform taken as a base. Simpler algorithms and formats than those of the well-known routing protocols, together with the rejection of the IP protocol, enable the developed protocols to be used on more primitive mobile devices. Implementing the protocols in industry enables data transmission networks to be created among workstations and mobile robots without any access points.
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research into cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). The authors' proposed future developments in cueing algorithms are also described, as is the new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted.
Adaptive control in the presence of unmodeled dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rohrs, C. E.
1982-01-01
Stability and robustness properties of a wide class of adaptive control algorithms in the presence of unmodeled dynamics and output disturbances were investigated. The class of adaptive algorithms considered are those commonly referred to as model reference adaptive control algorithms, self-tuning controllers, and dead beat adaptive controllers, developed for both continuous-time systems and discrete-time systems. A unified analytical approach was developed to examine the class of existing adaptive algorithms. It was discovered that all existing algorithms contain an infinite gain operator in the dynamic system that defines command reference errors and parameter errors; it is argued that such an infinite gain operator appears to be generic to all adaptive algorithms, whether they exhibit explicit or implicit parameter identification. It is concluded that none of the adaptive algorithms considered can be used with confidence in a practical control system design, because instability will set in with a high probability.
Development and implementation of clinical algorithms in occupational health practice.
Ghafur, Imran; Lalloo, Drushca; Macdonald, Ewan B; Menon, Manju
2013-12-01
Occupational health (OH) practice is framed by legal, ethical, and regulatory requirements. Integrating this information into daily practice can be a difficult task. We devised evidence-based framework standards of good practice that would aid clinical management, and assessed their impact. The clinical algorithm was the method deemed most appropriate to our needs. Using "the first OH consultation" as an example, the development, implementation, and evaluation of an algorithm is described. The first OH consultation algorithm was developed. Evaluation demonstrated an overall improvement in recording of information, specifically consent, recreational drug history, function, and review arrangements. Clinical algorithms can be a method for assimilating and succinctly presenting the various facets of OH practice, for use by all OH clinicians as a practical guide and as a way of improving quality in clinical record-keeping.
Brier, Jessica; Carolyn, Moalem; Haverly, Marsha; Januario, Mary Ellen; Padula, Cynthia; Tal, Ahuva; Triosh, Henia
2015-03-01
To develop a clinical algorithm to guide nurses' critical thinking through systematic surveillance, assessment, actions required and communication strategies. To achieve this, an international, multiphase project was initiated. Patients receive hospital care postoperatively because they require the skilled surveillance of nurses. Effective assessment of postoperative patients is essential for early detection of clinical deterioration and optimal care management. Despite the significant amount of time devoted to surveillance activities, there is a lack of evidence that nurses use a consistent, systematic approach in surveillance, management and communication, potentially leading to less optimal outcomes. Several explanations for the lack of consistency have been suggested in the literature. Mixed methods approach. Retrospective chart review; semi-structured interviews conducted with expert nurses (n = 10); algorithm development. Themes developed from the semi-structured interviews, including (1) complete, systematic assessment, (2) something is not right, (3) validating with others, (4) influencing factors, and (5) frustration with lack of response when communicating findings, were used as the basis for development of the Surveillance Algorithm for Post-Surgical Patients. The algorithm proved beneficial based on limited use in clinical settings. Further work is needed to fully test it in education and practice. The Surveillance Algorithm for Post-Surgical Patients represents the approach of expert nurses, and serves to guide less expert nurses' observations, critical thinking, actions and communication. Based on this approach, the algorithm assists nurses to develop skills promoting early detection, intervention and communication in cases of patient deterioration. © 2014 John Wiley & Sons Ltd.
Development of a pneumatic high-angle-of-attack Flush Airdata Sensing (HI-FADS) system
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.
1992-01-01
The HI-FADS system design is an evolution of the FADS systems (e.g., Larson et al., 1980, 1987), which emphasizes the entire airdata system development. This paper describes the HI-FADS measurement system, with particular consideration given to the basic measurement hardware and the development of the HI-FADS aerodynamic model and the basic nonlinear regression algorithm. Algorithm initialization techniques are developed, and potential algorithm divergence problems are discussed. Data derived from HI-FADS flight tests are used to demonstrate the system accuracies and to illustrate the developed concepts and methods.
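The abstract does not reproduce the HI-FADS aerodynamic model or regression. As a hedged sketch, the code below assumes the commonly published FADS port-pressure model p_i = qc(cos²θ_i + ε·sin²θ_i) + p∞ with local incidence θ_i = λ_i − α, and estimates (qc, p∞, α) by Gauss-Newton nonlinear least squares; the port angles, calibration parameter eps, and initialization are all illustrative assumptions, not the paper's values.

```python
import numpy as np

def fads_model(params, lam, eps):
    """Assumed FADS-style port pressure model (illustrative only)."""
    qc, pinf, alpha = params
    th = lam - alpha                      # local flow incidence at each port
    return qc * (np.cos(th) ** 2 + eps * np.sin(th) ** 2) + pinf

def gauss_newton(p_meas, lam, eps, x0, iters=20):
    """Estimate (qc, p_inf, alpha) from port pressures by Gauss-Newton."""
    x = np.array(x0, float)
    for _ in range(iters):
        qc, pinf, alpha = x
        th = lam - alpha
        r = p_meas - fads_model(x, lam, eps)
        # Jacobian of the residual w.r.t. (qc, p_inf, alpha)
        J = np.column_stack([
            -(np.cos(th) ** 2 + eps * np.sin(th) ** 2),
            -np.ones_like(lam),
            -qc * (1.0 - eps) * np.sin(2.0 * th),
        ])
        x -= np.linalg.lstsq(J, r, rcond=None)[0]  # step: (J^T J)^-1 J^T r
    return x
```

With synthetic pressures from five ports the regression recovers the generating parameters, which is the basic mechanism initialization techniques for such algorithms must safeguard.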
Development of an algorithm to plan and simulate a new interventional procedure.
Fujita, Buntaro; Kütting, Maximilian; Scholtz, Smita; Utzenrath, Marc; Hakim-Meibodi, Kavous; Paluszkiewicz, Lech; Schmitz, Christoph; Börgermann, Jochen; Gummert, Jan; Steinseifer, Ulrich; Ensminger, Stephan
2015-07-01
The number of implanted biological valves for treatment of valvular heart disease is growing and a percentage of these patients will eventually undergo a transcatheter valve-in-valve (ViV) procedure. Some of these patients will represent challenging cases. The aim of this study was to develop a feasible algorithm to plan and simulate in vitro a new interventional procedure to improve patient outcome. In addition to standard diagnostic routine, our algorithm includes 3D printing of the annulus, hydrodynamic measurements and high-speed analysis of leaflet kinematics after simulation of the procedure in different prosthesis positions as well as X-ray imaging of the most suitable valve position to create a 'blueprint' for the patient procedure. This algorithm was developed for a patient with a degenerated Perceval aortic sutureless prosthesis requiring a ViV procedure. Different ViV procedures were assessed in the algorithm and, based on these results, the best option for the patient was chosen. The actual procedure went exactly as planned with the help of this algorithm. Here we have developed a new technically feasible algorithm simulating important aspects of a novel interventional procedure prior to the actual procedure. This algorithm can be applied to virtually all patients requiring a novel interventional procedure to help identify risks and find optimal parameters for prosthesis selection and placement in order to maximize safety for the patient. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin
2017-06-22
The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps:
1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop.
2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes.
3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs).
4. Guideline recommendations and experts' opinions were summarised as case-specific management recommendations (N-of-one guidelines).
5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews.
6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm.
The resulting meta-algorithm reflects the logic of a GP's encounter with a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and thereby offer guidance to the practitioner.
In contrast to simple, symptom-oriented algorithms, the meta-algorithm illustrates a superordinate process that permanently keeps the entire patient in view. The meta-algorithm represents the backbone of the multimorbidity guideline of the German College of General Practitioners and Family Physicians. This article presents solely the development phase; the meta-algorithm needs to be piloted before it can be implemented. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons
NASA Astrophysics Data System (ADS)
Fang, Le-Heng; Lin, Wei; Luo, Qiang
In this article, inspired in part by physiological evidence of the brain's growth and development, we develop a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithms have a remarkable ability to perform effective regression and accurate classification. In particular, they are able to sustain a reduction of the loss function when the dynamics of the trained network are bogged down near local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss functions. Indeed, we analytically demonstrate the reduction dynamics of the algorithm for different problems, and further modify the algorithms so as to obtain an improved generalization capability for the augmented neural networks. Finally, through comparison with the classical algorithm and architecture for neural network construction, we show that our constructive learning algorithms, as well as their modified versions, have better performance, such as faster training speed and smaller network size, on several representative benchmark datasets, including the MNIST dataset of handwritten digits.
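The authors' additive-neuron algorithm is not specified in the abstract. As a loose illustration of the constructive idea, growing a network by adding nonlinear units when the current fit stalls, the sketch below greedily adds tanh units chosen to correlate with the residual and refits all output weights by least squares; every name and setting here is an assumption, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def constructive_fit(x, y, max_units=10, tol=1e-4):
    """Greedily add tanh units; each new unit targets the current residual."""
    units = []                        # (slope, bias) of each hidden unit
    pred = np.zeros_like(y)
    losses = [np.mean((y - pred) ** 2)]
    for _ in range(max_units):
        r = y - pred
        # candidate pool: keep the random unit most correlated with residual
        cands = [(rng.normal(scale=3.0), rng.normal()) for _ in range(50)]
        feats = [np.tanh(a * x + b) for a, b in cands]
        best = max(range(50), key=lambda i: abs(np.dot(feats[i], r)))
        units.append(cands[best])
        # jointly refit all output weights (linear least squares),
        # so the loss can never increase when a unit is added
        H = np.column_stack([np.tanh(a * x + b) for a, b in units])
        w = np.linalg.lstsq(H, y, rcond=None)[0]
        pred = H @ w
        losses.append(np.mean((y - pred) ** 2))
        if losses[-1] < tol:
            break
    return units, w, losses
```

Because the output layer is refit jointly after each addition, the training loss is monotonically non-increasing, a toy analogue of the "sustained reduction" property the abstract claims.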
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in handling off-nominal conditions, fault tolerance, and response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET).
For addressing fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. 
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - the ARINC 653-partitioned operating system, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis.
Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, and latency. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.
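As a back-of-the-envelope illustration of why context switching masks remote-load latency, the toy cost model below charges each remote access only the portion of its latency that cannot be overlapped with other threads' compute. The model and all numbers are illustrative assumptions, not the study's profiling results.

```python
def run_time(n_accesses, compute_per_access, latency, n_threads):
    """Toy latency-hiding model: while one thread waits on a remote
    load, up to (n_threads - 1) other threads can run their compute
    slices, hiding that much of the latency. Only the unhidden part
    stalls the processor."""
    hidden = min(latency, (n_threads - 1) * compute_per_access)
    stall = latency - hidden
    return n_accesses * (compute_per_access + stall)
```

With a 100-cycle latency and 10 cycles of compute per access, a single thread pays the full latency on every access, while 11 threads hide it completely.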
Information extraction and transmission techniques for spaceborne synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.
1984-01-01
Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A simple data compression technique was developed for SAR imagery that provides 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms were proposed, and the effectiveness of each algorithm was compared quantitatively.
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
Ozmutlu, H. Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that satisfy the adaptation of local search results into the genetic algorithm with minimum relocation of the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
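The paper's chromosome encoding is only sketched in the abstract. The toy decoder below shows one common random-key scheme for unrelated parallel machines, the integer part of a key selects the machine and the fractional part orders the jobs, with setup times and job splitting omitted for brevity; it illustrates the encoding style, not GAspLA itself.

```python
def decode(keys, proc_times):
    """Decode a random-key chromosome into a schedule.

    keys: one value in [0, m) per job; int part picks the machine,
    fractional part gives the job's position in the sequence.
    proc_times[j][i]: time of job j on machine i (unrelated machines).
    Setup times and splitting are omitted in this sketch.
    """
    m = len(proc_times[0])
    schedule = [[] for _ in range(m)]
    for j, k in sorted(enumerate(keys), key=lambda t: t[1] % 1.0):
        schedule[int(k)].append(j)
    makespan = max(
        sum(proc_times[j][i] for j in jobs)
        for i, jobs in enumerate(schedule)
    )
    return schedule, makespan
```

A genetic algorithm then evolves only the real-valued keys; any key vector decodes to a feasible schedule, which is the main attraction of random-key representations.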
Optimization of the double dosimetry algorithm for interventional cardiologists
NASA Astrophysics Data System (ADS)
Chumak, Vadim; Morgun, Artem; Bakhanova, Elena; Voloskiy, Vitalii; Borodynchik, Elena
2014-11-01
A double dosimetry method is recommended in interventional cardiology (IC) to assess occupational exposure, yet there is currently no common and universal algorithm for effective dose estimation. In this work, a flexible and adaptive algorithm-building methodology was developed, and a specific algorithm applicable to typical irradiation conditions of IC procedures was obtained. It was shown that the obtained algorithm agrees well with experimental measurements and is less conservative compared to other known algorithms.
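The study's coefficients are not given in the abstract. Double-dosimetry algorithms are typically linear combinations of the under-apron and over-apron dosimeter readings, as in this minimal sketch; the coefficient values here are purely illustrative, not the optimized ones obtained in the paper.

```python
def effective_dose(h_under, h_over, a=0.5, b=0.025):
    """Generic double-dosimetry estimate E = a*H_under + b*H_over.

    h_under: dose equivalent from the dosimeter worn under the apron (mSv)
    h_over:  reading of the unshielded (collar) dosimeter (mSv)
    a, b:    algorithm coefficients; the defaults are illustrative
             placeholders, real algorithms tune them to the
             irradiation conditions of the procedures.
    """
    return a * h_under + b * h_over
```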
Health management system for rocket engines
NASA Technical Reports Server (NTRS)
Nemeth, Edward
1990-01-01
The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.
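The abstract does not detail the nine detection strategies. The toy detector below illustrates the general principle such algorithms exploit: a measurement that departs from its own recent baseline can be flagged well before it crosses a fixed redline. The window length and sigma threshold are assumptions for illustration only.

```python
import statistics

def detect_anomaly(signal, window=20, k=4.0):
    """Return the first index where a sample departs from the mean of
    the preceding `window` samples by more than k standard deviations,
    or None. Illustrative only; the SSME algorithm uses
    engine-specific measurements and logic."""
    for n in range(window, len(signal)):
        base = signal[n - window:n]
        mu = statistics.fmean(base)
        sd = statistics.pstdev(base) or 1e-12   # guard a flat baseline
        if abs(signal[n] - mu) > k * sd:
            return n
    return None
```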
Using qualitative research to inform development of a diagnostic algorithm for UTI in children.
de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D
2013-06-01
Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in children aged <5 years presenting acutely unwell to primary care. Qualitative methods were applied using semi-structured interviews of 30 UK doctors and nurses working with young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these with the DUTY CRF for presence or absence and terminology. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
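The optimal and nonlinear cueing algorithms above are beyond the scope of an abstract, but the "washout" behaviour they refine can be illustrated with a classical first-order high-pass washout filter: onset accelerations pass through to the motion base, while sustained accelerations decay to zero so the base can recenter. This is a textbook sketch, not the paper's optimal-control formulation; the time constant is an assumption.

```python
import math

def washout_highpass(accel, dt, tau=2.0):
    """Discrete first-order high-pass ('classical washout') filter.
    Passes onset cues, washes out sustained acceleration with time
    constant tau so the motion platform drifts back to neutral."""
    a = math.exp(-dt / tau)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in accel:
        y = a * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out
```

For a step in commanded acceleration, the filter output initially follows the step (the onset cue) and then decays toward zero, which is exactly the washing-out of large sustained cues the nonlinear algorithm modulates in a time-varying way.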
Algorithms and programming tools for image processing on the MPP, part 2
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1986-01-01
A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
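The MPP pyramid-filtering code itself is not available here. As a generic serial stand-in for the data-parallel original, the sketch below builds an image pyramid with a separable [1, 2, 1]/4 smoothing kernel followed by 2x downsampling per level; kernel and level count are assumptions.

```python
import numpy as np

def pyramid(img, levels=3):
    """Build a simple image pyramid: separable [1,2,1]/4 smoothing then
    2x downsampling per level (a generic construction illustrating
    pyramid image filtering, not the MPP implementation)."""
    out = [img]
    k = np.array([0.25, 0.5, 0.25])
    for _ in range(levels - 1):
        a = out[-1]
        # smooth rows, then columns (separable filter)
        a = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, a)
        a = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, a)
        out.append(a[::2, ::2])
    return out
```

On a mesh machine like the MPP each smoothing step maps naturally onto nearest-neighbour communication, which is why pyramid filtering suited that architecture.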
System for Anomaly and Failure Detection (SAFD) system development
NASA Technical Reports Server (NTRS)
Oreilly, D.
1992-01-01
This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
Generation and assessment of turntable SAR data for the support of ATR development
NASA Astrophysics Data System (ADS)
Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby
1998-10-01
Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high resolution two-dimensional images of radar targets under controlled conditions for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two-dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2-D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image domain resampling, and a computationally-intensive geometric matched filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.
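As a hedged illustration of the simplest option above, the snippet below forms a turntable ISAR image of one simulated point scatterer by a plain 2-D DFT of stepped-frequency, multi-aspect phase history under a far-field, small-angle approximation. The near-field geometric corrections the paper evaluates are deliberately omitted, and all radar parameters are invented for the example.

```python
import numpy as np

c = 3e8
freqs = np.linspace(9e9, 10e9, 64)            # stepped-frequency sweep (Hz)
angles = np.deg2rad(np.linspace(-2, 2, 64))   # turntable aspect angles

# phase history of one point scatterer at (x0, y0) on the turntable,
# far-field approximation: range = projection onto the radar line of sight
x0, y0 = 1.5, -0.8
f, th = np.meshgrid(freqs, angles, indexing="ij")
rng_proj = x0 * np.cos(th) + y0 * np.sin(th)
data = np.exp(-1j * 4 * np.pi * f / c * rng_proj)

# simple 2-D DFT image (no near-field geometric correction)
image = np.abs(np.fft.fftshift(np.fft.fft2(data)))
peak = np.unravel_index(np.argmax(image), image.shape)
```

With near-field geometry the phase is no longer a plane-wave projection, which is exactly where the resampling and matched-filter algorithms in the paper improve on this plain DFT.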
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
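The modified filter derived in the paper is not reproduced here; the snippet below shows only the textbook frequency-domain ramp filter used in the FBP step that follows rebinning, applied to one projection.

```python
import numpy as np

def ramp_filter(proj):
    """Frequency-domain ramp filtering of one projection row, the
    filtering step of (ramp) FBP. This is the standard textbook ramp,
    not the modified PFDRX filter derived in the paper."""
    n = proj.shape[-1]
    freqs = np.fft.fftfreq(n)                 # cycles/sample, DC weight 0
    return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))
```

Because the ramp zeroes the DC component, a constant projection filters to zero; the paper's exact-reconstruction variant replaces this ramp with the filter that also compensates the PFDR approximation.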
Decision Aids for Naval Air ASW
1980-03-15
Algorithm for Zone Optimization Investigation) NADC Developing Sonobuoy Pattern for Air ASW Search; DAISY (Decision Aiding Information System) Wharton ...sion making behavior.
- Artificial intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions.
- ...display presenting the uncertainty area of the target.
3.1.5 Algorithm for Zone Optimization Investigation (AZOI) -- Naval Air Development Center
- A
Picking ChIP-seq peak detectors for analyzing chromatin modification experiments
Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D.; Kluger, Yuval
2012-01-01
Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads to identify enriched regions of any length. To objectively assess its performance relative to 14 other ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that optimal parameters in one dataset typically do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development. PMID:22307239
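Qeseq itself is not reproduced in this abstract. The toy peak caller below illustrates the generic logic that such detectors build on: flag windows whose read count is improbable under a Poisson background, then join nearby flagged windows into peaks (a crude echo of Qeseq's neighbor joining). All thresholds are illustrative assumptions.

```python
import math

def call_peaks(counts, lam, alpha=1e-3, gap=1):
    """Toy threshold-and-merge peak caller, not the Qeseq algorithm.
    counts: reads per genomic window; lam: Poisson background mean.
    Windows with tail probability P[X >= count] < alpha are flagged,
    and flagged windows separated by at most `gap` windows merge."""
    def p_tail(k):                         # P[X >= k], X ~ Poisson(lam)
        p = math.exp(-lam)
        cdf = 0.0
        for i in range(k):
            cdf += p
            p *= lam / (i + 1)
        return 1.0 - cdf
    hot = [i for i, c in enumerate(counts) if p_tail(c) < alpha]
    peaks, start, prev = [], None, None
    for i in hot:
        if start is None:
            start = prev = i
        elif i - prev <= gap + 1:
            prev = i
        else:
            peaks.append((start, prev))
            start = prev = i
    if start is not None:
        peaks.append((start, prev))
    return peaks
```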
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
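The abstract does not spell out the two modified algorithms, but the underlying idea, refining an integer cross-correlation peak to subpixel precision, can be illustrated with a standard parabolic fit in 1-D (a sketch, not the authors' method):

```python
import numpy as np

def subpixel_peak_1d(corr: np.ndarray) -> float:
    """Locate the correlation peak with sub-sample precision by
    fitting a parabola through the peak and its two neighbours."""
    p = int(np.argmax(corr))
    if p == 0 or p == len(corr) - 1:
        return float(p)               # boundary peak: no refinement possible
    cm, c0, cp = corr[p - 1], corr[p], corr[p + 1]
    denom = cm - 2.0 * c0 + cp
    if denom == 0:
        return float(p)
    return p + 0.5 * (cm - cp) / denom

def estimate_shift(a: np.ndarray, b: np.ndarray) -> float:
    """Estimate the translation of signal b relative to a via full
    cross-correlation followed by subpixel peak interpolation."""
    corr = np.correlate(b, a, mode="full")
    return subpixel_peak_1d(corr) - (len(a) - 1)   # zero lag is at len(a)-1
```

For two Gaussian pulses offset by three samples, `estimate_shift` recovers a shift very close to 3.0.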
Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C
2013-03-01
To provide accurate input parameters to large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates the BC emission index for a given engine/airframe combination using engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve global climate simulations that currently use a single fleet-average value for all airplanes. In summary, an algorithm to estimate the cruise-condition black carbon emission index for commercial aircraft engines was developed; using ICAO certification data, it can evaluate the black carbon emission at a given cruise altitude and speed.
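The paper's HDMR-derived formulas are not reproduced in the abstract. The sketch below only illustrates the interface of such an estimator, using the widely cited FOA3-style smoke-number correlation as a stand-in; the coefficients and the air-to-fuel-ratio input are assumptions here, not the paper's method.

```python
def bc_emission_index_mg_per_kg(smoke_number: float, afr: float) -> float:
    """Illustrative FOA3-style estimate of the black-carbon emission
    index (mg BC per kg fuel) from the ICAO smoke number.

    ci: soot mass concentration in the exhaust (mg/m^3), for SN <= 30
    q:  exhaust volume per kg of fuel burned (m^3/kg) at the given
        air-to-fuel ratio (afr)
    """
    ci = 0.0694 * smoke_number ** 1.234   # commonly cited FOA3 correlation
    q = 0.776 * afr + 0.877               # exhaust volumetric flow per kg fuel
    return ci * q
```

A real cruise-condition algorithm would additionally correct for altitude and Mach number, as the abstract describes.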
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements to the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum-fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite-dimensional convex optimization problem, specifically as a finite-dimensional second-order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. In particular, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; and (vi) presenting initial results on an onboard table-lookup method to obtain nearly fuel-optimal solutions in real time.
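The key convexification step is to replace the nonconvex thrust-magnitude bound rho1 <= ||T_k|| <= rho2 (whose lower bound carves a hole out of the feasible set) with a slack variable Gamma_k, giving a second-order cone constraint ||T_k|| <= Gamma_k plus linear bounds rho1 <= Gamma_k <= rho2. The small check below illustrates only this constraint structure; solving the full SOCP requires an interior-point solver and is not shown.

```python
import numpy as np

def relaxed_thrust_constraints_ok(T: np.ndarray, gamma: np.ndarray,
                                  rho1: float, rho2: float,
                                  tol: float = 1e-9) -> bool:
    """Check the relaxed (convexified) thrust constraints at every
    discretization node k:
        ||T_k|| <= Gamma_k        (second-order cone)
        rho1 <= Gamma_k <= rho2   (linear)
    T: (N, 3) thrust vectors; gamma: (N,) slack magnitudes."""
    norms = np.linalg.norm(T, axis=1)
    return bool(np.all(norms <= gamma + tol)
                and np.all(gamma >= rho1 - tol)
                and np.all(gamma <= rho2 + tol))
```

At the optimum of the relaxed problem the cone constraint is active (||T_k|| = Gamma_k), which is what makes the relaxation lossless.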
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the strategy with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a purpose similar to that of tactics used for determining indefinite integrals in calculus: they suggest possible ways to attack the problem.
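As a toy illustration of the global-search scheme such tactics refine (a space is split into subspaces, and subspaces that provably contain no solution are filtered out), here is a minimal subset-sum search. It assumes non-negative integers so the pruning test is sound; it is not Specware code.

```python
from typing import List, Optional

def global_search(partial: List[int], rest: List[int],
                  target: int) -> Optional[List[int]]:
    """Minimal global-search skeleton over subsets of non-negative
    integers: split into two subspaces (include/exclude the next
    element) and prune subspaces whose partial sum already exceeds
    the target."""
    s = sum(partial)
    if s == target:
        return partial                      # extract: a solution
    if not rest or s > target:
        return None                         # filter: prune this subspace
    # split: subspaces with and without the next element
    return (global_search(partial + [rest[0]], rest[1:], target)
            or global_search(partial, rest[1:], target))
```

The structure (split, filter, extract) mirrors the generic operators a global-search strategy provides; synthesis fills them in from the problem specification.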
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest-descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms is evaluated for multichannel nonlinear active control systems. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
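The defining feature of the filtered-x approach is that the reference signal is passed through a model of the secondary path before entering the weight update. A minimal single-channel, linear filtered-x LMS sketch (far simpler than the paper's neural-network RLS algorithms, and not their code) shows the structure:

```python
import numpy as np

def fxlms(x: np.ndarray, d: np.ndarray, s_hat: np.ndarray,
          n_taps: int = 8, mu: float = 0.05):
    """Single-channel filtered-x LMS sketch.
    x: reference signal; d: disturbance at the error sensor;
    s_hat: FIR model of the secondary path (len(s_hat) <= n_taps).
    Returns the adapted controller taps and the error history."""
    N = len(x)
    w = np.zeros(n_taps)                     # adaptive controller taps
    y = np.zeros(N)                          # controller output history
    e = np.zeros(N)                          # error-sensor signal
    xf = np.convolve(x, s_hat)[:N]           # reference filtered through s_hat
    for n in range(n_taps, N):
        xb = x[n - n_taps + 1:n + 1][::-1]   # controller input buffer
        y[n] = w @ xb
        yb = y[n - len(s_hat) + 1:n + 1][::-1]
        e[n] = d[n] - s_hat @ yb             # residual after secondary path
        xfb = xf[n - n_taps + 1:n + 1][::-1]
        w += mu * e[n] * xfb                 # filtered-x LMS update
    return w, e
```

For a sinusoidal disturbance and an identity secondary path the residual error converges toward zero.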
A genetic algorithm for replica server placement
NASA Astrophysics Data System (ADS)
Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl
2012-01-01
Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One earlier approach is the greedy algorithm proposed by Qiu et al., which requires knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. We then compare our algorithm with the greedy algorithm of Qiu et al. and with the optimum placement. We found that our approach achieves better results than the greedy algorithm, although its computation time is higher.
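A replica-placement GA along these lines can be sketched as follows. The encoding, operators and parameters here are illustrative assumptions, not those of the paper: a chromosome is a set of k candidate sites, and fitness is the total distance from each client to its nearest replica.

```python
import random

def ga_replica_placement(dist, k, pop_size=30, gens=60, pmut=0.2, seed=1):
    """Toy genetic algorithm for replica placement.
    dist[c][s]: distance from client c to candidate site s."""
    rng = random.Random(seed)
    n_sites = len(dist[0])

    def cost(sites):
        return sum(min(row[s] for s in sites) for row in dist)

    def random_chrom():
        return tuple(sorted(rng.sample(range(n_sites), k)))

    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        next_pop = pop[:2]                                # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)     # mate the fitter half
            child = set(rng.sample(sorted(set(a) | set(b)), k))  # crossover
            if rng.random() < pmut:                       # mutation: swap a site
                child.discard(rng.choice(sorted(child)))
                child.add(rng.choice([s for s in range(n_sites)
                                      if s not in child]))
            next_pop.append(tuple(sorted(child)))
        pop = next_pop
    return min(pop, key=cost)
```

On a small instance where two clients sit near site 0 and two near site 2, the GA recovers the optimal placement {0, 2}.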
CHEMINFORMATICS TOOLS FOR TOXICANT CHARACTERIZATION
Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K
2016-10-01
Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3, conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristic curves using the area under the curve (AUC) to establish the predictive strength of the algorithm.
By applying the algorithm developed here (KIDScore), which was based on six annotations (the number of pronuclei equals 2 at the 1-cell stage, time from insemination to pronuclei fading at the 1-cell stage, time from insemination to the 2-cell stage, time from insemination to the 3-cell stage, time from insemination to the 5-cell stage and time from insemination to the 8-cell stage) and ranking the embryos in five groups, the implantation potential of the embryos was predicted with an AUC of 0.650. On Day 3 the KIDScore algorithm was capable of predicting blastocyst development with an AUC of 0.745 and blastocyst quality with an AUC of 0.679. In a comparison of blastocyst prediction including six other published algorithms and KIDScore, only KIDScore and one other algorithm surpassed an algorithm constructed on conventional Alpha/ESHRE consensus timings in terms of predictive power. Some morphological assessments were not available, and consequently three of the algorithms in the comparison were not used in full and may therefore have been put at a disadvantage. Algorithms based on implantation data from Day 3 embryo transfers require adjustments to be capable of predicting the implantation potential of Day 5 embryo transfers. The current study is restricted by its retrospective nature and absence of live birth information. Prospective randomized controlled trials should be used in future studies to establish the value of time-lapse technology and morphokinetic evaluation. Algorithms applicable to different culture conditions can be developed if based on large data sets of heterogeneous origin. This study was funded by Vitrolife A/S, Denmark and Vitrolife AB, Sweden. B.M.P.'s company BMP Analytics is performing consultancy for Vitrolife A/S. M.B. is employed at Vitrolife A/S. M.M.'s company ilabcomm GmbH received honorarium for consultancy from Vitrolife AB. D.K.G. received research support from Vitrolife AB. © The Author 2016.
Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.
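The AUC figures quoted above can be computed from ranked scores alone via the Mann-Whitney U statistic: the probability that a randomly chosen positive (e.g. an implanted embryo) is scored above a randomly chosen negative, counting ties as one half. A minimal sketch (not the authors' code):

```python
from typing import Sequence

def auc_from_scores(pos: Sequence[float], neg: Sequence[float]) -> float:
    """AUC as the fraction of (positive, negative) pairs in which the
    positive outscores the negative, with ties counted as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For scores [0.9, 0.8, 0.4] on positives and [0.7, 0.3] on negatives, 5 of the 6 pairs are correctly ordered, giving an AUC of 5/6.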