NASA Technical Reports Server (NTRS)
Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor);
1998-01-01
Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.
A review of ocean chlorophyll algorithms and primary production models
NASA Astrophysics Data System (ADS)
Li, Jingwen; Zhou, Song; Lv, Nan
2015-12-01
This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from ocean chlorophyll concentration. By comparing the five chlorophyll inversion algorithms, it summarizes their advantages and disadvantages and briefly analyzes trends in ocean primary production modeling.
Joint; Groom
2000-07-30
A new generation of ocean colour satellites is now operational, with frequent observation of the global ocean. This paper reviews the potential to estimate marine primary production from satellite images. The procedures involved in retrieving estimates of phytoplankton biomass, as pigment concentrations, are discussed. Algorithms are applied to SeaWiFS ocean colour data to indicate seasonal variations in phytoplankton biomass in the Celtic Sea, on the continental shelf to the south west of the UK. Algorithms to estimate primary production rates from chlorophyll concentration are compared and the advantages and disadvantages discussed. The simplest algorithms utilise correlations between chlorophyll concentration and production rate, and one equation is used to estimate daily primary production rates for the western English Channel and Celtic Sea; these estimates compare favourably with published values. Primary production for the central Celtic Sea in the period April to September inclusive is estimated from SeaWiFS data to be 102 gC m(-2) in 1998 and 93 gC m(-2) in 1999; published estimates, based on in situ incubations, are ca. 80 gC m(-2). The satellite data demonstrate large variations in primary production between 1998 and 1999, with a significant increase in late summer in 1998 which did not occur in 1999. Errors are quantified for the estimation of primary production from simple algorithms based on satellite-derived chlorophyll concentration. These data show the potential to obtain better estimates of marine primary production than are possible with ship-based methods, with the ability to detect short-lived phytoplankton blooms. In addition, the potential to estimate new production from satellite data is discussed.
Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying
2018-01-01
The metabolism of individual organisms and biological communities can be viewed as a network of metabolites connected to each other through chemical reactions. In metabolic networks, chemical reactions transform reactants into products, thereby transferring elements between these metabolites. Knowledge of how elements are transferred through reactant/product pairs allows for the identification of primary compound connections through a metabolic network. However, such information is not readily available and is often challenging to obtain for large reaction databases or genome-scale metabolic models. In this study, a new algorithm was developed for automatically predicting the element-transferring reactant/product pairs using the limited information available in the standard representation of metabolic networks. The algorithm demonstrated high efficiency in analyzing large datasets and provided accurate predictions when benchmarked with manually curated data. Applying the algorithm to the visualization of metabolic networks highlighted pathways of primary reactant/product connections and provided an organized view of element-transferring biochemical transformations. The algorithm was implemented as a new function in the open source software package PSAMM in the release v0.30 (https://zhanglab.github.io/psamm/).
NASA Technical Reports Server (NTRS)
Esaias, Wayne E.; Abbott, Mark; Carder, Kendall; Campbell, Janet; Clark, Dennis; Evans, Robert; Brown, Otis; Kearns, Ed; Kilpatrick, Kay; Balch, W.
2003-01-01
Simplistic models relating global satellite ocean color, temperature, and light to ocean net primary production (ONPP) are sensitive to the accuracy and limitations of the satellite estimate of chlorophyll and other input fields, as well as the primary productivity model. The standard MODIS ONPP product uses the new semi-analytic chlorophyll algorithm as its input for two ONPP indexes. The three primary MODIS chlorophyll estimates, as well as the SeaWiFS chlorophyll product, were used to assess global and regional performance in estimating ONPP for the full mission, concentrating on 2001. The two standard ONPP algorithms were examined at 8-day and 39 kilometer resolution to quantify the chlorophyll algorithm dependency of ONPP. Ancillary data (MLD from FNMOC, MODIS SSTD1, and PAR from the GSFC DAO) were identical. The standard MODIS ONPP estimates for annual production in 2001 were 59 and 58 Gt C for the two ONPP algorithms. Differences in ONPP using alternate chlorophylls were on the order of 10% for global annual ONPP, but ranged to 100% regionally. On all scales the differences in ONPP were smaller between MODIS and SeaWiFS than between ONPP models, or among chlorophyll algorithms within MODIS. The largest regional ONPP differences were found in the Southern Ocean (SO). In the SO, application of the semi-analytic chlorophyll resulted in not only a magnitude difference in ONPP (2x), but also a temporal shift in the time of maximum production compared to empirical algorithms when summed over standard oceanic areas. The resulting increase in global ONPP (6-7 Gt) is supported by better performance of the semi-analytic chlorophyll in the SO and other high chlorophyll regions. The differences are significant in terms of understanding regional differences and dynamics of ocean carbon transformations.
NASA Astrophysics Data System (ADS)
Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.; Zharinov, O. O.
2018-05-01
The task of choosing the type of mechanical assembly production for instrument-making enterprises of Industry 4.0 is studied. Two project algorithms, for Industry 3.0 and for Industry 4.0, are compared. The algorithm for choosing the type of mechanical assembly production in Industry 4.0 instrument-making enterprises is based on analysis of the technological route of the manufacturing process in a company equipped with cyber-physical systems. The algorithm may yield project solutions selected from either the primary or the auxiliary part of production. Its decision rules are based on an optimality criterion.
Evaluation of bio-optical algorithms to remotely sense marine primary production from space
NASA Technical Reports Server (NTRS)
Berthelot, Beatrice; Deschamps, Pierre-Yves
1994-01-01
In situ bio-optical measurements from several oceanographic campaigns were analyzed to derive a direct relationship between water column primary production P(sub t), ocean color as expressed by the ratio of reflectances R(sub 1) at 440 nm and R(sub 3) at 550 nm, and photosynthetically available radiation (PAR). The study is restricted to Morel case I waters, for which the following algorithm is proposed: log (P(sub t)) = -4.286 - 1.390 log (R(sub 1)/R(sub 3)) + 0.621 log (PAR), with P(sub t) in g C m(exp -2)/d and PAR in J m(exp -2)/d. Using this algorithm the rms accuracy of the primary production estimate is 0.17 on a logarithmic scale, i.e., a factor of 1.5. Using spectral reflectance measurements in the entire visible spectral range, the central wavelength, spectral bandwidth, and radiometric noise level requirements are investigated for the channels to be used by an ocean color space mission dedicated to estimating global marine primary production and the associated carbon fluxes. Nearly all the useful information is provided by two channels centered at 440 nm and 550 nm, but the accuracy of the primary production estimate appears weakly sensitive to spectral bandwidth, which, consequently, may be enlarged by several tens of nanometers. The sensitivity to radiometric noise, on the contrary, is strong, and a noise equivalent reflectance of 0.005 degrades the accuracy of the primary production estimate by a factor of 2 (0.14-0.25 on a logarithmic scale). The results should be applicable to evaluating the primary production of oligotrophic and mesotrophic waters, which constitute most of the open ocean.
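A minimal numerical sketch of the regression quoted above, assuming base-10 logarithms and the units stated in the abstract; the reflectance and PAR values in the example are illustrative only:

```python
import math

def primary_production(r1_440, r3_550, par):
    """Evaluate the case I water regression quoted in the abstract.

    r1_440, r3_550 : reflectances at 440 and 550 nm (dimensionless)
    par            : photosynthetically available radiation (J m^-2 d^-1)
    Returns water-column primary production P_t in g C m^-2 d^-1.
    """
    log_pt = -4.286 - 1.390 * math.log10(r1_440 / r3_550) + 0.621 * math.log10(par)
    return 10.0 ** log_pt

# Illustrative inputs: a reflectance ratio of 0.5 and daily PAR of 1.0e7 J m^-2
print(primary_production(0.03, 0.06, 1.0e7))
```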
Chanona, J; Ribes, J; Seco, A; Ferrer, J
2006-01-01
This paper presents a model-knowledge based algorithm for optimising the design and operation of the primary sludge fermentation process. This process is a recently adopted method for obtaining the volatile fatty acids (VFA), needed to improve biological nutrient removal processes, directly from the raw wastewater. The proposed algorithm consists of a heuristic reasoning algorithm based on expert knowledge of the process. Only effluent VFA and the sludge blanket height (SBH) have to be set as design criteria, and the optimisation algorithm obtains the minimum return sludge and waste sludge flow rates which fulfil those design criteria. A pilot plant fed with municipal raw wastewater was operated in order to obtain experimental results supporting the developed algorithm's groundwork. The experimental results indicate that when SBH was increased, higher solids retention time was obtained in the settler and VFA production increased. Higher recirculation flow rates also resulted in higher VFA production. Finally, the developed algorithm has been tested by simulating different design conditions, with very good results. It has been able to find the optimal operating conditions in all cases in which the preset design conditions could be achieved. Furthermore, this is a general algorithm that can be applied to any fermentation-elutriation scheme with or without a fermentation reactor.
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Wang, W.; Ganguly, S.; Li, S.; Michaelis, A.; Higuchi, A.; Takenaka, H.; Nemani, R. R.
2017-12-01
New geostationary sensors such as the AHI (Advanced Himawari Imager on Himawari-8) and the ABI (Advanced Baseline Imager on GOES-16) have the potential to advance ecosystem modeling, particularly of diurnally varying phenomena, through frequent observations. These sensors have channels similar to those of MODIS (MODerate resolution Imaging Spectroradiometer), which allows us to utilize the knowledge and experience gained in MODIS data processing. Here, we developed a sub-hourly Gross Primary Production (GPP) algorithm, leveraging the MODIS MOD17 GPP algorithm. We ran the model at 1-km resolution over Japan and Australia using geo-corrected AHI data. Solar radiation was calculated directly from AHI using a neural network technique. The other necessary climate data were derived from weather stations and other satellite data. The sub-hourly estimates of GPP were first compared with ground-measured GPP at various Fluxnet sites. We also compared the AHI GPP with MOD17 GPP, and analyzed the differences in spatial patterns and the effect of diurnal changes in climate forcing. The sub-hourly GPP products require massive storage and strong computational power. We use the NEX (NASA Earth Exchange) facility to produce the GPP products. This GPP algorithm can be applied to other geostationary satellites, including GOES-16, in the future.
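The abstract builds on the MOD17 light-use-efficiency approach; below is a minimal sketch of that style of GPP calculation for a single time step. The ramp ranges and the efficiency value are illustrative placeholders, not the operational biome-specific parameters:

```python
def linear_ramp(x, x_min, x_max):
    """0 below x_min, 1 above x_max, linear in between (MOD17-style scalar)."""
    return min(1.0, max(0.0, (x - x_min) / (x_max - x_min)))

def gpp_subhourly(par, fpar, tmin_c, vpd_pa,
                  eps_max=0.001,            # kg C per MJ of APAR; illustrative only
                  tmin_range=(-8.0, 11.4),  # degC ramp; placeholder values
                  vpd_range=(650.0, 3100.0)):  # Pa ramp; placeholder values
    """MOD17-style light-use-efficiency GPP for one time step.

    par  : incident PAR for the time step (MJ m^-2)
    fpar : fraction of PAR absorbed by the canopy (0-1)
    Returns GPP in kg C m^-2 for the time step.
    """
    t_scalar = linear_ramp(tmin_c, *tmin_range)
    # The VPD scalar decreases as vapour pressure deficit grows
    v_scalar = 1.0 - linear_ramp(vpd_pa, *vpd_range)
    return eps_max * t_scalar * v_scalar * fpar * par

# Illustrative 10-minute step around midday
print(gpp_subhourly(par=0.45, fpar=0.7, tmin_c=12.0, vpd_pa=1500.0))
```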
NASA Technical Reports Server (NTRS)
Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)
2000-01-01
The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation caused by spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.
Decadal Changes in Global Ocean Annual Primary Production
NASA Technical Reports Server (NTRS)
Gregg, Watson; Conkright, Margarita E.; Behrenfeld, Michael J.; Ginoux, Paul; Casey, Nancy W.; Koblinsky, Chester J. (Technical Monitor)
2002-01-01
The Sea-viewing Wide Field-of-View Sensor (SeaWiFS) has produced the first multi-year time series of global ocean chlorophyll observations since the demise of the Coastal Zone Color Scanner (CZCS) in 1986. Global observations from 1997-present from SeaWiFS combined with observations from 1979-1986 from the CZCS should in principle provide an opportunity to observe decadal changes in global ocean annual primary production, since chlorophyll is the primary driver for estimates of primary production. However, incompatibilities between algorithms have so far precluded quantitative analysis. We have developed and applied compatible processing methods for the CZCS, using modern advances in atmospheric correction and consistent bio-optical algorithms to bring the CZCS archive to a quality comparable with SeaWiFS. We applied blending methodologies, in which in situ observations are incorporated into the CZCS and SeaWiFS data records, to reduce residual errors. These re-analyzed, blended data records provide maximum compatibility and permit, for the first time, a quantitative analysis of the changes in global ocean primary production between the early-to-mid 1980s and the present, using synoptic satellite observations. An intercomparison of the global and regional primary production from these blended satellite observations is important to understand global climate change and its effects on ocean biota. Photosynthesis by chlorophyll-containing phytoplankton is responsible for biotic uptake of carbon in the oceans and, ultimately, potentially from the atmosphere. Global ocean annual primary production decreased by nearly 6% from the CZCS record in the early 1980s to the SeaWiFS record at present. Annual primary production in the high latitudes was responsible for most of the decadal change. Conversely, primary production in the low latitudes generally increased, with the exception of the tropical Pacific. The differences and similarities of the two data records provide evidence of how the Earth's climate may be changing and how ocean biota respond. Furthermore, the results have implications for the ocean carbon cycle.
NASA Technical Reports Server (NTRS)
Balch, William; Evans, Robert; Brown, Jim; Feldman, Gene; Mcclain, Charles; Esaias, Wayne
1992-01-01
Global pigment and primary productivity algorithms based on a new data compilation of over 12,000 stations occupied mostly in the Northern Hemisphere, from the late 1950s to 1988, were tested. The results showed high variability of the fraction of total pigment contributed by chlorophyll, which is required for subsequent predictions of primary productivity. Two models, which predict pigment concentration normalized to an attenuation length of euphotic depth, were checked against 2,800 vertical profiles of pigments. Phaeopigments consistently showed maxima at about one optical depth below the chlorophyll maxima. CZCS data coincident with the sea truth data were also checked. A regression of satellite-derived pigment vs ship-derived pigment had a coefficient of determination. The satellite underestimated the true pigment concentration in mesotrophic and oligotrophic waters and overestimated the pigment concentration in eutrophic waters. The error in the satellite estimate showed no trends with time between 1978 and 1986.
Effects of sea ice cover on satellite-detected primary production in the Arctic Ocean
Kahru, Mati; Lee, Zhongping; Mitchell, B. Greg; Nevison, Cynthia D.
2016-01-01
The influence of decreasing Arctic sea ice on net primary production (NPP) in the Arctic Ocean has been considered in multiple publications but is not well constrained owing to the potentially large errors in satellite algorithms. In particular, the Arctic Ocean is rich in coloured dissolved organic matter (CDOM) that interferes in the detection of chlorophyll a concentration of the standard algorithm, which is the primary input to NPP models. We used the quasi-analytic algorithm (Lee et al. 2002 Appl. Opt. 41, 5755−5772. (doi:10.1364/AO.41.005755)) that separates absorption by phytoplankton from absorption by CDOM and detrital matter. We merged satellite data from multiple satellite sensors and created a 19 year time series (1997–2015) of NPP. During this period, both the estimated annual total and the summer monthly maximum pan-Arctic NPP increased by about 47%. Positive monthly anomalies in NPP are highly correlated with positive anomalies in open water area during the summer months. Following the earlier ice retreat, the start of the high-productivity season has become earlier, e.g. at a mean rate of −3.0 d yr−1 in the northern Barents Sea, and the length of the high-productivity period has increased from 15 days in 1998 to 62 days in 2015. While in some areas the termination of the productive season has been extended, owing to delayed ice formation, the termination has also become earlier in other areas, likely owing to limited nutrients. PMID:27881759
The GOES-R Product Generation Architecture
NASA Astrophysics Data System (ADS)
Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.
2011-12-01
The GOES-R system will substantially improve users' ability to succeed in their work by providing data from significantly enhanced instruments, with higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation (PG) architecture is designed to provide the computer and memory resources necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
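A minimal, purely illustrative sketch of how the three SBA roles named above could compose; the class names Executive, Dispatcher, and Strategy come from the abstract, but the method signatures and the toy algorithm are assumptions, not GOES-R GS interfaces:

```python
class Strategy:
    """Decides whether an algorithm has the inputs it needs to run."""
    def __init__(self, required_inputs):
        self.required_inputs = set(required_inputs)

    def ready(self, available):
        return self.required_inputs <= set(available)

class Dispatcher:
    """Hands the algorithm the data it asked for."""
    def __init__(self, data_fabric):
        self.data_fabric = data_fabric

    def gather(self, names):
        return {name: self.data_fabric[name] for name in names}

class Executive:
    """Wraps a science algorithm as a service and drives one execution cycle."""
    def __init__(self, algorithm, strategy, dispatcher):
        self.algorithm, self.strategy, self.dispatcher = algorithm, strategy, dispatcher

    def on_data_event(self, available):
        if self.strategy.ready(available):
            inputs = self.dispatcher.gather(self.strategy.required_inputs)
            return self.algorithm(**inputs)
        return None

# Usage: a toy "cloud mask" algorithm registered as a service
fabric = {"radiances": [0.1, 0.8, 0.3], "geolocation": [(0, 0), (0, 1), (1, 0)]}
service = Executive(lambda radiances, geolocation: [r > 0.5 for r in radiances],
                    Strategy(["radiances", "geolocation"]),
                    Dispatcher(fabric))
print(service.on_data_event(fabric.keys()))
```

The plug-and-play quality described in the abstract comes from the fact that each service only touches the data fabric and its own Strategy, so swapping an algorithm does not disturb other running services.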
The GOES-R Product Generation Architecture - Post CDR Update
NASA Astrophysics Data System (ADS)
Dittberner, G.; Kalluri, S.; Weiner, A.
2012-12-01
The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
GOES-R GS Product Generation Infrastructure Operations
NASA Astrophysics Data System (ADS)
Blanton, M.; Gundy, J.
2012-12-01
GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular, and reliable. The primary part of the PG infrastructure is the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages science algorithms that generate products. The SBA is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that allows algorithm services to know when data are available and to get the data they need to begin processing when they need it. Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
Response of ocean ecosystems to climate warming
NASA Astrophysics Data System (ADS)
Sarmiento, J. L.; Slater, R.; Barber, R.; Bopp, L.; Doney, S. C.; Hirst, A. C.; Kleypas, J.; Matear, R.; Mikolajewicz, U.; Monfray, P.; Soldatov, V.; Spall, S. A.; Stouffer, R.
2004-09-01
We examine six different coupled climate model simulations to determine the ocean biological response to climate warming between the beginning of the industrial revolution and 2050. We use vertical velocity, maximum winter mixed layer depth, and sea ice cover to define six biomes. Climate warming leads to a contraction of the highly productive marginal sea ice biome by 42% in the Northern Hemisphere and 17% in the Southern Hemisphere, and leads to an expansion of the low productivity permanently stratified subtropical gyre biome by 4.0% in the Northern Hemisphere and 9.4% in the Southern Hemisphere. In between these, the subpolar gyre biome expands by 16% in the Northern Hemisphere and 7% in the Southern Hemisphere, and the seasonally stratified subtropical gyre contracts by 11% in both hemispheres. The low-latitude (mostly coastal) upwelling biome area changes only modestly. Vertical stratification increases, which would be expected to decrease nutrient supply everywhere, but increase the growing season length in high latitudes. We use satellite ocean color and climatological observations to develop an empirical model for predicting chlorophyll from the physical properties of the global warming simulations. Four features stand out in the response to global warming: (1) a drop in chlorophyll in the North Pacific due primarily to retreat of the marginal sea ice biome, (2) a tendency toward an increase in chlorophyll in the North Atlantic due to a complex combination of factors, (3) an increase in chlorophyll in the Southern Ocean due primarily to the retreat of and changes at the northern boundary of the marginal sea ice zone, and (4) a tendency toward a decrease in chlorophyll adjacent to the Antarctic continent due primarily to freshening within the marginal sea ice zone. We use three different primary production algorithms to estimate the response of primary production to climate warming based on our estimated chlorophyll concentrations. The three algorithms give a global increase in primary production of 0.7% at the low end to 8.1% at the high end, with very large regional differences. The main cause of both the response to warming and the variation between algorithms is the temperature sensitivity of the primary production algorithms. We also show results for the period between the industrial revolution and 2050 and 2090.
NASA Technical Reports Server (NTRS)
Brenner, Anita C.; Zwally, H. Jay; Bentley, Charles R.; Csatho, Bea M.; Harding, David J.; Hofton, Michelle A.; Minster, Jean-Bernard; Roberts, LeeAnne; Saba, Jack L.; Thomas, Robert H.;
2012-01-01
The primary purpose of the GLAS instrument is to detect ice elevation changes over time, which are used to derive changes in ice volume. Other objectives include measuring sea ice freeboard, ocean and land surface elevation, surface roughness, and canopy heights over land. This Algorithm Theoretical Basis Document (ATBD) describes the theory and implementation behind the algorithms used to produce the level 1B products for waveform parameters and global elevation, and the level 2 products that are specific to ice sheet, sea ice, land, and ocean elevations, respectively. These output products are defined in detail, along with their associated quality and the constraints and assumptions used to derive them.
Validation of SMAP surface soil moisture products with core validation sites
USDA-ARS?s Scientific Manuscript database
The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diver...
NASA Astrophysics Data System (ADS)
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
NASA Technical Reports Server (NTRS)
Platt, Trevor; Sathyendranath, Shubha
1993-01-01
Various conclusions by Balch et al. (1992) about the current state of modeling primary production in the sea (lack of improvement in primary production models since 1957, utility of analytical models, and merits or weaknesses of complex models) are commented on. It is argued that since they are based on a false premise, these conclusions are not robust, and that the approach used by Balch et al. (the model of Platt and Sathyendranath, 1988) was inadequate for the question they set out to address. The present criticism is based mainly on the issue of whether implementation was correct with respect to parameter selection. It is concluded that the findings of Balch et al. with respect to the model of Platt and Sathyendranath are unreliable. Balch replies that satellite-derived estimates of primary production should be compared directly to those measured in situ in as many regions as possible. This will provide a first-order estimate of the magnitude of the error involved in estimating primary production from space.
Soil Moisture Active Passive Mission L4_C Data Product Assessment (Version 2 Validated Release)
NASA Technical Reports Server (NTRS)
Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima; Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas
2016-01-01
The SMAP satellite was successfully launched January 31, 2015, and began acquiring Earth observation data following in-orbit sensor calibration. Global data products derived from the SMAP L-band microwave measurements include Level 1 calibrated and geolocated radiometric brightness temperatures, Level 2-3 surface soil moisture and freeze/thaw geophysical retrievals mapped to a fixed Earth grid, and model-enhanced Level 4 data products for surface to root zone soil moisture and terrestrial carbon (CO2) fluxes. The post-launch SMAP mission Cal/Val Phase had two primary objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product pertaining to the validated release. The L4_C validated product release effectively replaces an earlier L4_C beta-product release (Kimball et al. 2015). The validated release described in this report incorporates a longer data record and benefits from algorithm and Cal/Val refinements acquired during the SMAP post-launch Cal/Val intensive period. The SMAP L4_C algorithms utilize a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily net ecosystem CO2 exchange (NEE) and component carbon fluxes for vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape freeze/thaw (FT) controls on GPP and respiration (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying FT and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems. There are no L1 science requirements for the L4_C product; however, self-imposed requirements have been established focusing on NEE as the primary product field for validation, and on demonstrating L4_C accuracy and success in meeting product science requirements (Jackson et al. 2012). The other L4_C product fields also have strong utility for carbon science applications; however, analysis of these other fields is considered secondary relative to primary validation activities focusing on NEE. The L4_C targeted accuracy requirements are to meet or exceed a mean unbiased accuracy (ubRMSE) for NEE of 1.6 g C/sq m/d or 30 g C/sq m/yr, emphasizing northern (>45 N) boreal and arctic ecosystems; this is similar to the estimated accuracy level of in situ tower eddy covariance measurement-based observations (Baldocchi 2008).
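A toy sketch of the carbon bookkeeping described above (NEE as the balance of ecosystem respiration and GPP under soil-moisture and freeze/thaw constraints); the functional forms and parameter values are illustrative assumptions, not the operational L4_C formulation:

```python
def l4c_style_nee(apar, eps_max, sm_scalar, ft_frozen, reco_base, tsoil_scalar):
    """Toy daily carbon-flux bookkeeping in the spirit of the L4_C description.

    apar         : absorbed PAR (MJ m^-2 d^-1)
    eps_max      : maximum light-use efficiency (g C MJ^-1), illustrative
    sm_scalar    : 0-1 soil-moisture constraint from SMAP retrievals
    ft_frozen    : True when the landscape freeze/thaw state is frozen
    reco_base    : unconstrained ecosystem respiration (g C m^-2 d^-1)
    tsoil_scalar : 0-1 temperature constraint on respiration
    Returns (gpp, reco, nee) in g C m^-2 d^-1; NEE > 0 is a source to the atmosphere.
    """
    gpp = 0.0 if ft_frozen else eps_max * sm_scalar * apar
    reco = reco_base * tsoil_scalar
    nee = reco - gpp
    return gpp, reco, nee

# Illustrative mid-summer boreal day
print(l4c_style_nee(apar=8.0, eps_max=1.1, sm_scalar=0.7, ft_frozen=False,
                    reco_base=4.0, tsoil_scalar=0.8))
```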
Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai
2016-01-01
Consumers' Kansei needs reflect their perception of a product and typically consist of a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights between every pair of Kansei adjectives as cell values when constructing the NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
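A compact sketch of the kind of clustering described above: a genetic algorithm searching cluster assignments over an NDSM of pairwise link weights. The chromosome encoding, the fitness (within-cluster link density above the matrix average), and the operators are our assumptions, not the paper's exact formulation:

```python
import random

def ga_cluster_ndsm(ndsm, n_clusters, pop_size=60, generations=200,
                    mutation_rate=0.05, seed=0):
    """Cluster a symmetric NDSM of Kansei link weights with a simple GA.

    Returns a list assigning each adjective index to a cluster label.
    """
    rng = random.Random(seed)
    n = len(ndsm)
    n_pairs = n * (n - 1) / 2
    mean_w = sum(ndsm[i][j] for i in range(n) for j in range(i + 1, n)) / n_pairs

    def fitness(labels):
        # Reward clusters whose internal links are denser than the matrix average
        return sum(ndsm[i][j] - mean_w for i in range(n) for j in range(i + 1, n)
                   if labels[i] == labels[j])

    def crossover(a, b):
        cut = rng.randrange(1, n)
        return a[:cut] + b[cut:]

    def mutate(labels):
        return [rng.randrange(n_clusters) if rng.random() < mutation_rate else c
                for c in labels]

    population = [[rng.randrange(n_clusters) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[:pop_size // 2]
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=fitness)

# Toy NDSM for four adjectives scored on a four-point scale
ndsm = [[0, 3, 1, 0],
        [3, 0, 1, 0],
        [1, 1, 0, 3],
        [0, 0, 3, 0]]
print(ga_cluster_ndsm(ndsm, n_clusters=2))
```

Subtracting the mean link weight keeps the search from collapsing everything into one cluster, a modularity-style choice made here for the sketch.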
NASA Technical Reports Server (NTRS)
Brown, Christopher; Subramaniam, Ajit; Culver, Mary; Brock, John C.
2000-01-01
Monitoring the health of U.S. coastal waters is an important goal of the National Oceanic and Atmospheric Administration (NOAA). Satellite sensors are capable of providing daily synoptic data of large expanses of the U.S. coast. Ocean color sensors, in particular, can be used to monitor the water quality of coastal waters on an operational basis. To appraise the validity of satellite-derived measurements, such as chlorophyll concentration, the bio-optical algorithms used to derive them must be evaluated in coastal environments. Towards this purpose, over 21 cruises in diverse U.S. coastal waters have been conducted. Of these 21 cruises, 12 have been performed in conjunction with and under the auspices of the NASA/SIMBIOS Project. The primary goal of these cruises has been to obtain in-situ measurements of downwelling irradiance, upwelling radiance, and chlorophyll concentrations in order to evaluate bio-optical algorithms that estimate chlorophyll concentration. In this Technical Memorandum, we evaluate the ability of five bio-optical algorithms, including the current SeaWiFS algorithm, to estimate chlorophyll concentration in surface waters of the South Atlantic Bight (SAB). The SAB consists of a variety of environments including coastal and continental shelf regimes, Gulf Stream waters, and the Sargasso Sea. The biological and optical characteristics of the region are complicated by temporal and spatial variability in phytoplankton composition, primary productivity, and the concentrations of colored dissolved organic matter (CDOM) and suspended sediment. As such, the SAB is an ideal location to test the robustness of algorithms for coastal use.
Estimation of crop gross primary production (GPP): fAPAR_chl versus MOD15A2 FPAR
USDA-ARS?s Scientific Manuscript database
Within leaf chloroplasts chlorophylls absorb photosynthetically active radiation (PAR) for photosynthesis (PSN). The MOD15A2 FPAR (fraction of PAR absorbed by canopy, i.e., fAPARcanopy) product has been widely used to compute absorbed PAR for PSN (APARPSN). The MOD17A2 algorithm uses MOD15A2 FPAR i...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manteuffel, T.A.
The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzman equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this reject will be the implementation of these algorithms on advanced architecture machines that reside at the Advancedmore » Computing Laboratory (ACL) at Los Alamos National Laboratories (LANL).« less
Dynamic ocean provinces: a multi-sensor approach to global marine ecophysiology
NASA Astrophysics Data System (ADS)
Dowell, M.; Campbell, J.; Moore, T.
The concept of oceanic provinces or domains has existed for well over a century. Such systems, whether real or only conceptual, provide a useful framework for understanding the mechanisms controlling biological, physical and chemical processes and their interactions. Criteria have been established for defining provinces based on physical forcings, availability of light and nutrients, complexity of the marine food web, and other factors. In general, such classification systems reflect the heterogeneous nature of the ocean environment, and the effort of scientists to comprehend the whole system by understanding its various homogeneous components. If provinces are defined strictly on the basis of geospatial or temporal criteria (e.g., latitude zones, bathymetry, or season), the resulting maps exhibit discontinuities that are uncharacteristic of the ocean. While this may be useful for many purposes, it is unsatisfactory in that it does not capture the dynamic nature of fluid boundaries in the ocean. Boundaries fixed in time and space do not allow us to observe interannual or longer-term variability (e.g., regime shifts) that may result from climate change. The current study illustrates the potential of using fuzzy logic as a means of classifying the ocean into objectively defined provinces using properties measurable from satellite sensors (MODIS and SeaWiFS). This approach accommodates the dynamic variability of provinces, which can be updated as each image is processed. We adopt this classification as the basis for parameterizing specific algorithms for each of the classes. Once the class-specific algorithms have been applied, retrievals are then recomposed into a single blended product based on the "weighted" fuzzy memberships. This will be demonstrated through animations of multi-year time-series of monthly composites of the individual classes or provinces. The provinces themselves are identified on the basis of global fields of chlorophyll, sea surface temperature and PAR, which will also subsequently be used to parameterize primary production (PP) algorithms. Two applications of the proposed dynamic classification are presented. The first applies different peer-reviewed PP algorithms to the different classes, objectively evaluates their performance to select the algorithm which performs best, and then merges results into a single primary production product. A second application illustrates the variability of P-I parameters in each province and analyzes province-specific variability in the quantum yield of photosynthesis. Finally, results illustrating how this approach is implemented in estimating global oceanic primary production are presented.
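A minimal sketch of the blending step described above, interpreting the "weighted" fuzzy memberships as a normalized membership-weighted average of the class-specific retrievals (an assumption about the exact recomposition rule):

```python
import numpy as np

def blend_by_membership(memberships, retrievals):
    """Blend class-specific retrievals into one product by fuzzy membership weights.

    memberships : (n_classes, ...) array of fuzzy membership grades per pixel
    retrievals  : (n_classes, ...) array of the class-specific algorithm outputs
    Returns the membership-weighted average for each pixel.
    """
    memberships = np.asarray(memberships, dtype=float)
    retrievals = np.asarray(retrievals, dtype=float)
    weights = memberships / memberships.sum(axis=0, keepdims=True)
    return (weights * retrievals).sum(axis=0)

# Two provinces, three pixels: illustrative chlorophyll retrievals (mg m^-3)
memberships = [[0.8, 0.5, 0.1],
               [0.2, 0.5, 0.9]]
retrievals = [[0.3, 0.4, 1.2],
              [0.5, 0.6, 2.0]]
print(blend_by_membership(memberships, retrievals))
```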
NASA Technical Reports Server (NTRS)
Brown, Christopher W.; Subramaniam, Ajit; Culver, Mary; Brock, John C.
2001-01-01
Monitoring the health of US coastal waters is an important goal of the National Oceanic and Atmospheric Administration (NOAA). Satellite sensors are capable of providing daily synoptic data of large expanses of the US coast. Ocean color sensors, in particular, can be used to monitor the water quality of coastal waters on an operational basis. To appraise the validity of satellite-derived measurements, such as chlorophyll concentration, the bio-optical algorithms used to derive them must be evaluated in coastal environments. Towards this purpose, over 21 cruises in diverse US coastal waters have been conducted. Of these 21 cruises, 12 have been performed in conjunction with and under the auspices of the NASA/Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project. The primary goal of these cruises has been to obtain in-situ measurements of downwelling irradiance, upwelling radiance, and chlorophyll concentrations in order to evaluate bio-optical algorithms that estimate chlorophyll concentration. In this Technical Memorandum, we evaluate the ability of five bio-optical algorithms, including the current Sea-Viewing Wide Field-of-view Sensor (SeaWiFS) algorithm, to estimate chlorophyll concentration in surface waters of the South Atlantic Bight (SAB). The SAB consists of a variety of environments including coastal and continental shelf regimes, Gulf Stream waters, and the Sargasso Sea. The biological and optical characteristics of the region are complicated by temporal and spatial variability in phytoplankton composition, primary productivity, and the concentrations of colored dissolved organic matter (CDOM) and suspended sediment. As such, the SAB is an ideal location to test the robustness of algorithms for coastal use.
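For context on the band-ratio chlorophyll algorithms being evaluated, here is a sketch in the style of the SeaWiFS OC4 maximum-band-ratio form; the polynomial coefficients shown are approximate, illustrative values rather than an operational calibration:

```python
import numpy as np

def oc4_style_chl(rrs443, rrs490, rrs510, rrs555,
                  a=(0.366, -3.067, 1.930, 0.649, -1.532)):
    """Maximum-band-ratio chlorophyll retrieval in the SeaWiFS OC4 style.

    rrs* : remote-sensing reflectances (sr^-1) in the SeaWiFS bands
    a    : fourth-order polynomial coefficients; approximate values for illustration
    Returns chlorophyll concentration (mg m^-3).
    """
    ratio = np.log10(np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555)
    poly = a[0] + a[1] * ratio + a[2] * ratio**2 + a[3] * ratio**3 + a[4] * ratio**4
    return 10.0 ** poly

# Illustrative reflectances for a moderately green coastal pixel
print(oc4_style_chl(np.array([0.004]), np.array([0.005]),
                    np.array([0.004]), np.array([0.003])))
```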
NASA Astrophysics Data System (ADS)
Stoykov, S.; Atanassov, E.; Margenov, S.
2016-10-01
Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of their stability are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves operations with both sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one provided by Intel MKL, and it is shown that better algorithms can be developed by considering the properties of the sparse matrix.
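For reference, a straightforward CSR-by-dense product, the kind of baseline such a specialized kernel is compared against; this is not the vectorized Xeon Phi algorithm the abstract describes:

```python
import numpy as np
from scipy.sparse import random as sparse_random

def csr_times_dense(A_csr, B):
    """Reference CSR sparse * dense product, one sparse row at a time."""
    n_rows = A_csr.shape[0]
    C = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        start, end = A_csr.indptr[i], A_csr.indptr[i + 1]
        for k in range(start, end):
            # Accumulate the contribution of nonzero A[i, indices[k]]
            C[i, :] += A_csr.data[k] * B[A_csr.indices[k], :]
    return C

A = sparse_random(200, 200, density=0.02, format="csr", random_state=0)
B = np.random.default_rng(0).standard_normal((200, 8))
assert np.allclose(csr_times_dense(A, B), A @ B)  # agrees with the library product
```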
Climatological Processing and Product Development for the TRMM Ground Validation Program
NASA Technical Reports Server (NTRS)
Marks, D. A.; Kulie, M. S.; Robinson, M.; Silberstein, D. S.; Wolff, D. B.; Ferrier, B. S.; Amitai, E.; Fisher, B.; Wang, J.; Augustine, D.;
2000-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. The main purpose of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented. The primary goal of TRMM GV is to provide basic validation of satellite-derived precipitation measurements over monthly climatologies for the following primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, research analysts at NASA Goddard Space Flight Center (GSFC) generate standardized rainfall products using quality-controlled ground-based radar data from the four primary GV sites. This presentation will provide an overview of TRMM GV climatological processing and product generation. A description of the data flow between the primary GV sites, NASA GSFC, and the TRMM Science and Data Information System (TSDIS) will be presented. The radar quality control algorithm, which features eight adjustable height and reflectivity parameters, and its effect on monthly rainfall maps will be described. The methodology used to create monthly, gauge-adjusted rainfall products for each primary site will also be summarized. The standardized monthly rainfall products are developed in discrete, modular steps with distinct intermediate products. A summary of recently reprocessed official GV rainfall products available for TRMM science users will be presented. Updated basic standardized product results involving monthly accumulation, Z-R relationships, and gauge statistics for each primary GV site will also be displayed.
Plumes and Blooms: Observations, Analysis and Modeling for SIMBIOS
NASA Technical Reports Server (NTRS)
Maritorena, S.; Siegel, D. A.; Nelson, N. B.
2004-01-01
The goal of the Plumes and Blooms (PnB) project is to develop, validate, and apply to imagery state-of-the-art ocean color algorithms for quantifying sediment plumes and phytoplankton blooms in the Case II environment of the Santa Barbara Channel. We conduct monthly to twice-monthly transect observations across the Santa Barbara Channel to build an algorithm development and product validation data set. A primary goal is to use the PnB field data set to objectively tune semi-analytical models of ocean color for this site and apply them using available satellite imagery (SeaWiFS and MODIS). However, the comparison between PnB field observations and satellite estimates of primary products has been disappointing. We find that field estimates of water-leaving radiance correspond poorly to satellite estimates for both SeaWiFS and MODIS local area coverage imagery. We believe this is due to poor atmospheric correction caused by the complex mixtures of aerosol types found in these near-coastal regions.
NASA Technical Reports Server (NTRS)
Goldman, Joel C.; Brink, Kenneth K.; Gawarkiewicz, Glen; Sosik, Heidi M.
1997-01-01
This research program was a collaborative effort to investigate the impact of rapid changes in the water column during coastal upwelling on biological and optical properties. These properties are important for constructing region- or event-specific algorithms for remote sensing of pigment concentration and primary productivity, and for comparing these algorithms with those used for the development of large-scale maps from ocean color. We successfully achieved the primary objective of this research project, which was to study in situ the dynamics of rapid spatial and temporal changes in properties of the water column during coastal upwelling off the Crimean Coast in the Black Sea. The work was a collaborative effort between a group of biological and physical oceanographers from the Woods Hole Oceanographic Institution and from two oceanographic research institutions in the Crimea, Ukraine, located near the study site, the Marine Hydrophysical Institute (MHI) and the Institute of Biology of the Southern Seas (IBSS). The site was an ideal experimental model, both from a technical and economic standpoint, because of the predictable summer upwelling that occurs in the region and because of the availability of both a ship on call and laboratory and remote sensing facilities at the nearby marine institutes. We used a combination of shipboard measurements and remote sensing to investigate the physical evolution of rapid upwelling events and their impact on phytoplankton and water column optical properties. The field work involved a two-day cruise for mooring deployment and a three-day baseline survey cruise, followed by an eleven-day primary cruise during a summer upwelling event (anticipated by monitoring local winds and tracked by remote sensing imagery). An MHI ship was outfitted and used for these purposes.
NASA Astrophysics Data System (ADS)
Saberi, S. J.; Weathers, K. C.; Norouzi, H.; Prakash, S.; Solomon, C.; Boucher, J. M.
2016-12-01
Lakes contribute to local and regional climate conditions, cycle nutrients, and are viable indicators of climate change due to their sensitivity to disturbances in their water and airsheds. Utilizing spaceborne remote sensing (RS) techniques has considerable potential in studying lake dynamics because it allows for coherent and consistent spatial and temporal observations as well as estimates of lake functions without in situ measurements. However, in order for RS products to be useful, algorithms that relate in situ measurements to RS data must be developed. Estimates of lake metabolic rates are of particular scientific interest since they are indicative of lakes' roles in carbon cycling and ecological function. Currently, there are few existing algorithms relating remote sensing products to in-lake estimates of metabolic rates and more in-depth studies are still required. Here we use satellite surface temperature observations from Moderate Resolution Imaging Spectroradiometer (MODIS) product (MYD11A2) and published in-lake gross primary production (GPP) estimates for eleven globally distributed lakes during a one-year period to produce a univariate quadratic equation model. The general model was validated using other lakes during an equivalent one-year time period (R2=0.76). The statistical analyses reveal significant positive relationships between MODIS temperature data and the previously modeled in-lake GPP. Lake-specific models for Lake Mendota (USA), Rotorua (New Zealand), and Taihu (China) showed stronger relationships than the general combined model, pointing to local influences such as watershed characteristics on in-lake GPP in some cases. These validation data suggest that the developed algorithm has a potential to predict lake GPP on a global scale.
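A sketch of the univariate quadratic fit described above, using hypothetical paired samples of MODIS MYD11A2 lake-surface temperature and in-lake GPP; the sample values and the GPP units are illustrative only:

```python
import numpy as np

# Hypothetical paired samples: lake-surface temperature (degC) from MYD11A2 and
# modeled in-lake GPP (mg C L^-1 d^-1); illustrative values, not the study's data
lst = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0])
gpp = np.array([0.05, 0.09, 0.20, 0.35, 0.55, 0.80, 1.05])

coeffs = np.polyfit(lst, gpp, deg=2)   # univariate quadratic model, as in the abstract
predict_gpp = np.poly1d(coeffs)

residuals = gpp - predict_gpp(lst)
r_squared = 1.0 - residuals.var() / gpp.var()
print(coeffs, r_squared)
```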
NASA Astrophysics Data System (ADS)
Jacox, M.; Edwards, C. A.; Kahru, M.; Rudnick, D. L.; Kudela, R. M.
2012-12-01
A 26-year record of depth-integrated primary productivity (PP) in the Southern California Current System (SCCS) is analyzed with the goal of improving satellite PP estimates. The ratio of integrated primary productivity to surface chlorophyll correlates strongly with surface chlorophyll concentration (chl0). However, chl0 does not correlate with chlorophyll-specific productivity, and appears to be a proxy for vertical phytoplankton distribution rather than phytoplankton physiology. Modest improvements in PP model performance are achieved by tuning existing algorithms for the SCCS, particularly by empirical parameterization of photosynthetic efficiency in the Vertically Generalized Production Model. Much larger improvements are enabled by improving the accuracy of subsurface chlorophyll and light profiles. In a simple vertically resolved production model, substitution of in situ surface data for remote sensing estimates offers only marginal improvements in model r2 and total log10 root mean squared difference, while inclusion of in situ chlorophyll and light profiles improves these metrics significantly. Autonomous underwater gliders, capable of measuring subsurface fluorescence on long-term, long-range deployments, significantly improve PP model fidelity in the SCCS. We suggest their use (and that of other autonomous profilers such as Argo floats) in conjunction with satellites as a way forward for improved PP estimation in coastal upwelling systems.
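The abstract tunes the Vertically Generalized Production Model (VGPM); below is a sketch of the VGPM's standard form with the photosynthetic-efficiency term left as the regionally tuned input. Treating pb_opt as the tuned parameter is our reading of the abstract, and the example values are illustrative:

```python
def vgpm_pp(chl0, pb_opt, e0, z_eu, day_length):
    """Vertically Generalized Production Model, with pb_opt as the tunable term.

    chl0       : surface chlorophyll (mg m^-3)
    pb_opt     : maximum chlorophyll-specific carbon fixation (mg C mg Chl^-1 h^-1),
                 the photosynthetic-efficiency term re-parameterized for the SCCS
    e0         : surface PAR (mol quanta m^-2 d^-1)
    z_eu       : euphotic depth (m)
    day_length : photoperiod (h)
    Returns depth-integrated production (mg C m^-2 d^-1).
    """
    light_term = e0 / (e0 + 4.1)   # VGPM's saturating light dependence
    return 0.66125 * pb_opt * light_term * chl0 * z_eu * day_length

# Illustrative moderately productive upwelling conditions
print(vgpm_pp(chl0=1.5, pb_opt=4.5, e0=45.0, z_eu=40.0, day_length=13.0))
```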
Ship and satellite bio-optical research in the California Bight
NASA Technical Reports Server (NTRS)
Smith, R. C.; Baker, K. S.
1982-01-01
Mesoscale biological patterns and processes in productive coastal waters were studied. The physical and biological processes leading to chlorophyll variability were investigated. The ecological and evolutionary significance of this variability, and its relation to the prediction of fish recruitment and marine mammal distributions was studied. Seasonal primary productivity (using chlorophyll as an indication of phytoplankton biomass) for the entire Southern California Bight region was assessed. Complementary and contemporaneous ship and satellite (Nimbus 7-CZCS) bio-optical data from the Southern California Bight and surrounding waters were obtained and analyzed. These data were also utilized for the development of multi-platform sampling strategies and the optimization of algorithms for the estimation of phytoplankton biomass and primary production from satellite imagery.
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas
2015-01-01
During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily NEE and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape FT controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.
Estimators of primary production for interpretation of remotely sensed data on ocean color
NASA Technical Reports Server (NTRS)
Platt, Trevor; Sathyendranath, Shubha
1993-01-01
The theoretical basis is explained for some commonly used estimators of daily primary production in a vertically uniform water column. These models are recast into a canonical form, with dimensionless arguments, to facilitate comparison with each other and with an analytic solution. The limitations of each model are examined. The values of the photoadaptation parameter I(k) observed in the ocean are analyzed, and I(k) is used as a scale to normalize the surface irradiance. The range of this scaled irradiance is presented. An equation is given for estimation of I(k) from recent light history. It is shown how the models for water column production can be adapted for estimation of the production in finite layers. The distinctions between model formulation, model implementation and model evaluation are discussed. Recommendations are given on the choice of algorithm for computation of daily production according to the degree of approximation acceptable in the result.
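A minimal sketch of the canonical form, in assumed notation rather than the report's exact symbols:

```latex
% Canonical daily, depth-integrated production for a vertically uniform water
% column (assumed notation; the dimensionless function f differs by estimator).
\[
  I_{*}^{m} = \frac{I_{0}^{m}}{I_{k}}, \qquad
  P_{Z,T} = \frac{B \, P_{m}^{B} \, D}{K} \, f\!\left(I_{*}^{m}\right)
\]
% I_0^m : noon surface irradiance;  I_k = P_m^B / alpha^B : photoadaptation
% parameter;  B : (uniform) biomass;  P_m^B : assimilation number;
% D : daylength;  K : diffuse attenuation coefficient.
```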
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam
2014-01-01
In this paper, a unique system-level spacecraft design optimization will be presented. A Genetic Algorithm is used to design the global pattern of the reinforcing structure, while a gradient routine is used to adequately stiffen the substructure. The system-level structural design includes determining the optimal physical location (and number) of reinforcing beams of a lunar pallet lander deck structure. Design of the substructure includes determining placement of secondary stiffeners and the number of rivets required for assembly. In this optimization, several considerations are taken into account. The primary objective was to raise the primary natural frequencies of the structure such that the Pallet Lander primary structure does not significantly couple with the launch vehicle. A secondary objective is to determine how to properly stiffen the reinforcing beams so that the beam web resists the shear buckling load imparted by the spacecraft components mounted to the pallet lander deck during launch and landing. A third objective is that the calculated stress does not exceed the allowable strength of the material. These design requirements must be met while minimizing the overall mass of the spacecraft. The final paper will discuss how the optimization was implemented as well as the results. While driven by optimization algorithms, the primary purpose of this effort was to demonstrate the capability of genetic algorithms to enable design automation in the preliminary design cycle. By developing a routine that can automatically generate designs through the use of Finite Element Analysis, considerable design efficiencies, both in time and overall product, can be obtained over more traditional brute-force design methods.
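As an illustration of the system-level idea only (not the paper's actual implementation), a genetic algorithm over a bit-string of candidate beam locations might look like the sketch below; the fitness surrogate and all numbers are assumptions standing in for a finite element evaluation.

```python
import random

# Illustrative GA sketch: evolve a bit-string in which each bit marks whether a
# candidate reinforcing-beam location on the deck is used.
N_LOCATIONS, POP, GENERATIONS = 12, 30, 40

def fitness(layout):
    # Hypothetical surrogate objective; a real run would call a finite element
    # solver for the natural frequency, web shear margins, and stresses.
    freq = 20.0 + 4.0 * sum(layout)          # stand-in for first natural frequency (Hz)
    mass = 2.5 * sum(layout)                 # stand-in for added beam mass (kg)
    penalty = 0.0 if freq >= 50.0 else 1e3   # launch-vehicle coupling requirement
    return -(mass + penalty)                 # maximize: meet frequency at minimum mass

def crossover(a, b):
    cut = random.randrange(1, N_LOCATIONS)
    return a[:cut] + b[cut:]

def mutate(layout, rate=0.05):
    return [1 - g if random.random() < rate else g for g in layout]

population = [[random.randint(0, 1) for _ in range(N_LOCATIONS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)     # rank by fitness
    parents = population[: POP // 2]               # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```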
NASA Astrophysics Data System (ADS)
Wimmer, G.
2008-01-01
In this paper we introduce two confidence and two prediction regions for statistical characterization of concentration measurements of product ions in order to discriminate various groups of persons for prospective better detection of primary lung cancer. Two MATLAB algorithms have been created for more adequate description of concentration measurements of volatile organic compounds in human breath gas for potential detection of primary lung cancer and for evaluation of the appropriate confidence and prediction regions.
The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.
2001-01-01
An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).
The GLAS Science Algorithm Software (GSAS) User's Guide Version 7
NASA Technical Reports Server (NTRS)
Lee, Jeffrey E.
2013-01-01
The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document is the final version of the GLAS Science Algorithm Software User's Guide. It contains the instructions to install the GLAS Science Algorithm Software (GSAS) in the production environment that was used to create the standard data products. It also describes the usage of each GSAS program in that environment with its required inputs and outputs. Included are a number of utility programs that are used to create ancillary data files that are used in the processing but generally are not distributed to the public as data products. Of particular importance, the values of the large number of constants used by the GSAS algorithms during processing are provided in an appendix.
Spatial scaling of net primary productivity using subpixel landcover information
NASA Astrophysics Data System (ADS)
Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.
2008-10-01
Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimations of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is performed on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP estimates obtained from calculations at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Applying the algorithm to estimates from a carbon-hydrology coupled model (BEPS-TerrainLab) made at 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
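The correction idea can be sketched as follows; the function name, cover fractions, and reference NPP values are illustrative assumptions, not the study's fitted correction functions.

```python
# Illustrative sketch of a subpixel land-cover correction: NPP computed for a
# coarse "homogeneous" pixel is rescaled using the cover fractions of the land
# cover types actually present within that pixel.
def corrected_npp(coarse_npp, dominant_type, cover_fractions, npp_by_type):
    """coarse_npp: NPP modeled as if the pixel were entirely `dominant_type`.
    cover_fractions: {cover type: fraction of pixel area, summing to 1}.
    npp_by_type: reference NPP for each cover type under the same climate."""
    weighted = sum(frac * npp_by_type[t] for t, frac in cover_fractions.items())
    correction = weighted / npp_by_type[dominant_type]
    return coarse_npp * correction

# Example: a 1-km pixel mapped as forest but actually 60% forest, 30% crop, 10% grass.
print(corrected_npp(620.0, "forest",
                    {"forest": 0.6, "crop": 0.3, "grass": 0.1},
                    {"forest": 650.0, "crop": 480.0, "grass": 300.0}))
```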
NASA Astrophysics Data System (ADS)
Uitz, Julia; Stramski, Dariusz; Gentili, Bernard; D'Ortenzio, Fabrizio; Claustre, Hervé
2012-06-01
An approach that combines a recently developed procedure for improved estimation of surface chlorophyll a concentration (Chlsurf) from ocean color and a phytoplankton class-specific bio-optical model was used to examine primary production in the Mediterranean Sea. Specifically, this approach was applied to the 10-year time series of satellite Chlsurf data from the Sea-viewing Wide Field-of-view Sensor. We estimated the primary production associated with three major phytoplankton classes (micro, nano, and picophytoplankton), which also yielded new estimates of the total primary production (Ptot). These estimates of Ptot (e.g., 68 g C m-2 yr-1 for the entire Mediterranean basin) are lower by a factor of ~2 and show a different seasonal cycle when compared with results from conventional approaches based on a standard ocean color chlorophyll algorithm and a non-class-specific primary production model. Nanophytoplankton are found to be dominant contributors to Ptot (43-50%) throughout the year and entire basin. Micro and picophytoplankton exhibit variable contributions to Ptot depending on the season and ecological regime. In the most oligotrophic regime, these contributions are relatively stable all year long with picophytoplankton (~32%) playing a larger role than microphytoplankton (~22%). In the blooming regime, picophytoplankton dominate over microphytoplankton most of the year, except during the spring bloom when microphytoplankton (27-38%) are considerably more important than picophytoplankton (20-27%).
A Parallel Algorithm for Contact in a Finite Element Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Timothy G.
A parallel algorithm is developed for contact/impact of multiple three dimensional bodies undergoing large deformation. As time progresses the relative positions of contact between the multiple bodies changes as collision and sliding occurs. The parallel algorithm is capable of tracking these changes and enforcing an impenetrability constraint and momentum transfer across the surfaces in contact. Portions of the various surfaces of the bodies are assigned to the processors of a distributed-memory parallel machine in an arbitrary fashion, known as the primary decomposition. A secondary, dynamic decomposition is utilized to bring opposing sections of the contacting surfaces together on the same processors, so that opposing forces may be balanced and the resultant deformation of the bodies calculated. The secondary decomposition is accomplished and updated using only local communication with a limited subset of neighbor processors. Each processor represents both a domain of the primary decomposition and a domain of the secondary, or contact, decomposition. Thus each processor has four sets of neighbor processors: (a) those processors which represent regions adjacent to it in the primary decomposition, (b) those processors which represent regions adjacent to it in the contact decomposition, (c) those processors which send it the data from which it constructs its contact domain, and (d) those processors to which it sends its primary domain data, from which they construct their contact domains. The latter three of these neighbor sets change dynamically as the simulation progresses. By constraining all communication to these sets of neighbors, all global communication, with its attendant nonscalable performance, is avoided. A set of tests are provided to measure the degree of scalability achieved by this algorithm on up to 1024 processors. Issues related to the operating system of the test platform which lead to some degradation of the results are analyzed. This algorithm has been implemented as the contact capability of the ALE3D multiphysics code, and is currently in production use.
A Model-based Approach to Scaling GPP and NPP in Support of MODIS Land Product Validation
NASA Astrophysics Data System (ADS)
Turner, D. P.; Cohen, W. B.; Gower, S. T.; Ritts, W. D.
2003-12-01
Global products from the Earth-orbiting MODIS sensor include land cover, leaf area index (LAI), FPAR, 8-day gross primary production (GPP), and annual net primary production (NPP) at the 1 km spatial resolution. The BigFoot Project was designed specifically to validate MODIS land products, and has initiated ground measurements at 9 sites representing a wide array of vegetation types. An ecosystem process model (Biome-BGC) is used to generate estimates of GPP and NPP for each 5 km x 5 km BigFoot site. Model inputs include land cover and LAI (from Landsat ETM+), daily meteorological data (from a centrally located eddy covariance flux tower), and soil characteristics. Model derived outputs are validated against field-measured NPP and flux tower-derived GPP. The resulting GPP and NPP estimates are then aggregated to the 1 km resolution for direct spatial comparison with corresponding MODIS products. At the high latitude sites (tundra and boreal forest), the MODIS GPP phenology closely tracks the BigFoot GPP, but there is a high bias in the MODIS GPP. In the temperate zone sites, problems with the timing and magnitude of the MODIS FPAR introduce differences in MODIS GPP compared to the validation data at some sites. However, the MODIS LAI/FPAR data are currently being reprocessed (=Collection 4) and new comparisons will be made for 2002. The BigFoot scaling approach permits precise overlap in spatial and temporal resolution between the MODIS products and BigFoot products, and thus permits the evaluation of specific components of the MODIS NPP algorithm. These components include meteorological inputs from the NASA Data Assimilation Office, LAI and FPAR from other MODIS algorithms, and biome-specific parameters for base respiration rate and light use efficiency.
USDA-ARS?s Scientific Manuscript database
The development of sensors that provide geospatial information on crop and soil conditions has been a primary success for precision agriculture. However, further developments are needed to integrate geospatial data into computer algorithms that spatially optimize crop production while considering po...
Empirical retrieval of sea spray aerosol production using satellite microwave radiometry
NASA Astrophysics Data System (ADS)
Savelyev, I. B.; Yelland, M. J.; Norris, S. J.; Salisbury, D.; Pascal, R. W.; Bettenhausen, M. H.; Prytherch, J.; Anguelova, M. D.; Brooks, I. M.
2017-12-01
This study presents a novel approach to obtaining global sea spray aerosol (SSA) production source term by relying on direct satellite observations of the ocean surface, instead of more traditional approaches driven by surface meteorology. The primary challenge in developing this empirical algorithm is to compile a calibrated, consistent dataset of SSA surface flux collected offshore over a variety of conditions (i.e., regions and seasons), thus representative of the global SSA production variability. Such dataset includes observations from SEASAW, HiWASE, and WAGES field campaigns, during which the SSA flux was measured from the bow of a research vessel using consistent and state-of-the-art eddy covariance methodology. These in situ data are matched to observations of the state of the ocean surface from Windsat polarimetric microwave satellite radiometer. Previous studies demonstrated the ability of WindSat to detect variations in surface waves slopes, roughness and foam, which led to the development of retrieval algorithms for surface wind vector and more recently whitecap fraction. Similarly, in this study, microwave emissions from the ocean surface are matched to and calibrated against in situ observations of the SSA production flux. The resulting calibrated empirical algorithm is applicable for retrieval of SSA source term throughout the duration of Windsat mission, from 2003 to present.
Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela
2016-01-01
Production planning models are attracting increasing interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms eligible to be selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains where producers are located in proximity either to processing plants or retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high-quality products and minimal time from farm to table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in a reasonable computational time. The results obtained show computational gains from our algorithmic proposals compared with using the plain CPLEX solver. Furthermore, the results demonstrate the competitive advantages of using the proposed model by purchase managers in the fresh vegetables industry.
NASA Astrophysics Data System (ADS)
Matheson, J.; Johnson, R. J.; Bates, N. R.; Parsons, R. J.
2016-02-01
Attempts to model primary production in the subsurface of the Sargasso Sea frequently use HPLC marker pigments to infer phytoplankton community structure, which relies upon assumptions about the phytoplankton community typically determined with limited site-specific data. Recent estimates suggest that nano- and picoplankton account for 90% of the phytoplankton community at BATS and factors such as elevated growth rates and high abundances likely allow these two size classes to exert a strong influence on primary production. To help assess the contribution of nano- and picoplankton on primary production at the BATS site we determine abundances and biovolumes through direct measurements with epifluorescence microscopy in conjunction with flow cytometer picoplankton counts. Using this approach we are able to quantify prymnesiophytes, heterotrophic nano- and dinoflagellates, mixotrophic dinoflagellates, ciliates, diatoms, pico- and nano eukaryotes, and Prochlorococcus. Preliminary analysis of summertime distributions show prymnesiophytes are the dominant nanoplankton group (average upper 140 m concentration of 500 cells ml-1) although heterotrophic nano- and dinoflagellates makeup a greater fraction of nanoplankton biovolume. During the summer period, pico-eukaryotes and Prochlorococcus were found to be the dominant picoplankton groups, which both increased with depth down to the deep chlorophyll maximum where they appear to drive variability. Using these direct observations we investigate the seasonal relationship between phytoplankton community and primary production, specifically by contrasting the stratified summer phase with a well-mixed winter system. Finally, we use these community structure observations with HPLC data to develop algorithms for taxonomy models (i.e. CHEMTAX) to assess modes of variability in phytoplankton community and consequential influences on primary production for the past 25 years at the BATS site.
Including Memory Friction in Single- and Two-State Quantum Dynamics Simulations.
Brown, Paul A; Messina, Michael
2016-03-03
We present a simple computational algorithm that allows for the inclusion of memory friction in a quantum dynamics simulation of a small, quantum, primary system coupled to many atoms in the surroundings. We show how including a memory friction operator, F̂, in the primary quantum system's Hamiltonian operator builds memory friction into the dynamics of the primary quantum system. We show that, in the harmonic, semi-classical limit, this friction operator causes the classical phase-space centers of a wavepacket to evolve exactly as if it were a classical particle experiencing memory friction. We also show that this friction operator can be used to include memory friction in the quantum dynamics of an anharmonic primary system. We then generalize the algorithm so that it can be used to treat a primary quantum system that is evolving, non-adiabatically on two coupled potential energy surfaces, i.e., a model that can be used to model H atom transfer, for example. We demonstrate this approach's computational ease and flexibility by showing numerical results for both harmonic and anharmonic primary quantum systems in the single surface case. Finally, we present numerical results for a model of non-adiabatic H atom transfer between a reactant and product state that includes memory friction on one or both of the non-adiabatic potential energy surfaces and uncover some interesting dynamical effects of non-memory friction on the H atom transfer process.
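A heavily simplified sketch of the construction, in assumed notation (the paper's exact form of the friction operator and kernel may differ):

```latex
% Assumed sketch: the primary-system Hamiltonian is augmented by a friction
% operator built from a memory kernel and the history of the wavepacket's
% mean position.
\[
  \hat{H}(t) = \hat{H}_{0} + \hat{F}(t), \qquad
  \hat{F}(t) = \hat{x} \int_{0}^{t} \gamma(t-t') \,
               \frac{d}{dt'}\big\langle \hat{x} \big\rangle(t') \, dt'
\]
% In the harmonic, semiclassical limit the phase-space center then obeys a
% generalized-Langevin-type equation,
%   m x''(t) + \int_0^t \gamma(t-t') x'(t') dt' + V'(x) = 0 .
```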
Coburn, T.C.; Freeman, P.A.; Attanasi, E.D.
2012-01-01
The primary objectives of this research were to (1) investigate empirical methods for establishing regional trends in unconventional gas resources as exhibited by historical production data and (2) determine whether or not incorporating additional knowledge of a regional trend in a suite of previously established local nonparametric resource prediction algorithms influences assessment results. Three different trend detection methods were applied to publicly available production data (well EUR aggregated to 80-acre cells) from the Devonian Antrim Shale gas play in the Michigan Basin. This effort led to the identification of a southeast-northwest trend in cell EUR values across the play that, in a very general sense, conforms to the primary fracture and structural orientations of the province. However, including this trend in the resource prediction algorithms did not lead to improved results. Further analysis indicated the existence of clustering among cell EUR values that likely dampens the contribution of the regional trend. The reason for the clustering, a somewhat unexpected result, is not completely understood, although the geological literature provides some possible explanations. With appropriate data, a better understanding of this clustering phenomenon may lead to important information about the factors and their interactions that control Antrim Shale gas production, which may, in turn, help establish a more general protocol for better estimating resources in this and other shale gas plays. ?? 2011 International Association for Mathematical Geology (outside the USA).
NASA Astrophysics Data System (ADS)
Jacox, Michael G.; Edwards, Christopher A.; Kahru, Mati; Rudnick, Daniel L.; Kudela, Raphael M.
2015-02-01
A 26-year record of depth integrated primary productivity (PP) in the Southern California Current System (SCCS) is analyzed with the goal of improving satellite net primary productivity (PP) estimates. Modest improvements in PP model performance are achieved by tuning existing algorithms for the SCCS, particularly by parameterizing carbon fixation rate in the vertically generalized production model as a function of surface chlorophyll concentration and distance from shore. Much larger improvements are enabled by improving the accuracy of subsurface chlorophyll and light profiles. In a simple vertically resolved production model for the SCCS (VRPM-SC), substitution of in situ surface data for remote sensing estimates offers only marginal improvements in model r2 (from 0.54 to 0.56) and total log10 root mean squared difference (from 0.22 to 0.21), while inclusion of in situ chlorophyll and light profiles improves these metrics to 0.77 and 0.15, respectively. Autonomous underwater gliders, capable of measuring subsurface properties on long-term, long-range deployments, significantly improve PP model fidelity in the SCCS. We suggest their use (and that of other autonomous profilers such as Argo floats) in conjunction with satellites as a way forward for large-scale improvements in PP estimation.
A conceptual approach to the masking effect of measures of disproportionality.
Maignen, Francois; Hauben, Manfred; Hung, Eric; Holle, Lionel Van; Dogne, Jean-Michel
2014-02-01
Masking is a statistical issue by which true signals of disproportionate reporting are hidden by the presence of other products in the database. Masking is currently not perfectly understood. There is no algorithm to identify the potential masking drugs to remove them for subsequent analyses of disproportionality. The primary objective of our study is to develop a mathematical framework for assessing the extent and impact of the masking effect of measures of disproportionality. We have developed a masking ratio that quantifies the masking effect of a given product. We have conducted a simulation study to validate our algorithm. The masking ratio is a measure of the strength of the masking effect whether the analysis is performed at the report or event level, and the manner in which reports are allocated to cells in the contingency table significantly impact the masking mechanisms. The reports containing both the product of interest and the masking product need to be handled appropriately. The proposed algorithm can use simplified masking provided that underlying assumptions (in particular the size of the database) are verified. For any event, the strongest masking effect is associated with the drug with the highest number of records (reports excluding the product of interest). Our study provides significant insights with practical implications for real-world pharmacovigilance that are supported by both real and simulated data. The public health impact of masking is still unknown. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-02-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
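A minimal sketch of a thin-plate-spline warp correction using SciPy is shown below; the comb point coordinates are made up and the sketch is illustrative only, not the NIF production code or its calibration data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Observed comb-fiducial positions on the distorted streak image are mapped to
# their known, undistorted positions; the fitted TPS mapping is then applied to
# coordinates of pixels in the data image to dewarp them.
observed = np.array([[10.2, 11.1], [50.7, 10.4], [90.9, 12.3],
                     [10.8, 50.6], [51.3, 49.9], [91.5, 51.0]])   # distorted comb points
reference = np.array([[10.0, 10.0], [50.0, 10.0], [90.0, 10.0],
                      [10.0, 50.0], [50.0, 50.0], [90.0, 50.0]])  # ideal comb grid

warp = RBFInterpolator(observed, reference, kernel="thin_plate_spline")

data_pixels = np.array([[30.5, 30.2], [70.1, 25.7]])
print(warp(data_pixels))    # corrected (dewarped) coordinates for data-image pixels
```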
NASA Technical Reports Server (NTRS)
Mitchell, B. Greg; Kahru, Mati; Marra, John (Technical Monitor)
2002-01-01
Support for this project was used to develop satellite ocean color and temperature indices (SOCTI) for the California Current System (CCS) using the historic record of the CZCS West Coast Time Series (WCTS), OCTS, SeaWiFS, and AVHRR SST. The ocean color satellite data have been evaluated in relation to CalCOFI data sets for chlorophyll (CZCS) and for ocean spectral reflectance and chlorophyll (OCTS and SeaWiFS). New algorithms for the three missions have been implemented based on in-water algorithm data sets, or in the case of CZCS, by comparing retrieved pigments with ship-based observations. New algorithms for absorption coefficients, diffuse attenuation coefficients and primary production have also been evaluated. Satellite retrievals are being evaluated based on our large data set of pigments and optics from CalCOFI.
NASA Astrophysics Data System (ADS)
Lund, M.; Zona, D.; Jackowicz-Korczynski, M.; Xu, X.
2017-12-01
The eddy covariance methodology is the primary tool for studying landscape-scale land-atmosphere exchange of greenhouse gases. Since the choice of instrumental setup and processing algorithms may influence the results, efforts within the international flux community have been made towards methodological harmonization and standardization. Performing eddy covariance measurements in high-latitude, Arctic tundra sites involves several challenges, related not only to remoteness and harsh climate conditions but also to the choice of processing algorithms. Partitioning of net ecosystem exchange (NEE) of CO2 into gross primary production (GPP) and ecosystem respiration (Reco) in the FLUXNET2015 dataset is made using either Nighttime or Daytime methods. These variables, GPP and Reco, are essential for calibration and validation of Earth system models. North of the Arctic Circle, sun remains visible at local midnight for a period of time, the number of days per year with midnight sun being dependent on latitude. The absence of nighttime conditions during Arctic summers renders the Nighttime method uncertain, however, no extensive assessment on the implications for flux partitioning has yet been made. In this study, we will assess the performance and validity of both partitioning methods along a latitudinal transect of northern sites included in the FLUXNET2015 dataset. We will evaluate the partitioned flux components against model simulations using the Community Land Model (CLM). Our results will be valuable for users interested in simulating Arctic and global carbon cycling.
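Both partitioning methods start from the same identity (written here under the usual micrometeorological sign convention, in which net uptake gives negative NEE):

```latex
% Partitioning identity assumed by both methods:
\[
  \mathrm{NEE} = R_{\mathrm{eco}} - \mathrm{GPP}
\]
% The Nighttime method fits a temperature response to nighttime NEE (when
% GPP = 0) and extrapolates R_eco into the day; the Daytime method fits a
% light-response curve to daytime NEE whose intercept provides R_eco. Under
% midnight sun there is no true nighttime data to anchor the first approach.
```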
NASA Astrophysics Data System (ADS)
Lee, Sang Heon; Ryu, Jongseong; Park, Jung-woo; Lee, Dabin; Kwon, Jae-Il; Zhao, Jingping; Son, SeungHyun
2018-03-01
The Bering and Chukchi seas are an important conduit to the Arctic Ocean and are reported to be one of the most productive regions in the world's oceans in terms of high primary productivity that sustains large numbers of fishes, marine mammals, and sea birds as well as benthic animals. Climate-induced changes in primary production and production at higher trophic levels also have been observed in the northern Bering and Chukchi seas. Satellite ocean color observations could enable the monitoring of relatively long-term patterns in chlorophyll-a (Chl-a) concentrations that would serve as an indicator of phytoplankton biomass. The performance of existing global and regional Chl-a algorithms for satellite ocean color data was investigated in the northeastern Bering Sea and southern Chukchi Sea using in situ optical measurements from the Healy 2007 cruise. The model-derived Chl-a data using the previous Chl-a algorithms present striking uncertainties regarding Chl-a concentrations, for example, overestimation at lower Chl-a concentrations or systematic overestimation in the northeastern Bering Sea and southern Chukchi Sea. Accordingly, a simple two-band ratio (Rrs(443)/Rrs(555)) algorithm of Chl-a for the satellite ocean color data was devised for the northeastern Bering Sea and southern Chukchi Sea. The MODIS-derived Chl-a data from July 2002 to December 2014 were produced using the new Chl-a algorithm to investigate the seasonal and interannual variations of Chl-a in the northern Bering Sea and the southern Chukchi Sea. The seasonal distribution of Chl-a shows that the highest (spring bloom) Chl-a concentrations are in May and the lowest are in July in the overall area. Chl-a concentrations decreased somewhat in June, particularly in the open ocean waters of the Bering Sea. The Chl-a concentrations start to increase again in August and become quite high in September. In October, Chl-a concentrations decreased in the western part of the study area and the Alaskan coastal waters. Strong interannual variations are shown in Chl-a concentrations in all areas. There is a slightly increasing trend in Chl-a concentrations in the northern Bering Strait (SECS). This increasing trend may be related to recent increases in the extent and duration of open waters due to the early breakup of sea ice and the late formation of sea ice in the Chukchi Sea.
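A band-ratio algorithm of this general form can be sketched as follows; the coefficients below are placeholders, not the values tuned for the northeastern Bering Sea and southern Chukchi Sea in this study.

```python
import numpy as np

# Illustrative two-band-ratio chlorophyll-a retrieval of the general form used
# in ocean color work: log10(Chl-a) as a polynomial in the blue/green ratio.
def chl_from_band_ratio(rrs_443, rrs_555, a0=0.3, a1=-2.5):
    ratio = np.log10(rrs_443 / rrs_555)     # blue-to-green remote-sensing reflectance ratio
    return 10.0 ** (a0 + a1 * ratio)        # Chl-a in mg m^-3

print(chl_from_band_ratio(0.004, 0.006))    # lower blue/green ratio -> higher Chl-a
```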
NASA Technical Reports Server (NTRS)
Hood, Raleigh R.
1995-01-01
A simplified, nonspectral derivation of a classical theory in plant physiology is presented and used to derive an absorption-based primary productivity algorithm. Field observations from a meridional transect (4 deg N to 42 deg S) in the Atlantic Ocean are then described and interpreted in this theoretical context. The observations include photosynthesis-irradiance curve parameters (alpha and P(sub max)), chlorophyll a and phaeopigment concentration, and estimated phytoplankton absorption coefficients at wavelength = 440 nm (a(sub ph)(440)). Observations near the top (50% I(sub 0)) and bottom (6% I(sub 0)) of the euphotic zone are contrasted. At both light levels, alpha, P(sub max), a(sub ph)(440), and pigment concentration varied similarly along the transect: values were highest at the equator and at the southern end of the transect and lowest in the central South Atlantic. It is concluded that this pattern was related to increased nutrient availability due to equatorial upwelling in the north, and increased wind mixing in the south. At the 50% light level, alpha increased relative to a(sub ph) at the southern end of the transect. This result appears to reflect a large-scale meridional (southward) increase in the average quantum efficiency of the photosynthetic units of the phytoplankton. A correlation analysis of the data reveals that at the 50% light level, variations in P(sub max) were more closely related to a(sub ph)(440) than chlorophyll concentration and that phytoplankton absorption explains 90% of the variability in P(sub max). In theory, this shows that the ratio of the average quantum efficiency of the photosynthetic units of the phytoplankton to the product of their average absorption cross section and turnover time is relatively constant. This result is used to simplify the absorption-based primary productivity algorithm derived previously. The feasibility of using this model to estimate production rate from satellite ocean color observations is discussed. It is concluded that an absorption-based algorithm should provide more accurate production rate estimates than one based upon chlorophyll (pigment) concentration.
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77 and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
Regional crop gross primary production and yield estimation using fused Landsat-MODIS data
NASA Astrophysics Data System (ADS)
He, M.; Kimball, J. S.; Maneta, M. P.; Maxwell, B. D.; Moreno, A.
2017-12-01
Accurate crop yield assessments using satellite-based remote sensing are of interest for the design of regional policies that promote agricultural resiliency and food security. However, current vegetation productivity algorithms derived from global satellite observations are generally too coarse to capture cropland heterogeneity. Merging information from sensors with reciprocal spatial and temporal resolution can improve the accuracy of these retrievals. In this study, we estimate annual crop yields for seven important crop types (alfalfa, barley, corn, durum wheat, peas, spring wheat, and winter wheat) over Montana, United States (U.S.), from 2008 to 2015. Yields are estimated as the product of gross primary production (GPP) and a crop-specific harvest index (HI) at 30 m spatial resolution. To calculate GPP we used a modified form of the MOD17 LUE algorithm driven by a 30 m 8-day fused NDVI dataset constructed by blending Landsat (5 or 7) and MODIS Terra reflectance data. The fused 30-m NDVI record shows good consistency with the original Landsat and MODIS data, but provides better spatiotemporal information on cropland vegetation growth. The resulting GPP estimates capture characteristic cropland patterns and seasonal variations, while the estimated annual 30 m crop yield results correspond favorably with county-level crop yield data (r=0.96, p<0.05). The estimated crop yield performance was generally lower, but still favorable in relation to field-scale crop yield surveys (r=0.42, p<0.01). Our methods and results are suitable for operational applications at regional scales.
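A schematic of the GPP-times-harvest-index workflow is sketched below; all parameter values are placeholders rather than the study's calibrated numbers, and the carbon-to-dry-matter conversion is an added assumption, not a step taken from the abstract.

```python
# Illustrative sketch of the yield workflow: a light-use-efficiency GPP model
# (MOD17-style) accumulated over the season, then scaled by a harvest index.
def daily_gpp(par, fpar, tmin_scalar, vpd_scalar, lue_max=1.0):
    """GPP = LUE * FPAR * PAR, with LUE (g C per MJ APAR) down-regulated by
    temperature and vapor pressure deficit scalars in [0, 1]."""
    lue = lue_max * tmin_scalar * vpd_scalar
    return lue * fpar * par                              # g C m^-2 d^-1

def annual_yield(daily_gpp_series, harvest_index=0.45, carbon_fraction=0.45):
    """Yield approximated as annual GPP times a crop-specific harvest index,
    converted from carbon to dry biomass (assumed ~45% carbon)."""
    annual_gpp = sum(daily_gpp_series)                   # g C m^-2 yr^-1
    return annual_gpp * harvest_index / carbon_fraction  # g dry matter m^-2 yr^-1

season = [daily_gpp(par=10.0, fpar=0.6, tmin_scalar=0.9, vpd_scalar=0.8)
          for _ in range(120)]
print(annual_yield(season))
```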
Validation of SMAP Surface Soil Moisture Products with Core Validation Sites
NASA Technical Reports Server (NTRS)
Colliander, A.; Jackson, T. J.; Bindlish, R.; Chan, S.; Das, N.; Kim, S. B.; Cosh, M. H.; Dunbar, R. S.; Dang, L.; Pashaian, L.;
2017-01-01
The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diverse conditions and locations. The estimation of the average soil moisture within the SMAP product grid pixels based on in situ measurements is more reliable when location-specific calibration of the sensors has been performed and there is adequate replication over the spatial domain, with an up-scaling function based on analysis using independent estimates of the soil moisture distribution. SMAP fulfilled these requirements through a collaborative CalVal Partner program. This paper presents the results from 34 candidate core validation sites for the first eleven months of the SMAP mission. As a result of the screening of the sites prior to the availability of SMAP data, out of the 34 candidate sites 18 sites fulfilled all the requirements at at least one of the resolution scales. The rest of the sites are used as secondary information in algorithm evaluation. The results indicate that the SMAP radiometer-based soil moisture data product meets its expected performance of 0.04 cu m/cu m volumetric soil moisture (unbiased root mean square error); the combined radar-radiometer product is close to its expected performance of 0.04 cu m/cu m, and the radar-based product meets its target accuracy of 0.06 cu m/cu m (the lengths of the combined and radar-based products are truncated to about 10 weeks because of the SMAP radar failure). Upon completing the intensive CalVal phase of the mission the SMAP project will continue to enhance the products in the primary and extended geographic domains, in co-operation with the CalVal Partners, by continuing the comparisons over the existing core validation sites and inclusion of candidate sites that can address shortcomings.
Consistency of Global Modis Aerosol Optical Depths over Ocean on Terra and Aqua Ceres SSF Datasets
NASA Technical Reports Server (NTRS)
Ignatov, Alexander; Minnis, Patrick; Miller, Walter F.; Wielicki, Bruce A.; Remer, Lorraine
2006-01-01
Aerosol retrievals over ocean from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Terra and Aqua platforms are available from the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint (SSF) datasets generated at NASA Langley Research Center (LaRC). Two aerosol products are reported side-by-side. The primary M product is generated by sub-setting and remapping the multi-spectral (0.47-2.1 micrometer) MODIS-produced oceanic aerosol (MOD04/MYD04 for Terra/Aqua) onto CERES footprints. M*D04 processing uses cloud screening and aerosol algorithms developed by the MODIS science team. The secondary AVHRR-like A product is generated in only two MODIS bands 1 and 6 (on Aqua, bands 1 and 7). The A processing uses the CERES cloud screening algorithm, and the NOAA/NESDIS glint identification and single-channel aerosol retrieval algorithms. The M and A products have been documented elsewhere and preliminarily compared using 2 weeks of global Terra CERES SSF Edition 1A data in which the M product was based on MOD04 collection 3. In this study, the comparisons between the M and A aerosol optical depths (AOD) in MODIS band 1 (0.64 micrometers), tau(sub 1M) and tau(sub 1A), are re-examined using 9 days of global CERES SSF Terra Edition 2A and Aqua Edition 1B data from 13-21 October 2002, and extended to include cross-platform comparisons. The M and A products on the new CERES SSF release are generated using the same aerosol algorithms as before, but with different preprocessing and sampling procedures, lending themselves to a simple sensitivity check to non-aerosol factors. Both tau(sub 1M) and tau(sub 1A) generally compare well across platforms. However, the M product shows some differences, which increase with ambient cloud amount and towards the solar side of the orbit. Three types of comparisons conducted in this study (cross-platform, cross-product, and cross-release) confirm the previously made observation that the major area for improvement in the current aerosol processing lies in a more formalized and standardized sampling (and most importantly, cloud screening), whereas optimization of the aerosol algorithm is deemed to be an important yet less critical element.
Revisiting the choice of the driving temperature for eddy covariance CO2 flux partitioning
Wohlfahrt, Georg; Galvagno, Marta
2017-01-01
So-called CO2 flux partitioning algorithms are widely used to partition the net ecosystem CO2 exchange into the two component fluxes, gross primary productivity and ecosystem respiration. Common CO2 flux partitioning algorithms conceptualize ecosystem respiration to originate from a single source, requiring the choice of a corresponding driving temperature. Using a conceptual dual-source respiration model, consisting of an above- and a below-ground respiration source each driven by a corresponding temperature, we demonstrate that the typical phase shift between air and soil temperature gives rise to a hysteresis relationship between ecosystem respiration and temperature. The hysteresis proceeds in a clockwise fashion if soil temperature is used to drive ecosystem respiration, while a counter-clockwise response is observed when ecosystem respiration is related to air temperature. As a consequence, nighttime ecosystem respiration is smaller than daytime ecosystem respiration when referenced to soil temperature, while the reverse is true for air temperature. We confirm these qualitative modelling results using measurements of day and night ecosystem respiration made with opaque chambers in a short-statured mountain grassland. Inferring daytime from nighttime ecosystem respiration or vice versa, as attempted by CO2 flux partitioning algorithms, using a single-source respiration model is thus an oversimplification resulting in biased estimates of ecosystem respiration. We discuss the likely magnitude of the bias, options for minimizing it and conclude by emphasizing that the systematic uncertainty of gross primary productivity and ecosystem respiration inferred through CO2 flux partitioning needs to be better quantified and reported. PMID:28439145
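The hysteresis mechanism can be reproduced with a toy dual-source model; the Q10 form and every parameter value below are illustrative assumptions standing in for the paper's conceptual model.

```python
import math

# Conceptual dual-source respiration sketch: above-ground respiration follows
# air temperature, below-ground respiration follows soil temperature, which is
# damped and lags air temperature by a few hours, producing hysteresis when
# ecosystem respiration is plotted against either temperature alone.
def q10_resp(base_rate, temp, q10=2.0, t_ref=10.0):
    return base_rate * q10 ** ((temp - t_ref) / 10.0)

for hour in range(0, 24, 3):
    t_air = 15.0 + 8.0 * math.sin(2 * math.pi * (hour - 9) / 24)    # peaks mid-afternoon
    t_soil = 14.0 + 4.0 * math.sin(2 * math.pi * (hour - 13) / 24)  # damped, lagged
    r_eco = q10_resp(1.5, t_air) + q10_resp(2.0, t_soil)
    print(f"{hour:02d}:00  Tair={t_air:5.1f}  Tsoil={t_soil:5.1f}  Reco={r_eco:5.2f}")
```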
Climatological Processing of Radar Data for the TRMM Ground Validation Program
NASA Technical Reports Server (NTRS)
Kulie, Mark; Marks, David; Robinson, Michael; Silberstein, David; Wolff, David; Ferrier, Brad; Amitai, Eyal; Fisher, Brad; Wang, Jian-Xin; Augustine, David;
2000-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November, 1997. The main purpose of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented. The primary goal of TRMM GV is to provide basic validation of satellite-derived precipitation measurements over monthly climatologies for the following primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, research analysts at NASA Goddard Space Flight Center (GSFC) generate standardized TRMM GV products using quality-controlled ground-based radar data from the four primary GV sites as input. This presentation will provide an overview of the TRMM GV climatological processing system. A description of the data flow between the primary GV sites, NASA GSFC, and the TRMM Science and Data Information System (TSDIS) will be presented. The radar quality control algorithm, which features eight adjustable height and reflectivity parameters, and its effect on monthly rainfall maps will be described. The methodology used to create monthly, gauge-adjusted rainfall products for each primary site will also be summarized. The standardized monthly rainfall products are developed in discrete, modular steps with distinct intermediate products. These developmental steps include: (1) extracting radar data over the locations of rain gauges, (2) merging rain gauge and radar data in time and space with user-defined options, (3) automated quality control of radar and gauge merged data by tracking accumulations from each instrument, and (4) deriving Z-R relationships from the quality-controlled merged data over monthly time scales. A summary of recently reprocessed official GV rainfall products available for TRMM science users will be presented. Updated basic standardized product results and trends involving monthly accumulation, Z-R relationship, and gauge statistics for each primary GV site will be also displayed.
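The final step described above, deriving a Z-R relationship from the quality-controlled merged data, can be sketched as a power-law fit in log space; the reflectivity and rain-rate values below are made up for illustration and are not TRMM GV data.

```python
import numpy as np

# Illustrative monthly Z-R derivation (Z = a * R^b) from gauge-matched radar data.
z_dbz = np.array([25.0, 30.0, 35.0, 40.0, 45.0])   # radar reflectivity over gauge sites (dBZ)
rain = np.array([1.1, 2.6, 6.0, 13.5, 31.0])       # matched gauge rain rates (mm/h)

z_linear = 10.0 ** (z_dbz / 10.0)                  # convert dBZ to linear units (mm^6 m^-3)
b, log_a = np.polyfit(np.log(rain), np.log(z_linear), 1)   # ln Z = ln a + b ln R
a = np.exp(log_a)

print(f"Z = {a:.0f} * R^{b:.2f}")                  # fitted coefficients of the Z-R relation
```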
The Collection 6 'dark-target' MODIS Aerosol Products
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Mattoo, Shana; Munchak, Leigh A.; Kleidman, Richard G.; Patadia, Falguni; Gupta, Pawan; Remer, Lorraine
2013-01-01
Aerosol retrieval algorithms are applied to Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on both Terra and Aqua, creating two streams of decade-plus aerosol information. Products of aerosol optical depth (AOD) and aerosol size are used for many applications, but the primary concern is that these global products are comprehensive and consistent enough for use in climate studies. One of our major customers is the international modeling comparison study known as AEROCOM, which relies on the MODIS data as a benchmark. In order to keep up with the needs of AEROCOM and other MODIS data users, while utilizing new science and tools, we have improved the algorithms and products. The code, and the associated products, will be known as Collection 6 (C6). While not a major overhaul from the previous Collection 5 (C5) version, there are enough changes that there are significant impacts on the products and their interpretation. In its entirety, the C6 algorithm comprises three sub-algorithms for retrieving aerosol properties over different surfaces: the dark-target (DT) algorithms to retrieve over (1) ocean and (2) vegetated or dark-soiled land, plus (3) the Deep Blue (DB) algorithm, originally developed to retrieve over desert and arid land. Focusing on the two DT algorithms, we have updated assumptions for central wavelengths, Rayleigh optical depths, and gas (H2O, O3, CO2, etc.) absorption corrections, while relaxing the solar zenith angle limit (up to 84 degrees) to increase pole-ward coverage. For DT-land, we have updated the cloud mask to allow heavy smoke retrievals, fine-tuned the assignments for aerosol type as a function of season and location, corrected bugs in the Quality Assurance (QA) logic, and added diagnostic parameters such as topographic altitude. For DT-ocean, improvements include a revised cloud mask for thin-cirrus detection, inclusion of wind speed dependence in the retrieval, updates to the logic of QA Confidence flag (QAC) assignment, and additions of important diagnostic information. At the same time as we have introduced algorithm changes, we have also accounted for upstream changes including new instrument calibration, revised land-sea masking, and changed cloud masking. Upstream changes also impact the coverage and global statistics of the retrieved AOD. Although our responsibility is to the DT code and products, we have also added a product that merges the DT and DB products over semi-arid land surfaces to provide a more gap-free dataset, primarily for visualization purposes. Preliminary validation shows that compared to surface-based sunphotometer data, the C6, Level 2 (along swath) DT products compare at least as well as those from C5. C6 will include new diagnostic information about clouds in the aerosol field, including an aerosol cloud mask at 500 m resolution, and calculations of the distance to the nearest cloud from clear pixels. Finally, we have revised the strategy for aggregating and averaging the Level 2 (swath) data to become Level 3 (gridded) data. Altogether, the changes to the DT algorithms will result in reduced global AOD (by 0.02) over ocean and increased AOD (by 0.02) over land, along with changes in spatial coverage. Changes in calibration will have more impact on Terra's time series, especially over land. This will result in a significant reduction in artificial differences between the Terra and Aqua datasets, and will stabilize the MODIS data as a target for AEROCOM studies.
Using simple environmental variables to estimate below-ground productivity in grasslands
Gill, R.A.; Kelly, R.H.; Parton, W.J.; Day, K.A.; Jackson, R.B.; Morgan, J.A.; Scurlock, J.M.O.; Tieszen, L.L.; Castle, J.V.; Ojima, D.S.; Zhang, X.S.
2002-01-01
In many temperate and annual grasslands, above-ground net primary productivity (NPP) can be estimated by measuring peak above-ground biomass. Estimates of below-ground net primary productivity and, consequently, total net primary productivity, are more difficult. We addressed one of the three main objectives of the Global Primary Productivity Data Initiative for grassland systems to develop simple models or algorithms to estimate missing components of total system NPP. Any estimate of below-ground NPP (BNPP) requires an accounting of total root biomass, the percentage of living biomass and annual turnover of live roots. We derived a relationship using above-ground peak biomass and mean annual temperature as predictors of below-ground biomass (r2 = 0.54; P = 0.01). The percentage of live material was 0.6, based on published values. We used three different functions to describe root turnover: constant, a direct function of above-ground biomass, or as a positive exponential relationship with mean annual temperature. We tested the various models against a large database of global grassland NPP and the constant turnover and direct function models were approximately equally descriptive (r2 = 0.31 and 0.37), while the exponential function had a stronger correlation with the measured values (r2 = 0.40) and had a better fit than the other two models at the productive end of the BNPP gradient. When applied to extensive data we assembled from two grassland sites with reliable estimates of total NPP, the direct function was most effective, especially at lower productivity sites. We provide some caveats for its use in systems that lie at the extremes of the grassland gradient and stress that there are large uncertainties associated with measured and modelled estimates of BNPP.
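The accounting described above can be sketched as follows; the regression coefficients and turnover value are placeholders (only the 0.6 live fraction is taken from the abstract), so this is an illustrative form rather than the study's fitted model.

```python
# Illustrative sketch of BNPP = below-ground biomass * live fraction * root turnover.
def belowground_biomass(agb_peak, mat, b0=100.0, b1=1.2, b2=15.0):
    """Hypothetical linear predictor of below-ground biomass (g m^-2) from peak
    above-ground biomass (g m^-2) and mean annual temperature (deg C)."""
    return b0 + b1 * agb_peak + b2 * mat

def bnpp(agb_peak, mat, live_fraction=0.6, turnover_per_year=0.5):
    """Annual below-ground net primary productivity (same units as biomass, per year).
    Turnover could instead be tied to above-ground biomass or to temperature, as in
    the alternative formulations compared in the study."""
    return belowground_biomass(agb_peak, mat) * live_fraction * turnover_per_year

print(bnpp(agb_peak=250.0, mat=8.0))
```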
FLUXNET to MODIS: Connecting the dots to capture heterogeneous biosphere metabolism
NASA Astrophysics Data System (ADS)
Woods, K. D.; Schwalm, C.; Huntzinger, D. N.; Massey, R.; Poulter, B.; Kolb, T.
2015-12-01
Eddy covariance flux towers provide our most widely distributed network of direct observations for land-atmosphere carbon exchange. Carbon flux sensitivity analysis is a method that uses in situ networks to understand how ecosystems respond to changes in climatic variables. Flux towers concurrently observe key ecosystem metabolic processes (e.g., gross primary productivity) and micrometeorological variation, but only over small footprints. Remotely sensed vegetation indices from MODIS offer continuous observations of the vegetated land surface, but are less direct, as they are based on light-use-efficiency algorithms rather than on ground-based observations. The marriage of these two data products offers an opportunity to validate remotely sensed indices with in situ observations and translate information derived from tower sites to globally gridded products. Here we provide correlations of the Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and MODIS gross primary production with FLUXNET-derived estimates of gross primary production, respiration, and net ecosystem exchange. We demonstrate remotely sensed vegetation products which have been transformed to gridded estimates of terrestrial biosphere metabolism on a regional-to-global scale. We demonstrate anomalies in gross primary production, respiration, and net ecosystem exchange as predicted by both MODIS-carbon flux sensitivities and meteorological driver-carbon flux sensitivities. We apply these sensitivities to recent extreme climatic events and demonstrate both our ability to capture changes in biosphere metabolism, and differences in the calculation of carbon flux anomalies based on method. The quantification of co-variation in these two methods of observation is important as it informs both how remotely sensed vegetation indices are correlated with on-the-ground tower observations, and with what certainty we can expand these observations and relationships.
NASA Astrophysics Data System (ADS)
Jerg, M.; Stengel, M.; Hollmann, R.; Poulsen, C.
2012-04-01
The ultimate objective of the ESA Climate Change Initiative (CCI) Cloud project is to provide long-term coherent cloud property data sets exploiting and improving on the synergetic capabilities of past, existing, and upcoming European and American satellite missions. The synergetic approach allows not only for improved accuracy and extended temporal and spatial sampling of retrieved cloud properties beyond what single instruments alone can provide, but potentially also for improved (inter-)calibration and enhanced homogeneity and stability of the derived time series. Such advances are required by the scientific community to facilitate further progress in satellite-based climate monitoring, which leads to a better understanding of climate. Some of the primary objectives of ESA Cloud CCI are (1) the development of inter-calibrated radiance data sets, so-called Fundamental Climate Data Records, for ESA and non-ESA instruments through an international collaboration; (2) the development of an optimal-estimation-based retrieval framework for cloud-related essential climate variables like cloud cover, cloud top height and temperature, and liquid and ice water path; and (3) the development of two multi-annual global data sets for the mentioned cloud properties including uncertainty estimates. These two data sets are characterized by different combinations of satellite systems: the AVHRR heritage product comprising (A)ATSR, AVHRR and MODIS, and the novel (A)ATSR - MERIS product, which is based on a synergetic retrieval using both instruments. Both data sets cover the years 2007-2009 in the first project phase. ESA Cloud CCI will also carry out a comprehensive validation of the cloud property products and provide a common database within the framework of the Global Energy and Water Cycle Experiment (GEWEX). The presentation will give an overview of the ESA Cloud CCI project, its goals and approaches, and then continue with results from the Round Robin algorithm comparison exercise carried out at the beginning of the project, which included three algorithms. The purpose of the exercise was to assess and compare existing cloud retrieval algorithms in order to choose one of them as the backbone of the retrieval system, and also to identify areas of potential improvement and the general strengths and weaknesses of the algorithms. Furthermore, the presentation will elaborate on the optimal estimation algorithm subsequently chosen, which is presently being developed further and will be employed for the AVHRR heritage product. The algorithm's capability to coherently and simultaneously process all radiative input and yield retrieval parameters together with associated uncertainty estimates will be presented together with first results for the heritage product. In the course of the project the algorithm is being developed into a freely and publicly available community retrieval system for interested scientists.
Operational Processing of Ground Validation Data for the Tropical Rainfall Measuring Mission
NASA Technical Reports Server (NTRS)
Kulie, Mark S.; Robinson, Mike; Marks, David A.; Ferrier, Brad S.; Rosenfeld, Danny; Wolff, David B.
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. A primary goal of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented for this mission. A key component of GV is the analysis and quality control of meteorological ground-based radar data from four primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, the Joint Center for Earth Systems Technology (JCET) at the University of Maryland, Baltimore County, has been tasked with developing and implementing an operational system to quality control (QC), archive, and provide data for subsequent rainfall product generation from the four primary GV sites. This paper provides an overview of the JCET operational environment. A description of the QC algorithm and its performance, in addition to the data flow procedure between JCET and the TRMM Science and Data Information System (TSDIS), is presented. The impact of quality-controlled data on higher-level rainfall and reflectivity products will also be addressed. Finally, a brief description of JCET's expanded role in producing reference rainfall products will be discussed.
NASA Astrophysics Data System (ADS)
Liu, S.; Zhuang, Q.
2016-12-01
Climatic change affects plant physiological and biogeochemical processes, and therefore ecosystem water use efficiency (WUE). A comprehensive understanding of WUE would therefore help us understand the adaptability of ecosystems to variable climate conditions. Tree ring data have great potential for addressing forest responses to climatic change compared with mechanistic model simulations, eddy flux measurements and manipulative experiments. Here, we collected tree ring isotopic carbon data from 12 boreal forest sites to develop a multiple linear regression model, and the model was extrapolated to the whole boreal region to obtain the spatial and temporal variation of WUE from 1948 to 2010. Two algorithms were also used to estimate inter-annual gross primary productivity (GPP) based on our derived WUE. Our results demonstrated that most boreal regions showed a significant increasing WUE trend during the period, except parts of Alaska. The spatially averaged annual mean WUE was predicted to increase by 13%, from 2.3±0.4 g C kg-1 H2O in 1948 to 2.6±0.7 g C kg-1 H2O in 2012, which is much higher than estimates from other land surface models. GPP predicted by the WUE-definition algorithm was comparable with site observations, while GPP estimated with the revised light use efficiency algorithm was higher than both site observations and land surface models. In addition, the increasing GPP trends from the two algorithms were similar to land surface model simulations. This is the first study to evaluate regional WUE and GPP in forest ecosystems based on tree ring data; future work should consider other variables (elevation, nitrogen deposition) that influence tree ring isotopic signals, and the dual-isotope approach may help improve prediction of the inter-annual WUE variation.
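Under the WUE-definition algorithm mentioned above, GPP follows directly from WUE and evapotranspiration (WUE = GPP/ET); the values below are illustrative, not the study's data.

# Hypothetical annual values: WUE in g C per kg H2O, ET in kg H2O m-2 yr-1.
wue_tree_ring = 2.6      # WUE inferred from tree-ring carbon isotopes (example value)
et_annual = 350.0        # evapotranspiration from an independent forcing data set

gpp_annual = wue_tree_ring * et_annual   # g C m-2 yr-1, since WUE = GPP / ET
print(f"GPP ~ {gpp_annual:.0f} g C m-2 yr-1")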
Teaching Computation in Primary School without Traditional Written Algorithms
ERIC Educational Resources Information Center
Hartnett, Judy
2015-01-01
Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.
2015-01-12
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
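A thin-plate-spline warp can be fit from the measured comb-feature positions to their known true positions and then applied to data-image coordinates; the sketch below uses SciPy's generic TPS interpolator with made-up coordinates, not the NIF production code.

import numpy as np
from scipy.interpolate import RBFInterpolator

# Distorted positions of calibration comb features measured on the streak image
# and their known true (undistorted) positions; values are illustrative.
measured = np.array([[10.2, 5.1], [50.7, 5.3], [90.9, 4.8],
                     [10.5, 60.2], [51.1, 60.8], [91.4, 59.7]], dtype=float)
true_pos = np.array([[10.0, 5.0], [50.0, 5.0], [90.0, 5.0],
                     [10.0, 60.0], [50.0, 60.0], [90.0, 60.0]], dtype=float)

# Thin-plate-spline mapping from distorted to true coordinates.
tps = RBFInterpolator(measured, true_pos, kernel='thin_plate_spline')

# Apply the warp correction to arbitrary pixel coordinates from a data image.
pixels = np.array([[30.0, 30.0], [70.0, 45.0]])
print(tps(pixels))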
Diagnosis of paediatric HIV infection in a primary health care setting with a clinical algorithm.
Horwood, C.; Liebeschuetz, S.; Blaauw, D.; Cassol, S.; Qazi, S.
2003-01-01
OBJECTIVE: To determine the validity of an algorithm used by primary care health workers to identify children with symptomatic human immunodeficiency virus (HIV) infection. This HIV algorithm is being implemented in South Africa as part of the Integrated Management of Childhood Illness (IMCI), a strategy that aims to improve childhood morbidity and mortality by improving care at the primary care level. As AIDS is a leading cause of death in children in southern Africa, diagnosis and management of symptomatic HIV infection was added to the existing IMCI algorithm. METHODS: In total, 690 children who attended the outpatients department in a district hospital in South Africa were assessed with the HIV algorithm and by a paediatrician. All children were then tested for HIV viral load. The validity of the algorithm in detecting symptomatic HIV was compared with clinical diagnosis by a paediatrician and the result of an HIV test. Detailed clinical data were used to improve the algorithm. FINDINGS: Overall, 198 (28.7%) enrolled children were infected with HIV. The paediatrician correctly identified 142 (71.7%) children infected with HIV, whereas the IMCI/HIV algorithm identified 111 (56.1%). Odds ratios were calculated to identify predictors of HIV infection and used to develop an improved HIV algorithm that is 67.2% sensitive and 81.5% specific in clinically detecting HIV infection. CONCLUSIONS: Children with symptomatic HIV infection can be identified effectively by primary level health workers through the use of an algorithm. The improved HIV algorithm developed in this study could be used by countries with high prevalences of HIV to enable IMCI practitioners to identify and care for HIV-infected children. PMID:14997238
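The headline sensitivities quoted above follow directly from the reported counts; a minimal check:

# Sensitivity of clinical detection, using counts reported in the abstract.
hiv_positive = 198
detected_paediatrician = 142
detected_algorithm = 111

print(f"Paediatrician sensitivity: {detected_paediatrician / hiv_positive:.1%}")   # ~71.7%
print(f"IMCI/HIV algorithm sensitivity: {detected_algorithm / hiv_positive:.1%}")  # ~56.1%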
Improved Soundings and Error Estimates using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
Estimation of Carbon Flux of Forest Ecosystem over Qilian Mountains by BIOME-BGC Model
NASA Astrophysics Data System (ADS)
Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei
2014-11-01
The gross primary production (GPP) and net ecosystem exchange (NEE) are important indicators of carbon fluxes. This study aims at evaluating the forest GPP and NEE over the Qilian Mountains at large scale using meteorological, remotely sensed and other ancillary data. To realize this, the widely used ecological-process-based model, Biome-BGC, and the remote-sensing-based model, the MODIS GPP algorithm, were selected for the simulation of the forest carbon fluxes. The combination of these two models was based on calibrating the Biome-BGC with the optimized MODIS GPP algorithm. The simulated GPP and NEE values were evaluated against the eddy covariance observed GPPs and NEEs, and good agreement was reached, with R2 = 0.76 and 0.67, respectively.
Oceanographic applications of laser technology
NASA Technical Reports Server (NTRS)
Hoge, F. E.
1988-01-01
Oceanographic activities with the Airborne Oceanographic Lidar (AOL) over the past several years have primarily focused on using active (laser-induced pigment fluorescence) and concurrent passive ocean color spectra to improve existing ocean color algorithms for estimating primary production in the world's oceans. The most significant results were the development of a technique for selecting optimal passive wavelengths for recovering phytoplankton photopigment concentration and the application of this technique, termed active-passive correlation spectroscopy (APCS), to various forms of passive ocean color algorithms. Included in this activity is the use of airborne laser and passive ocean color data for the development of advanced satellite ocean color sensors. Promising on-wavelength subsurface scattering layer measurements were recently obtained. A partial summary of these results is presented.
Assistant for Analyzing Tropical-Rain-Mapping Radar Data
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document describes an approach for a Tropical Rain Mapping Radar Data System (TDS). The TDS is composed of software and hardware elements incorporating a two-frequency spaceborne radar system for measuring tropical precipitation. The TDS would be used primarily in generating data products for scientific investigations. The most novel part of the TDS would be expert-system software to aid in the selection of algorithms for converting raw radar-return data into such primary observables as rain rate, path-integrated rain rate, and surface backscatter. The expert-system approach addresses the issue that selecting algorithms for processing the data requires a significant amount of preprocessing, non-intuitive reasoning, and heuristic application, making it infeasible, in many cases, to select the proper algorithm in real time. In the TDS, tentative selections would be made to enable conversions in real time. The expert system would remove straightforwardly convertible data from further consideration, and would examine ambiguous data, performing analysis in depth to determine which algorithms to select. Conversions performed by these algorithms, presumed to be correct, would be compared with the corresponding real-time conversions. Incorrect real-time conversions would be updated using the correct conversions.
Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan
2015-02-05
Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is for the provision of individual patient care; validation and examination of underlying limitations is crucial for use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV) with all scores being over 90%. Specificities of the algorithms were greater than 90% for all conditions except for hypertension at 79.2%. The largest deficits in algorithm performance included poor PPV for COPD at 36.7% and limited sensitivity for COPD, depression and osteoarthritis at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity and loss of data. Comparison to medical chart review shows that at MaPCReN the CPCSSN case finding algorithms are valid with a few limitations. This study provides the basis for the validated data to be utilized for research and informs users of its limitations. Analysis of underlying discrepancies provides the ability to improve algorithm performance and facilitate improved data quality.
Evaluation of Organic Proxies for Quantifying Past Primary Productivity
NASA Astrophysics Data System (ADS)
Raja, M.; Rosell-Melé, A.; Galbraith, E.
2017-12-01
Ocean primary productivity is a key element of the marine carbon cycle. However, its quantitative reconstruction in the past relies on the use of biogeochemical models, as the available proxy approaches are qualitative at best. Here, we present an approach that evaluates the use of phytoplanktonic biomarkers (i.e. chlorins and alkenones) as quantitative proxies to reconstruct past changes in marine productivity. We compare biomarker contents in a global suite of core-top sediments to sea-surface chlorophyll-a abundance estimated by satellites over the last 20 years, and the results are compared to total organic carbon (TOC). We also assess satellite data and detect satellite limitations and biases due to the complexity of optical properties and the currently defined algorithms. Our findings show that sedimentary chlorins can be used to track total sea-surface chlorophyll-a abundance as an indicator of past primary productivity. However, degradation processes restrict the application of this proxy to concentrations below a threshold value (1 µg/g). Below this threshold, chlorins are a useful tool to identify reducing conditions when used as part of a multiproxy approach to assess redox sedimentary conditions (e.g. using Re, U). This is based on the link between anoxic/dysoxic conditions and the flux of organic matter from the sea surface to the sediments. We also show that TOC is less accurate than chlorins for estimating sea-surface chlorophyll-a due to the contribution of terrigenous organic matter and the different degradation pathways of the various organic compounds that TOC includes. Alkenone concentrations also relate to primary productivity, but they are constrained by different processes in different regions. In conclusion, as long as specific constraints are taken into account, our study supports the use of chlorins and alkenones as quantitative proxies of past primary productivity, with greater accuracy than TOC.
Cui, Tianxiang; Wang, Yujie; Sun, Rui; Qiao, Chen; Fan, Wenjie; Jiang, Guoqing; Hao, Lvyuan; Zhang, Lei
2016-01-01
Estimating gross primary production (GPP) and net primary production (NPP) is important in studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimate GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, by integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as a product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. The autotrophic respiration (Ra) was determined using eco-physiological process theories and the daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was applied to an arid and semi-arid region of the Heihe River Basin, China to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns in their distribution over the Heihe River Basin during the growing season due to temperature, water and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H) and results reported in recent literature, we found the MuSyQ-NPP algorithm could yield an RMSE of 2.973 gC m(-2) d(-1) and an R of 0.842 when compared with ground-based GPP, while an RMSE of 8.010 gC m(-2) d(-1) and an R of 0.682 were achieved for MODIS GPP; the estimated NPP values were also well within the range of previous literature, which supports the reliability of our modelling results. This research suggests that the utilization of multi-source data at various scales can help establish an appropriate model for calculating GPP and NPP at regional scales with relatively high spatial and temporal resolution.
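A minimal sketch of the LUE step described above, with GPP as the product of PAR, FPAR and a stress-scaled light use efficiency, and NPP as the GPP-Ra balance; the parameter values and the simple Ra fraction are assumptions, not the published MuSyQ-NPP formulation.

def lue_gpp_npp(par, fpar, lue_max, t_scalar, w_scalar, ra_fraction=0.45):
    """Illustrative LUE-model step (not the published MuSyQ-NPP code).

    par         : incident photosynthetically active radiation (MJ m-2 d-1)
    fpar        : fraction of PAR absorbed by vegetation (0-1)
    lue_max     : maximum light use efficiency (gC MJ-1)
    t_scalar,
    w_scalar    : temperature / water stress scalars (0-1)
    ra_fraction : autotrophic respiration as a fraction of GPP (placeholder;
                  the paper derives Ra from eco-physiological process theory)
    """
    gpp = par * fpar * lue_max * t_scalar * w_scalar   # daily GPP, gC m-2 d-1
    ra = ra_fraction * gpp
    npp = gpp - ra                                     # NPP as the balance of GPP and Ra
    return gpp, npp

print(lue_gpp_npp(par=10.0, fpar=0.6, lue_max=1.0, t_scalar=0.9, w_scalar=0.7))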
Two MODIS Aerosol Products over Ocean on the Terra and Aqua CERES SSF Datasets.
NASA Astrophysics Data System (ADS)
Ignatov, Alexander; Minnis, Patrick; Loeb, Norman; Wielicki, Bruce; Miller, Walter; Sun-Mack, Sunny; Tanré, Didier; Remer, Lorraine; Laszlo, Istvan; Geier, Erika
2005-04-01
Understanding the impact of aerosols on the earth's radiation budget and the long-term climate record requires consistent measurements of aerosol properties and radiative fluxes. The Clouds and the Earth's Radiant Energy System (CERES) Science Team combines satellite-based retrievals of aerosols, clouds, and radiative fluxes into Single Scanner Footprint (SSF) datasets from the Terra and Aqua satellites. Over ocean, two aerosol products are derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) using different sampling and aerosol algorithms. The primary, or M, product is taken from the standard multispectral aerosol product developed by the MODIS aerosol group, while a simpler, secondary [Advanced Very High Resolution Radiometer (AVHRR)-like], or A, product is derived by the CERES Science Team using a different cloud-clearing method and a single-channel aerosol algorithm. Two aerosol optical depths (AOD), τA1 and τA2, are derived from MODIS bands 1 (0.644 μm) and 6 (1.632 μm), resembling the AVHRR/3 channels 1 and 3A, respectively. On Aqua the retrievals are made in band 7 (2.119 μm) because of poor-quality data from band 6. The respective Ångström exponents can be derived from the values of τ. The A product serves as a backup for the M product. More importantly, the overlap of these aerosol products is essential for placing the 20+ year heritage AVHRR aerosol record in the context of more advanced aerosol sensors and algorithms such as that used for the M product. This study documents the M and A products, highlighting their CERES SSF specifics. Based on 2 weeks of global Terra data, coincident M and A AODs are found to be strongly correlated in both bands. However, both the domains in which the M and A aerosols are available and the respective τ/α statistics differ significantly because of discrepancies in sampling due to differences in cloud and sun-glint screening. In both aerosol products, correlation is observed between the retrieved aerosol parameters (τ/α) and ambient cloud amount, with the dependence in the M product being more pronounced than in the A product.
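The Ångström exponent mentioned above follows from the ratio of optical depths at the two retrieval wavelengths; the sketch below assumes the Terra band pair (0.644 and 1.632 μm) and illustrative AOD values.

import numpy as np

def angstrom_exponent(tau1, tau2, lam1=0.644, lam2=1.632):
    """Angstrom exponent from optical depths at two wavelengths (micrometres)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# Hypothetical A-product retrievals in MODIS bands 1 and 6.
print(angstrom_exponent(0.15, 0.06))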
Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting
Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.
2017-01-01
Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes. PMID:28890908
Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A; Brenton, Ashley
2017-01-01
Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm ("profile") incorporating phenotypic and, more uniquely, genotypic risk factors. In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes.
MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A
2015-08-21
To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections). A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilisation. The methodology can also be applied to other areas of clinical care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
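As an illustration of the kind of classification the algorithm performs, a toy rule-based classifier over free text, Read codes and prescriptions might look like the sketch below; the keywords, code prefixes and drug names are assumptions, not the study's actual rules, which were trained and validated against the clinician-classified gold standard sets.

# Minimal illustrative classifier combining free text, Read codes and prescriptions.
RESP_KEYWORDS = {"cough", "wheeze", "asthma", "bronchiolitis", "croup", "pneumonia"}
RESP_READ_PREFIXES = ("H0", "H1", "H2", "H3")          # hypothetical respiratory code prefixes
RESP_MEDICATIONS = {"salbutamol", "amoxicillin", "prednisolone"}

def is_respiratory(note: str, read_codes: list[str], medications: list[str]) -> bool:
    text_hit = any(word in note.lower() for word in RESP_KEYWORDS)
    code_hit = any(code.startswith(RESP_READ_PREFIXES) for code in read_codes)
    med_hit = any(drug.lower() in RESP_MEDICATIONS for drug in medications)
    return text_hit or code_hit or med_hit

print(is_respiratory("3 days of cough and wheeze", ["H33.."], ["Salbutamol"]))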
Uniform color space analysis of LACIE image products
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.
1979-01-01
The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.
NASA Astrophysics Data System (ADS)
Lee, Dasom; An, Yong Rock; Park, Kyum Joon; Kim, Hyun Woo; Lee, Dabin; Joo, Hui Tae; Oh, Young Geun; Kim, Su Min; Kang, Chang Keun; Lee, Sang Heon
2017-09-01
The minke whale (Balaenoptera acutorostrata) is the most common baleen whale among the several marine mammal species observed in Korea. Since physical structures can concentrate prey for whales, the distribution of foraging whales can be an indicator of a biological hotspot. Our main objective is to verify that the coastal upwelling region of the southwestern East Sea is a productive biological hotspot, based on the geographical distribution of minke whales. Among the cetacean research surveys of the National Institute of Fisheries Science since 1999, 9 years of data on minke whales available in the East Sea were used for this study. The regional primary productivity derived from the Moderate-Resolution Imaging Spectroradiometer (MODIS) was used as a proxy for biological productivity. Minke whales observed during the sighting surveys were mostly concentrated in May and found mostly (approximately 70%) in the southwestern coastal areas (< 300 m) where high chlorophyll concentrations and primary productivity were generally detected. Based on a MODIS-derived primary productivity algorithm, the annual primary production (320 g C m-2 y-1) estimated in the southwestern coastal region of the East Sea places it among the highly productive coastal upwelling regions of the world. A change in the main spatial distribution of minke whales was found in recent years, which indicates that the major habitats of minke whales have shifted to the north of the common coastal upwelling regions. This is consistent with the recently reported unprecedented coastal upwelling along the mid-eastern coast of Korea. Based on high phytoplankton productivity and the high density of minke whales, the southwestern coastal regions can be considered one of the biological hotspots in the East Sea. These regions are important for ecosystem dynamics and the population biology of top marine predators, especially migratory whales, and need to be carefully managed from a resource management perspective.
Onboard Science and Applications Algorithm for Hyperspectral Data Reduction
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel
2012-01-01
An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
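A schematic of the onboard thermal-trigger idea, flagging hot pixels from a TIR-derived temperature map and selecting their full spectra for downlink; the threshold, array shapes and band count are placeholders rather than HyspIRI parameters.

import numpy as np

# Illustrative onboard triggering: flag "hot" pixels from a retrieved TIR brightness
# temperature map and mark their full spectra for downlink. Threshold is a placeholder.
HOT_PIXEL_K = 400.0

def select_hot_pixels(tir_temperature_k, spectra):
    """tir_temperature_k: (rows, cols) in kelvin; spectra: (rows, cols, bands)."""
    mask = tir_temperature_k > HOT_PIXEL_K
    return np.argwhere(mask), spectra[mask]   # pixel coordinates and their full spectra

temps = np.full((4, 4), 290.0)
temps[1, 2] = 650.0                            # an active-fire-like pixel
cube = np.random.rand(4, 4, 210)               # hypothetical hyperspectral cube
coords, hot_spectra = select_hot_pixels(temps, cube)
print(coords, hot_spectra.shape)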
Production scheduling with ant colony optimization
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu
2017-10-01
The optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows one to obtain the required amount of production within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is one of the well-known techniques for solving global multi-objective optimization problems. In the article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against some other production scheduling algorithms is presented. Advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process are provided.
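A minimal ant-colony construction loop for a sequencing problem is sketched below (pheromone-weighted job selection, evaporation, and reinforcement of the best schedule); the objective, instance data and parameters are illustrative and not taken from the article.

import random

# Minimal ant-colony sketch for sequencing jobs to minimise total completion time.
processing_times = [4, 2, 7, 3, 5]
n = len(processing_times)
pheromone = [[1.0] * n for _ in range(n)]      # pheromone[position][job]
ALPHA, RHO, N_ANTS, N_ITER = 1.0, 0.1, 20, 50

def total_completion_time(seq):
    t = total = 0
    for job in seq:
        t += processing_times[job]
        total += t
    return total

best_seq, best_cost = None, float("inf")
for _ in range(N_ITER):
    for _ in range(N_ANTS):
        remaining, seq = list(range(n)), []
        for pos in range(n):
            weights = [pheromone[pos][j] ** ALPHA for j in remaining]
            job = random.choices(remaining, weights=weights)[0]
            seq.append(job)
            remaining.remove(job)
        cost = total_completion_time(seq)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    # Evaporate all trails, then reinforce the best schedule found so far.
    for pos in range(n):
        for j in range(n):
            pheromone[pos][j] *= (1.0 - RHO)
        pheromone[pos][best_seq[pos]] += 1.0 / best_cost

print("best sequence:", best_seq, "total completion time:", best_cost)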
Verification of a New NOAA/NSIDC Passive Microwave Sea-Ice Concentration Climate Record
NASA Technical Reports Server (NTRS)
Meier, Walter N.; Peng, Ge; Scott, Donna J.; Savoie, Matt H.
2014-01-01
A new satellite-based passive microwave sea-ice concentration product developed for the National Oceanic and Atmospheric Administration (NOAA) Climate Data Record (CDR) programme is evaluated via comparison with other passive microwave-derived estimates. The new product leverages two well-established concentration algorithms, known as the NASA Team and Bootstrap, both developed at and produced by the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC). The sea ice estimates compare well with similar GSFC products while also fulfilling all NOAA CDR initial operating capability (IOC) requirements, including (1) self-describing file format, (2) ISO 19115-2 compliant collection-level metadata, (3) Climate and Forecast (CF) compliant file-level metadata, (4) grid-cell-level metadata (data quality fields), (5) fully automated and reproducible processing and (6) open online access to full documentation with version control, including source code and an algorithm theoretical basis document. The primary limitations of the GSFC products are a lack of metadata and the use of untracked manual corrections to the output fields. Smaller differences occur from minor variations in processing methods by the National Snow and Ice Data Center (for the CDR fields) and NASA (for the GSFC fields). The CDR concentrations do have some differences from the constituent GSFC concentrations, but trends and variability are not substantially different.
Pre-Launch Tasks Proposed in our Contract of December 1991
NASA Technical Reports Server (NTRS)
Running, Steven W.; Nemani, Ramakrishna R.; Glassy, Joseph
1997-01-01
We propose, during the pre-EOS phase, to: (1) develop, with other MODIS Team Members, a means of discriminating different major biome types with NDVI and other AVHRR-based data; (2) develop a simple ecosystem process model for each of these biomes, BIOME-BGC; (3) relate the seasonal trend of weekly composite NDVI to vegetation phenology and temperature limits to develop a satellite-defined growing season for vegetation; and (4) define physiologically based energy-to-mass conversion factors for carbon and water for each biome. Our final core at-launch product will be simplified, completely satellite driven biome specific models for net primary production. We will build these biome specific satellite driven algorithms using a family of simple ecosystem process models as calibration models, collectively called BIOME-BGC, and establish coordination with an existing network of ecological study sites in order to test and validate these products. Field datasets will then be available for both BIOME-BGC development and testing, use for algorithm developments of other MODIS Team Members, and ultimately be our first test point for MODIS land vegetation products upon launch. We will use field sites from the National Science Foundation Long-Term Ecological Research network, and develop Glacier National Park as a major site for intensive validation.
Optimum Image Formation for Spaceborne Microwave Radiometer Products.
Long, David G; Brodzik, Mary J
2016-05-01
This paper considers some of the issues of radiometer brightness image formation and reconstruction for use in the NASA-sponsored Calibrated Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature Earth System Data Record project, which generates a multisensor, multidecadal time series of high-resolution radiometer products designed to support climate studies. Two primary reconstruction algorithms are considered: the Backus-Gilbert (BG) approach and the radiometer form of the scatterometer image reconstruction (SIR) algorithm. These are compared with the conventional drop-in-the-bucket (DIB) gridded image formation approach. Tradeoff study results for the various algorithm options are presented to select optimum values for the grid resolution, the number of SIR iterations, and the BG gamma parameter. We find that although both approaches are effective in improving the spatial resolution of the surface brightness temperature estimates compared to DIB, SIR requires significantly less computation. The sensitivity of the reconstruction to the accuracy of the measurement spatial response function (MRF) is explored. The partial reconstruction of the methods can tolerate errors in the description of the sensor measurement response function, which simplifies the processing of historic sensor data, for which the MRF is not as well known as for modern sensors. Simulation tradeoff results are confirmed using actual data.
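For contrast with the reconstruction algorithms, the conventional DIB approach simply averages all footprint centres falling in each grid cell; a minimal sketch with made-up measurements:

import numpy as np

def drop_in_the_bucket(lats, lons, tb, lat_edges, lon_edges):
    """Conventional DIB gridding: average all brightness temperatures whose
    centres fall in each grid cell. Inputs here are illustrative."""
    grid_sum = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    grid_cnt = np.zeros_like(grid_sum)
    i = np.digitize(lats, lat_edges) - 1
    j = np.digitize(lons, lon_edges) - 1
    for k in range(len(tb)):
        if 0 <= i[k] < grid_sum.shape[0] and 0 <= j[k] < grid_sum.shape[1]:
            grid_sum[i[k], j[k]] += tb[k]
            grid_cnt[i[k], j[k]] += 1
    return np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), np.nan)

lats = np.array([10.2, 10.7, 11.3]); lons = np.array([20.1, 20.4, 21.6])
tb = np.array([250.0, 252.0, 247.0])          # brightness temperatures (K)
print(drop_in_the_bucket(lats, lons, tb, np.arange(10, 13), np.arange(20, 23)))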
Arts, E E A; Popa, C D; Den Broeder, A A; Donders, R; Sandoo, A; Toms, T; Rollefstad, S; Ikdahl, E; Semb, A G; Kitas, G D; Van Riel, P L C M; Fransen, J
2016-04-01
Predictive performance of cardiovascular disease (CVD) risk calculators appears suboptimal in rheumatoid arthritis (RA). A disease-specific CVD risk algorithm may improve CVD risk prediction in RA. The objectives of this study are to adapt the Systematic COronary Risk Evaluation (SCORE) algorithm with determinants of CVD risk in RA and to assess the accuracy of CVD risk prediction calculated with the adapted SCORE algorithm. Data from the Nijmegen early RA inception cohort were used. The primary outcome was first CVD events. The SCORE algorithm was recalibrated by reweighting the included traditional CVD risk factors and adapted by adding other potential predictors of CVD. Predictive performance of the recalibrated and adapted SCORE algorithms was assessed and the adapted SCORE was externally validated. Of the 1016 included patients with RA, 103 patients experienced a CVD event. Discriminatory ability was comparable across the original, recalibrated and adapted SCORE algorithms. The Hosmer-Lemeshow test results indicated that all three algorithms provided poor model fit (p<0.05) for the Nijmegen and external validation cohorts. The adapted SCORE algorithm mainly improves CVD risk estimation in non-event cases and does not show a clear advantage in reclassifying patients with RA who develop CVD (event cases) into more appropriate risk groups. This study demonstrates for the first time that adaptations of the SCORE algorithm do not provide sufficient improvement in risk prediction of future CVD in RA to serve as an appropriate alternative to the original SCORE. Risk assessment using the original SCORE algorithm may underestimate CVD risk in patients with RA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.
2017-05-01
In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This objective is secured by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm power control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-based CRNs. The simulation results show that the performance of the proposed algorithm approaches the performance of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes the proposed algorithm suitable for more practical applications.
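The abstract does not spell out the pricing algorithm itself, so the sketch below shows a generic interference-aware allocation in the same spirit: water-filling over subcarriers with per-subcarrier power caps derived from the primary user's interference threshold. All gains, thresholds and the bisection approach are assumptions for illustration, not the authors' method.

import numpy as np

def capped_waterfilling(h, cap, p_total, iters=60):
    """Water-filling over subcarriers with per-subcarrier power caps.

    h       : secondary-user channel gains per subcarrier
    cap     : per-subcarrier power cap set by the PU interference threshold
    p_total : secondary-user total power budget
    """
    lo, hi = 0.0, p_total + np.max(1.0 / h)
    for _ in range(iters):                      # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / h, 0.0, cap)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return p

h = np.array([1.5, 0.8, 2.2, 0.4])              # SU subcarrier gains (illustrative)
g_pu = np.array([0.2, 0.5, 0.1, 0.3])           # gains towards the primary receiver
i_threshold = 0.05                               # per-subcarrier interference limit
print(capped_waterfilling(h, i_threshold / g_pu, p_total=1.0))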
NASA Astrophysics Data System (ADS)
Wang, H.; Li, X.; Xiao, J.; Ma, M.
2017-12-01
Arid and semi-arid ecosystems cover more than one-third of the Earth's land surface and are of great importance to the global carbon cycle. However, the magnitude of their carbon sequestration and its contribution to the global atmospheric carbon cycle is poorly understood due to the worldwide paucity of measurements of carbon exchange in arid ecosystems. Accurate and continuous monitoring of the production of arid ecosystems is of great importance for regional carbon cycle estimation. The MOD17A2 product provides high-frequency observations of terrestrial Gross Primary Productivity (GPP) over the world. Although there have been plenty of studies validating the MODIS GPP products with ground-based measurements over a range of biome types, few have comprehensively validated the performance of MODIS estimates in arid and semi-arid ecosystems. Thus, this study examined the performance of the MODIS-derived GPP against the EC-observed GPP at different timescales for the main arid and semi-arid ecosystems in China, and optimized the MODIS GPP calculations by using in-situ meteorological forcing data and by optimizing biome-specific parameters with a Bayesian approach. Our results revealed that the MOD17 algorithm could capture the broad trends of GPP at 8-day timescales for all investigated sites on the whole. However, the GPP product was underestimated in most ecosystems in the arid region, especially the irrigated cropland and forest ecosystems, while it was overestimated for the desert ecosystem. On the annual timescale, the best performance was observed in grassland and cropland, followed by forest and desert ecosystems. On the 8-day timescale, the RMSE between the MOD17 products and in-situ flux observations over all sites was 2.22 gC/m2/d, and R2 was 0.69. By driving the algorithm with in-situ meteorological data and optimizing its biome-based parameters, we improved the performance of the MODIS GPP calculation over the main ecosystems in the arid region of China.
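The site-level evaluation described above boils down to comparing 8-day MODIS GPP against tower GPP with RMSE and correlation; a minimal sketch with hypothetical values:

import numpy as np

def rmse_and_r2(observed, modelled):
    """Skill metrics used to compare 8-day MODIS GPP with tower GPP."""
    observed, modelled = np.asarray(observed), np.asarray(modelled)
    rmse = np.sqrt(np.mean((modelled - observed) ** 2))
    r = np.corrcoef(observed, modelled)[0, 1]
    return rmse, r ** 2

# Hypothetical 8-day series (gC m-2 d-1), not the sites' actual data.
tower = [0.5, 1.2, 2.8, 4.1, 3.6, 2.2, 0.9]
modis = [0.4, 1.0, 2.1, 3.0, 2.9, 1.8, 0.7]
print(rmse_and_r2(tower, modis))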
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth, by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of the penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy of scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and fourth-generation CT.
Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari
2014-01-01
A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and minimize the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy and representative of real-case data. An improved genetic algorithm called fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised in which fuzzy expert experience controller (FEEC) is integrated with automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidate, crossover rate, and mutation rate compared with using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of the five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing using a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the performance and efficacy of the proposed novel optimization algorithm are more efficient than the performance of the standard genetic algorithm in mixed assembly line sequencing model. PMID:24982962
Activation Product Inverse Calculations with NDI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, Mark Girard
NDI-based forward calculations of activation product concentrations can be used systematically to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production, and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and no sub-production-depletion chain. The algorithm is suitable for automation.
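One simple way to realize such an iterative inversion is ratio scaling: run the forward calculation at a guessed element concentration and rescale by the measured-to-calculated activation ratio until it converges. The sketch below is a generic illustration with a toy linear forward model, not the NDI implementation.

def infer_element_concentration(measured_ap, forward_model, guess=1.0,
                                tol=1e-6, max_iter=50):
    """Ratio-scaling fixed-point iteration: adjust the structural-element
    concentration until the forward-calculated activation product matches the
    measurement. The forward model is assumed to be (approximately) linear in
    the element concentration, as in a basic production-depletion chain."""
    c = guess
    for _ in range(max_iter):
        calculated_ap = forward_model(c)
        scale = measured_ap / calculated_ap
        c *= scale
        if abs(scale - 1.0) < tol:
            break
    return c

# Toy forward model standing in for an NDI production-depletion calculation.
forward = lambda c: 3.7e-4 * c
print(infer_element_concentration(measured_ap=1.85e-3, forward_model=forward))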
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is the evaluation and development of atmospheric correction algorithms and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of these efforts is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much of the effort has been on (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches.
Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa
2015-01-01
The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. With this objective in mind, three different neurologists entered the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms inspired by the biological immune system mechanism, which has significant and distinct capabilities. These algorithms simulate properties of the immune system such as discrimination, learning, and memorization in order to perform classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one underperforming algorithm that yielded 71% accuracy.
Reddy, Ashok; Sessums, Laura; Gupta, Reshma; Jin, Janel; Day, Tim; Finke, Bruce; Bitton, Asaf
2017-09-01
Risk-stratified care management is essential to improving population health in primary care settings, but evidence is limited on the type of risk stratification method and its association with care management services. We describe risk stratification patterns and association with care management services for primary care practices in the Comprehensive Primary Care (CPC) initiative. We undertook a qualitative approach to categorize risk stratification methods being used by CPC practices and tested whether these stratification methods were associated with delivery of care management services. CPC practices reported using 4 primary methods to stratify risk for their patient populations: a practice-developed algorithm (n = 215), the American Academy of Family Physicians' clinical algorithm (n = 155), payer claims and electronic health records (n = 62), and clinical intuition (n = 52). CPC practices using a practice-developed algorithm identified the largest number of high-risk patients per primary care physician (282 patients, P = .006). CPC practices using clinical intuition had the most high-risk patients in care management and a greater proportion of high-risk patients receiving care management per primary care physician (91 patients and 48%, P = .036 and P = .128, respectively). CPC practices used 4 primary methods to identify high-risk patients. Although practices that developed their own algorithm identified the greatest number of high-risk patients, practices that used clinical intuition connected the greatest proportion of patients to care management services. © 2017 Annals of Family Medicine, Inc.
NASA Astrophysics Data System (ADS)
Oesterle, Jonathan; Lionel, Amodeo
2018-06-01
The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem associated with the selection of the most effective combination of product and assembly lines. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators and fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.
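For orientation on dominance rules, a minimal Pareto-dominance check and non-dominated filter in Python follows; minimization of all objectives is assumed and the objective vectors are illustrative, not the paper's problem instances.

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Illustrative objective vectors, e.g. (cost, assembly time) for candidate line configurations.
print(non_dominated([(3, 9), (4, 7), (5, 8), (6, 6)]))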
NASA Astrophysics Data System (ADS)
Naeger, Aaron R.; Gupta, Pawan; Zavodsky, Bradley T.; McGrath, Kevin M.
2016-06-01
The primary goal of this study was to generate a near-real time (NRT) aerosol optical depth (AOD) product capable of providing a comprehensive understanding of the aerosol spatial distribution over the Pacific Ocean, in order to better monitor and track the trans-Pacific transport of aerosols. Therefore, we developed a NRT product that takes advantage of observations from both low-earth orbiting and geostationary satellites. In particular, we utilize AOD products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Suomi National Polar-orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) satellites. Then, we combine these AOD products with our own retrieval algorithms developed for the NOAA Geostationary Operational Environmental Satellite (GOES-15) and Japan Meteorological Agency (JMA) Multi-functional Transport Satellite (MTSAT-2) to generate a NRT daily AOD composite product. We present examples of the daily AOD composite product for a case study of trans-Pacific transport of Asian pollution and dust aerosols in mid-March 2014. Overall, the new product successfully tracks this aerosol plume during its trans-Pacific transport to the west coast of North America as the frequent geostationary observations lead to a greater coverage of cloud-free AOD retrievals equatorward of about 35° N, while the polar-orbiting satellites provide a greater coverage of AOD poleward of 35° N. However, we note several areas across the domain of interest from Asia to North America where the GOES-15 and MTSAT-2 retrieval algorithms can introduce significant uncertainties into the new product.
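A minimal sketch of the daily compositing step is given below, assuming each sensor's AOD has already been regridded to a common latitude-longitude grid with NaN where there is no valid retrieval; the grids and the simple cell-wise averaging rule are assumptions, not the operational merging logic.

import numpy as np

def daily_composite(aod_grids):
    # Stack regridded AOD fields (one per sensor/overpass) and average the
    # valid retrievals in each grid cell; cells with no retrieval stay NaN.
    stack = np.stack(aod_grids)
    return np.nanmean(stack, axis=0)

modis = np.array([[0.20, np.nan], [0.35, 0.40]])
viirs = np.array([[0.22, 0.18], [np.nan, 0.42]])
goes  = np.array([[np.nan, 0.19], [0.33, np.nan]])
print(daily_composite([modis, viirs, goes]))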
Adrogué, Horacio J
2010-11-01
Respiratory acidosis is characterized by a primary increase in whole-body carbon dioxide stores caused by a positive carbon dioxide balance. This acid-base disorder, if severe, may be life-threatening, therefore requiring prompt recognition and expert management. The case presented highlights the essential features of the diagnosis and management of respiratory acidosis. A brief description of the modifiers of carbon dioxide production, the pathogenesis of respiratory acidosis, and an algorithm for assessment and management of this disorder are included. Key teaching points include the clinical value of both arterial and venous blood gas analyses and the importance of proper recognition of a primary respiratory arrest in contrast to primary circulatory arrest when managing a patient who requires resuscitation from "cardiorespiratory arrest." Copyright © 2010 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Remote Sensing of Cloud, Aerosol, and Water Vapor Properties from MODIS
NASA Technical Reports Server (NTRS)
King, Michael D.
2001-01-01
MODIS is an earth-viewing cross-track scanning spectroradiometer launched on the Terra satellite in December 1999. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from a polar-orbiting, sun-synchronous platform at an altitude of 705 km, and provides images in 36 spectral bands from 0.415 to 14.235 microns with spatial resolutions of 250 m (2 bands), 500 m (5 bands) and 1000 m (29 bands). These bands have been carefully selected to enable advanced studies of land, ocean, and atmospheric processes. In this presentation I will review the comprehensive set of remote sensing algorithms that have been developed for the remote sensing of atmospheric properties using MODIS data, placing primary emphasis on the principal atmospheric applications of: (1) developing a cloud mask for distinguishing clear sky from clouds, (2) retrieving global cloud radiative and microphysical properties, including cloud top pressure and temperature, effective emissivity, cloud optical thickness, thermodynamic phase, and effective radius, (3) monitoring tropospheric aerosol optical thickness over the land and ocean and aerosol size distribution over the ocean, (4) determining atmospheric profiles of moisture and temperature, and (5) estimating column water amount. The physical principles behind the determination of each of these atmospheric products will be described, together with an example of their application using MODIS observations. All products are archived into two categories: pixel-level retrievals (referred to as Level-2 products) and global gridded products at a latitude and longitude resolution of 1 deg (Level-3 products). An overview of the MODIS atmosphere algorithms and products, status, validation activities, and early level-2 and -3 results will be presented. Finally, I will present some highlights from the land and ocean algorithms developed for processing global MODIS observations, including: (1) surface reflectance, (2) vegetation indices, leaf area index, and FPAR, (3) albedo and nadir BRDF-adjusted reflectance, (4) normalized water-leaving radiance, (5) chlorophyll-a concentration, and (6) sea surface temperature.
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Martin, Heather L.; Adams, Matthew; Higgins, Julie; Bond, Jacquelyn; Morrison, Ewan E.; Bell, Sandra M.; Warriner, Stuart; Nelson, Adam; Tomlinson, Darren C.
2014-01-01
Toxicity is a major cause of failure in drug discovery and development, and whilst robust toxicological testing occurs, efficiency could be improved if compounds with cytotoxic characteristics were identified during primary compound screening. The use of high-content imaging in primary screening is becoming more widespread, and by utilising phenotypic approaches it should be possible to incorporate cytotoxicity counter-screens into primary screens. Here we present a novel phenotypic assay that can be used as a counter-screen to identify compounds with adverse cellular effects. This assay has been developed using U2OS cells, the PerkinElmer Operetta high-content/high-throughput imaging system and Columbus image analysis software. In Columbus, algorithms were devised to identify changes in nuclear morphology, cell shape and proliferation using DAPI, TOTO-3 and phosphohistone H3 staining, respectively. The algorithms were developed and tested on cells treated with doxorubicin, taxol and nocodazole. The assay was then used to screen a novel chemical library of over 300 compounds, rich in natural product-like molecules, 13.6% of which were identified as having adverse cellular effects. This assay provides a relatively cheap and rapid approach for identifying compounds with adverse cellular effects during screening assays, potentially reducing compound rejection due to toxicity in subsequent in vitro and in vivo assays. PMID:24505478
Weiss, Jeremy C; Page, David; Peissig, Peggy L; Natarajan, Sriraam; McCarty, Catherine
2013-01-01
Electronic health records (EHRs) are an emerging relational domain with large potential to improve clinical outcomes. We apply two statistical relational learning (SRL) algorithms to the task of predicting primary myocardial infarction. We show that one SRL algorithm, relational functional gradient boosting, outperforms propositional learners particularly in the medically-relevant high recall region. We observe that both SRL algorithms predict outcomes better than their propositional analogs and suggest how our methods can augment current epidemiological practices. PMID:25360347
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing these algorithms on the nominal Standard Model sample and a Beyond the Standard Model sample, we find that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.
Improving diagnostic recognition of primary hyperparathyroidism with machine learning.
Somnay, Yash R; Craven, Mark; McCoy, Kelly L; Carty, Sally E; Wang, Tracy S; Greenberg, Caprice C; Schneider, David F
2017-04-01
Parathyroidectomy offers the only cure for primary hyperparathyroidism, but today only 50% of primary hyperparathyroidism patients are referred for operation, in large part, because the condition is widely under-recognized. The diagnosis of primary hyperparathyroidism can be especially challenging with mild biochemical indices. Machine learning is a collection of methods in which computers build predictive algorithms based on labeled examples. With the aim of facilitating diagnosis, we tested the ability of machine learning to distinguish primary hyperparathyroidism from normal physiology using clinical and laboratory data. This retrospective cohort study used a labeled training set and 10-fold cross-validation to evaluate accuracy of the algorithm. Measures of accuracy included area under the receiver operating characteristic curve, precision (sensitivity), and positive and negative predictive value. Several different algorithms and ensembles of algorithms were tested using the Weka platform. Among 11,830 patients managed operatively at 3 high-volume endocrine surgery programs from March 2001 to August 2013, 6,777 underwent parathyroidectomy for confirmed primary hyperparathyroidism, and 5,053 control patients without primary hyperparathyroidism underwent thyroidectomy. Test-set accuracies for machine learning models were determined using 10-fold cross-validation. Age, sex, and serum levels of preoperative calcium, phosphate, parathyroid hormone, vitamin D, and creatinine were defined as potential predictors of primary hyperparathyroidism. Mild primary hyperparathyroidism was defined as primary hyperparathyroidism with normal preoperative calcium or parathyroid hormone levels. After testing a variety of machine learning algorithms, Bayesian network models proved most accurate, correctly classifying 95.2% of all primary hyperparathyroidism patients (area under receiver operating characteristic = 0.989). Omitting parathyroid hormone from the model did not decrease the accuracy significantly (area under receiver operating characteristic = 0.985). In mild disease cases, however, the Bayesian network model correctly classified 71.1% of patients with normal calcium and 92.1% with normal parathyroid hormone levels preoperatively. Combining Bayesian networking with AdaBoost improved the accuracy for all primary hyperparathyroidism patients to 97.2% (area under receiver operating characteristic = 0.994), and to 91.9% for primary hyperparathyroidism patients with mild disease. This was significantly improved relative to Bayesian networking alone (P < .0001). Machine learning can accurately diagnose primary hyperparathyroidism without human input even in mild disease. Incorporation of this tool into electronic medical record systems may aid in recognition of this under-diagnosed disorder. Copyright © 2016 Elsevier Inc. All rights reserved.
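The study used the Weka platform; as a rough, non-authoritative analog, the following Python sketch runs 10-fold cross-validation of a simple Bayesian classifier on the kind of preoperative features described. The synthetic data, label rule, and scikit-learn choices are all assumptions.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for age, sex, calcium, phosphate, PTH, vitamin D, creatinine.
X = rng.normal(size=(n, 7))
# Hypothetical label: "primary hyperparathyroidism" driven mostly by calcium and PTH columns.
y = (X[:, 2] + 0.8 * X[:, 4] + rng.normal(scale=0.5, size=n) > 0).astype(int)

scores = cross_val_score(GaussianNB(), X, y, cv=10, scoring="roc_auc")
print(scores.mean())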
Seasonal Phytoplankton Dynamics in the Eastern Tropical Atlantic
NASA Technical Reports Server (NTRS)
Monger, Bruce; McClain, Charles; Murtugudde, Ragu
1997-01-01
The coastal zone color scanner (CZCS) that operated aboard the Nimbus 7 satellite provided extensive coverage of phytoplankton pigment concentrations in the surface waters of the eastern tropical Atlantic (ETA) from March 1979 to February 1980 and coincided with four major research cruises to this region. Total primary production within the ETA (5 deg N-10 deg S, 25 deg W-10 deg E) was determined from CZCS pigment estimates and an empirical algorithm derived from concurrent in situ data taken along 4 deg W that relates near-surface chlorophyll concentration and integrated primary production. We estimated an average annual production for the ETA of 2.3 Gt C/yr with an associated 3.5-fold seasonal variation in the magnitude of this production. We describe the principal physical mechanisms controlling seasonal phytoplankton dynamics within the ETA and propose that in addition to seasonal change in the thermocline depth, one must also consider changes in the depth of the Equatorial Undercurrent. An extensive validation effort indicates that the standard CZCS global products are a conservative estimate of pigment concentrations in ETA surface waters. Significant underestimates by the CZCS global products were observed in June and July, which we attributed, in part, to aerosol correction errors and, more importantly, to errors caused by a significant reduction in the concentration of near-surface dissolved organic matter that resulted from strong equatorial upwelling.
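A hedged sketch of how a pigment field and an empirical chlorophyll-to-production relation translate into a basin total is shown below; the power-law coefficients, pixel area, and chlorophyll values are placeholders, not the cruise-derived 4 deg W algorithm.

import numpy as np

def integrated_pp(chl_mg_m3, a=1.1, b=0.7):
    # Assumed empirical power law: water-column production (g C m-2 d-1)
    # from near-surface chlorophyll; a and b are placeholder coefficients.
    return a * chl_mg_m3 ** b

chl = np.array([[0.1, 0.3], [0.8, 0.2]])        # mg m-3, placeholder pigment pixels
pixel_area_m2 = (20e3) ** 2                     # assumed ~20 km pixels
daily_total_gC = np.sum(integrated_pp(chl) * pixel_area_m2)
annual_total_GtC = daily_total_gC * 365 / 1e15  # g C -> Gt C
print(annual_total_GtC)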
Heating Structures Derived from Satellite
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Adler, R.; Haddad, Z.; Hou, A.; Kakar, R.; Krishnamurti, T. N.; Kummerow, C.; Lang, S.; Meneghini, R.; Olson, W.
2004-01-01
Rainfall is a key link in the hydrologic cycle and is a primary heat source for the atmosphere. The vertical distribution of latent-heat release, which is accompanied by rainfall, modulates the large-scale circulations of the tropics and in turn can impact midlatitude weather. This latent heat release is a consequence of phase changes between vapor, liquid, and solid water. The Tropical Rainfall Measuring Mission (TRMM), a joint U.S./Japan space project, was launched in November 1997. It provides an accurate measurement of rainfall over the global tropics which can be used to estimate the four-dimensional structure of latent heating over the global tropics. The distributions of rainfall and inferred heating can be used to advance our understanding of the global energy and water cycle. This paper describes several different algorithms for estimating latent heating using TRMM observations. The strengths and weaknesses of each algorithm as well as the heating products are also discussed. The validation of heating products will be exhibited. Finally, the application of this heating information to global circulation and climate models is presented.
NASA Astrophysics Data System (ADS)
Lee, Y. J.; Matrai, P.; Friedrichs, M. A.; Saba, V. S.
2016-02-01
Net primary production (NPP) is the major source of energy for the Arctic Ocean (AO) ecosystem, as in most ecosystems. Reproducing current patterns of NPP is essential to understand the physical and biogeochemical controls in the present and the future AO. The Primary Productivity Algorithm Round Robin (PPARR) activity provides a framework to evaluate the skill and sensitivity of NPP as estimated by coupled global/regional climate models and earth system models in the AO. Here we compare results generated from 18 global/regional climate models and three earth system models with observations from a unique pan-Arctic data set (1959-2011) that includes in situ NPP (N=928 stations) and nitrate (N=678 stations). Model results showed a distribution similar to the in situ data distribution, except for the high values of integrated NPP data. Model skill of integrated NPP exhibited little difference as a function of sea ice condition (ice-free vs. ice-covered) and depth (shallow vs. deep), but performance of models varied significantly as a function of season. For example, simulated integrated NPP was underestimated at the beginning of the production season (April-June) compared to mid-summer (July and August) and had the highest variability in late summer and early fall (September-October). While models typically underestimated mean NPP, nitrate concentrations were overestimated. Overall, models performed better in reproducing nitrate than NPP in terms of differences in variability. The model performance was similar at all depths within the top 100 m, both in NPP and nitrate. Continual feedback, modification and improvement of the participating models and the resulting increase in model skill are the primary goals of the PPARR-5 AO exercise.
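A minimal sketch of the kind of point-to-point skill metrics used in round-robin comparisons follows, computed in log space because NPP spans orders of magnitude; the paired values and the choice of metrics are illustrative assumptions.

import numpy as np

def skill(model, obs):
    m, o = np.log10(model), np.log10(obs)
    bias = np.mean(m - o)                       # mean log10 bias
    rmsd = np.sqrt(np.mean((m - o) ** 2))       # root-mean-square difference
    return bias, rmsd

obs_npp   = np.array([120.0, 300.0, 45.0, 800.0])   # mg C m-2 d-1, placeholder stations
model_npp = np.array([90.0, 260.0, 60.0, 500.0])
print(skill(model_npp, obs_npp))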
Estimating crop net primary production using inventory data and MODIS-derived parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandaru, Varaprasad; West, Tristram O.; Ricciuto, Daniel M.
2013-06-03
National estimates of spatially-resolved cropland net primary production (NPP) are needed for diagnostic and prognostic modeling of carbon sources, sinks, and net carbon flux. Cropland NPP estimates that correspond with existing cropland cover maps are needed to drive biogeochemical models at the local scale and over national and continental extents. Existing satellite-based NPP products tend to underestimate NPP on croplands. A new Agricultural Inventory-based Light Use Efficiency (AgI-LUE) framework was developed to estimate individual crop biophysical parameters for use in estimating crop-specific NPP. The method is documented here and evaluated for corn and soybean crops in Iowa and Illinois in years 2006 and 2007. The method includes a crop-specific enhanced vegetation index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), shortwave radiation data estimated using Mountain Climate Simulator (MTCLIM) algorithm and crop-specific LUE per county. The combined aforementioned variables were used to generate spatially-resolved, crop-specific NPP that correspond to the Cropland Data Layer (CDL) land cover product. The modeling framework represented well the gradient of NPP across Iowa and Illinois, and also well represented the difference in NPP between years 2006 and 2007. Average corn and soybean NPP from AgI-LUE was 980 g C m-2 yr-1 and 420 g C m-2 yr-1, respectively. This was 2.4 and 1.1 times higher, respectively, for corn and soybean compared to the MOD17A3 NPP product. Estimated gross primary productivity (GPP) derived from AgI-LUE were in close agreement with eddy flux tower estimates. The combination of new inputs and improved datasets enabled the development of spatially explicit and reliable NPP estimates for individual crops over large regional extents.
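A hedged sketch of the light-use-efficiency bookkeeping behind a framework of this kind is shown below; the PAR fraction, EVI scaling, and LUE value are illustrative assumptions, not the calibrated crop-specific parameters of AgI-LUE.

def npp_lue(evi, shortwave_mj_m2_day, lue_gc_per_mj=1.2, par_fraction=0.45):
    # APAR approximated as EVI * PAR, with PAR taken as a fixed fraction of
    # incoming shortwave radiation (both are common but assumed choices here).
    par = shortwave_mj_m2_day * par_fraction
    apar = evi * par
    return lue_gc_per_mj * apar            # g C m-2 day-1

# Example: one mid-season day for a corn pixel (placeholder numbers).
print(npp_lue(evi=0.65, shortwave_mj_m2_day=25.0))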
NASA Astrophysics Data System (ADS)
Siswanto, Eko; Xu, Yongjiu; Ishizaka, Joji
2018-04-01
We applied ocean color algorithms and a primary production model to a 13-year ocean color data set to assess interannual variations of Changjiang-influenced water (CIW) dispersion, with an emphasis on the unusual CIW dispersion during July 2010. The characteristics of the CIW offshore dispersion were primarily driven by alongshore winds and secondarily by the Changjiang discharge, the interannual variations of which were linked to the El Niño/La Niña. The unusual southeastward dispersion of CIW in July 2010 was attributed to a relatively weak southwesterly wind (with southwesterly wind anomalies) and high Changjiang discharge (after the El Niño peak in winter). In July 2010, the CIW, which is characterized by low-salinity, high-gelbstoff, and high-primary production, intruded into the Kuroshio Current axis to form a rare band of CIW that flowed toward an area south of Japan. The southeastward dispersion of CIW in July 2003 was also unusual, but it did not extend as far as in July 2010, perhaps because of the relatively strong southwesterly winds and low Changjiang discharge in July 2003. During La Niña events, the dispersion of CIW retreated toward the coast due to prevailing northeasterly wind anomalies. We confirmed that the CIW in July 2010 was characterized by low-salinity, abundant phytoplankton biomass, and high biological production. The fact that high biological production and the peak of Changjiang discharge occurred in the same month (July) in 2010 indicated that biogeochemical production stimulated by nutrients from the Changjiang was higher than during normal summer conditions.
A Computerized Decision Support System for Depression in Primary Care
Kurian, Benji T.; Trivedi, Madhukar H.; Grannemann, Bruce D.; Claassen, Cynthia A.; Daly, Ella J.; Sunderajan, Prabha
2009-01-01
Objective: In 2004, results from The Texas Medication Algorithm Project (TMAP) showed better clinical outcomes for patients whose physicians adhered to a paper-and-pencil algorithm compared to patients who received standard clinical treatment for major depressive disorder (MDD). However, implementation of and fidelity to the treatment algorithm among various providers was observed to be inadequate. A computerized decision support system (CDSS) for the implementation of the TMAP algorithm for depression has since been developed to improve fidelity and adherence to the algorithm. Method: This was a 2-group, parallel design, clinical trial (one patient group receiving MDD treatment from physicians using the CDSS and the other patient group receiving usual care) conducted at 2 separate primary care clinics in Texas from March 2005 through June 2006. Fifty-five patients with MDD (DSM-IV criteria) with no significant difference in disease characteristics were enrolled, 32 of whom were treated by physicians using CDSS and 23 were treated by physicians using usual care. The study's objective was to evaluate the feasibility and efficacy of implementing a CDSS to assist physicians acutely treating patients with MDD compared to usual care in primary care. Primary efficacy outcomes for depression symptom severity were based on the 17-item Hamilton Depression Rating Scale (HDRS17) evaluated by an independent rater. Results: Patients treated by physicians employing CDSS had significantly greater symptom reduction, based on the HDRS17, than patients treated with usual care (P < .001). Conclusions: The CDSS algorithm, utilizing measurement-based care, was superior to usual care for patients with MDD in primary care settings. Larger randomized controlled trials are needed to confirm these findings. Trial Registration: clinicaltrials.gov Identifier: NCT00551083 PMID:19750065
Cave, Andrew J; Davey, Christina; Ahmadi, Elaheh; Drummond, Neil; Fuentes, Sonia; Kazemi-Bajestani, Seyyed Mohammad Reza; Sharpe, Heather; Taylor, Matt
2016-01-01
An accurate estimation of the prevalence of paediatric asthma in Alberta and elsewhere is hampered by uncertainty regarding disease definition and diagnosis. Electronic medical records (EMRs) provide a rich source of clinical data from primary-care practices that can be used in better understanding the occurrence of the disease. The Canadian Primary Care Sentinel Surveillance Network (CPCSSN) database includes cleaned data extracted from the EMRs of primary-care practitioners. The purpose of the study was to develop and validate a case definition of asthma in children 1–17 who consult family physicians, in order to provide primary-care estimates of childhood asthma in Alberta as accurately as possible. The validation involved the comparison of the application of a theoretical algorithm (to identify patients with asthma) to a physician review of records included in the CPCSSN database (to confirm an accurate diagnosis). The comparison yielded 87.4% sensitivity, 98.6% specificity and a positive and negative predictive value of 91.2% and 97.9%, respectively, in the age group 1–17 years. The algorithm was also run for ages 3–17 and 6–17 years, and was found to have comparable statistical values. Overall, the case definition and algorithm yielded strong sensitivity and specificity metrics and was found valid for use in research in CPCSSN primary-care practices. The use of the validated asthma algorithm may improve insight into the prevalence, diagnosis, and management of paediatric asthma in Alberta and Canada. PMID:27882997
Image processing for x-ray inspection of pistachio nuts
NASA Astrophysics Data System (ADS)
Casasent, David P.
2001-03-01
A review is provided of image processing techniques that have been applied to the inspection of pistachio nuts using X-ray images. X-ray sensors provide non-destructive internal product detail not available from other sensors. The primary concern in this data is detecting the presence of worm infestations in nuts, since they have been linked to the presence of aflatoxin. We describe new techniques for segmentation, feature selection, selection of product categories (clusters), classifier design, etc. Specific novel results include: a new segmentation algorithm to produce images of isolated product items; preferable classifier operation (the classifier with the best probability of correct recognition Pc is not best); higher-order discrimination information is present in standard features (thus, high-order features appear useful); classifiers that use new cluster categories of samples achieve improved performance. Results are presented for X-ray images of pistachio nuts; however, all techniques have use in other product inspection applications.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
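A toy Python sketch of how a start/stop control algorithm with hysteresis thresholds changes the energy captured from a wind time series follows; the synthetic wind series, power curve, and threshold values are all assumptions, not Sandia 17-m VAWT data.

import random

def power_kw(v):
    # Placeholder power curve: cubic between cut-in and cut-out, capped at rated power.
    if v < 4.0 or v > 25.0:
        return 0.0
    return min(60.0, 0.06 * v ** 3)

def annual_energy(wind, start_v=5.0, stop_v=4.0):
    running, energy_kwh = False, 0.0
    for v in wind:                       # one sample per hour
        if not running and v >= start_v:
            running = True
        elif running and v < stop_v:
            running = False
        if running:
            energy_kwh += power_kw(v)    # 1-hour timestep
    return energy_kwh

random.seed(1)
wind = [max(0.0, random.gauss(7.0, 3.0)) for _ in range(8760)]
# Compare a reasonable start threshold with an overly conservative one.
print(annual_energy(wind), annual_energy(wind, start_v=8.0, stop_v=6.0))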
Analysis of retinal and cortical components of Retinex algorithms
NASA Astrophysics Data System (ADS)
Yeonan-Kim, Jihyun; Bertalmío, Marcelo
2017-05-01
Following Land and McCann's first proposal of the Retinex theory, numerous Retinex algorithms that differ considerably both algorithmically and functionally have been developed. We clarify the relationships among various Retinex families by associating their spatial processing structures with the neural organizations in the retina and the primary visual cortex in the brain. Some of the Retinex algorithms have a retina-like processing structure (Land's designator idea and NASA Retinex), and some show a close connection with the cortical structures in the primary visual area of the brain (two-dimensional L&M Retinex). A third group of Retinexes (the variational Retinex) manifests an explicit algorithmic relation to Wilson-Cowan's physiological model. We overview these three groups of Retinex algorithms with reference to the underlying biological visual mechanisms.
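As a concrete point of reference for the retina-like (center-surround) family, a single-scale Retinex sketch in Python is given below; the Gaussian surround scale and the random test image are assumptions, and this is not any of the specific algorithms compared in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0, eps=1e-6):
    # Output = log(center) - log(surround): the surround is a Gaussian-blurred
    # copy of the image, mimicking a center-surround receptive field.
    img = image.astype(float) + eps
    surround = gaussian_filter(img, sigma) + eps
    return np.log(img) - np.log(surround)

image = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
print(single_scale_retinex(image).shape)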
Algorithms for the diagnosis and treatment of restless legs syndrome in primary care
2011-01-01
Background Restless legs syndrome (RLS) is a neurological disorder with a lifetime prevalence of 3-10% in European studies. However, the diagnosis of RLS in primary care remains low and mistreatment is common. Methods The current article reports on the considerations of RLS diagnosis and management that were made during a European Restless Legs Syndrome Study Group (EURLSSG)-sponsored task force consisting of experts and primary care practitioners. The task force sought to develop a better understanding of barriers to diagnosis in primary care practice and overcome these barriers with diagnostic and treatment algorithms. Results The barriers to diagnosis identified by the task force include the presentation of symptoms, the language used to describe them, the actual term "restless legs syndrome" and difficulties in the differential diagnosis of RLS. Conclusion The EURLSSG task force reached a consensus and agreed on the diagnostic and treatment algorithms published here. PMID:21352569
NASA Technical Reports Server (NTRS)
Gedney, Stephen D.; Lansing, Faiza
1993-01-01
The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm lies in the fact that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. This generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
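For orientation, a minimal one-dimensional standard Yee FDTD loop in Python is given below to show the leapfrog time-marching of E and H on staggered (primary/dual) grids; it illustrates the discrete Faraday/Ampere updates only, in normalized units, and is not the unstructured generalized Yee algorithm described above.

import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)            # E field on primary grid points
hy = np.zeros(nz - 1)        # H field on staggered (dual) points
c = 0.5                      # Courant number (c0 * dt / dz), normalized units

for n in range(nt):
    hy += c * (ez[1:] - ez[:-1])                     # Faraday's law update
    ez[1:-1] += c * (hy[1:] - hy[:-1])               # Ampere's law update
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

print(float(ez.max()))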
Biological production models as elements of coupled, atmosphere-ocean models for climate research
NASA Technical Reports Server (NTRS)
Platt, Trevor; Sathyendranath, Shubha
1991-01-01
Process models of phytoplankton production are discussed with respect to their suitability for incorporation into global-scale numerical ocean circulation models. Exact solutions are given for integrals of analytic, wavelength-independent models of primary production over the mixed layer and over the day. Within this class of model, the bias incurred by using a triangular approximation (rather than a sinusoidal one) to the variation of surface irradiance through the day is computed. Efficient computation algorithms are given for the nonspectral models. More exact calculations require a spectrally sensitive treatment. Such models exist but must be integrated numerically over depth and time. For these integrations, resolution in wavelength, depth, and time is considered and recommendations are made for efficient computation. The extrapolation of the one-(spatial)-dimension treatment to large horizontal scale is discussed.
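A small numerical sketch of the bias question follows: daily water-column production is integrated with a saturating P-I curve under a sinusoidal versus a triangular surface-irradiance day. The photosynthetic parameters, attenuation coefficient, and noon irradiance are placeholder assumptions, not the analytic solutions of the paper.

import numpy as np

def daily_production(surface_irr, pb_max=5.0, alpha=0.1, chl=1.0, kd=0.1, z_max=50.0):
    # Integrate P = chl * Pb_max * (1 - exp(-alpha * I(z,t) / Pb_max)) over depth and day.
    z = np.linspace(0.0, z_max, 200)
    dz = z[1] - z[0]
    dt = 1.0 / len(surface_irr)                  # fraction of daylength per step
    total = 0.0
    for i0 in surface_irr:
        iz = i0 * np.exp(-kd * z)                # light attenuation with depth
        p = chl * pb_max * (1.0 - np.exp(-alpha * iz / pb_max))
        total += p.sum() * dz * dt
    return total

t = np.linspace(0.0, 1.0, 97)                    # normalized daylength
sinusoid = 1500.0 * np.sin(np.pi * t)            # noon maximum 1500 (placeholder units)
triangle = 1500.0 * (1.0 - np.abs(2.0 * t - 1.0))
print(daily_production(sinusoid), daily_production(triangle))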
Sea Ice Mass Reconciliation Exercise (SIMRE) for altimetry derived sea ice thickness data sets
NASA Astrophysics Data System (ADS)
Hendricks, S.; Haas, C.; Tsamados, M.; Kwok, R.; Kurtz, N. T.; Rinne, E. J.; Uotila, P.; Stroeve, J.
2017-12-01
Satellite altimetry is the primary remote sensing data source for retrieval of Arctic sea-ice thickness. Observational data sets are available from current and previous missions, namely ESA's Envisat and CryoSat as well as NASA ICESat. In addition, freeboard results have been published from the earlier ESA ERS missions and candidates for new data products are the Sentinel-3 constellation, the CNES AltiKa mission and NASA laser altimeter successor ICESat-2. With all the different aspects of sensor type and orbit configuration, all missions have unique properties. In addition, thickness retrieval algorithms have evolved over time and data centers have developed different strategies. These strategies may vary in choice of auxiliary data sets, algorithm parts and product resolution and masking. The Sea Ice Mass Reconciliation Exercise (SIMRE) is a project by the sea-ice radar altimetry community to bridge the challenges of comparing data sets across missions and algorithms. The ESA Arctic+ research program facilitates this project with the objective to collect existing data sets and to derive a reconciled estimate of Arctic sea ice mass balance. Starting with CryoSat-2 products, we compare results from different data centers (UCL, AWI, NASA JPL & NASA GSFC) at full resolution along selected orbits with independent ice thickness estimates. Three regions representative of first-year ice, multiyear ice and mixed ice conditions are used to compare the difference in thickness and thickness change between products over the seasonal cycle. We present first results and provide an outline for the further development of SIMRE activities. The methodology for comparing data sets is designed to be extendible and the project is open to contributions by interested groups. Model results of sea ice thickness will be added in a later phase of the project to extend the scope of SIMRE beyond EO products.
A standard deviation selection in evolutionary algorithm for grouper fish feed formulation
NASA Astrophysics Data System (ADS)
Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul
2016-10-01
Malaysia is one of the major fishery-producing countries due to its equatorial location. Grouper is a potentially lucrative market for the country owing to its desirable taste, high demand, and high price. However, the supply of grouper from the wild catch is still insufficient to meet demand. Therefore, there is a need to farm grouper to cater to the market demand. In order to farm grouper, prior knowledge of the proper nutrients needed is required, because no exact data are available. Therefore, in this study, primary and secondary data were collected despite the limited number of related papers, and 30 samples were investigated using standard deviation selection in an evolutionary algorithm. This study would thus unlock frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in the evolutionary algorithm is applicable: a feasible, low-fitness solution can be obtained quickly. This fitness can be further refined to minimize the cost of farming grouper.
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gains over the sum-product algorithm at a BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor than the sum-product algorithm once Eb/No exceeds 1.8 dB.
Integrand-level reduction of loop amplitudes by computational algebraic geometry methods
NASA Astrophysics Data System (ADS)
Zhang, Yang
2012-09-01
We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts, via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, using the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2 ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.
Minimization of Delay Costs in the Realization of Production Orders in Two-Machine System
NASA Astrophysics Data System (ADS)
Dylewski, Robert; Jardzioch, Andrzej; Dworak, Oliver
2018-03-01
The article presents a new algorithm that determines the optimal scheduling of production orders in a two-machine system based on the minimum cost of order delays. The formulated algorithm uses the branch and bound method and is a generalisation of an algorithm for determining the sequence of production orders with the minimal sum of delays. To illustrate the proposed algorithm, the article contains examples accompanied by graphical solution trees. Research analysing the utility of the algorithm was conducted; the results demonstrate its usefulness for order scheduling. The algorithm was implemented in Matlab. In addition, studies for different sets of production orders were conducted.
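As a minimal point of reference for the two-machine problem (and written in Python rather than the paper's Matlab), the brute-force sketch below evaluates every permutation with the flow-shop completion-time recurrence and sums a linear delay cost against due dates; the job data and cost form are assumptions, and a real implementation would prune the search with branch and bound rather than enumerate.

from itertools import permutations

# (machine-1 time, machine-2 time, due date, delay cost per time unit) - placeholder jobs
jobs = {"A": (3, 2, 6, 4), "B": (2, 4, 7, 2), "C": (4, 1, 9, 3)}

def delay_cost(order):
    t1 = t2 = cost = 0
    for j in order:
        p1, p2, due, penalty = jobs[j]
        t1 += p1                       # machine 1 finishes job j
        t2 = max(t2, t1) + p2          # machine 2 starts once job j and the machine are free
        cost += penalty * max(0, t2 - due)
    return cost

best = min(permutations(jobs), key=delay_cost)
print(best, delay_cost(best))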
Evaluation of a 3-Year Time Series of Daily Actual Evapotranspiration over the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Faivre, R.; Menenti, M.
2016-08-01
The estimation of turbulent fluxes is of primary interest for hydrological and climatological studies. The use of optical remote sensing data in the VNIR and TIR domains has already proved to allow the parameterization of the surface energy balance, leading to many algorithms. Their use over arid, high-elevation areas requires detailed characterisation of key surface physical properties and of the atmospheric state at a reference level. Satellite products acquired over the Tibetan Plateau and simulation results delivered in the frame of the CEOP-AEGIS project provide incentives for a regular analysis at medium scale. This work aims at evaluating the use of Feng-Yun 2 series and MODIS data (VNIR and TIR) for daily mapping of land surface evapotranspiration (ET) based on the SEBI algorithm, over the whole Tibetan Plateau (Faivre, 2014). An evaluation is performed over several reference sites set up across the Tibetan Plateau.
Ballenger, James C.; Davidson, Jonathan R. T.; Lecrubier, Yves; Nutt, David J.
2001-04-01
The International Consensus Group on Depression and Anxiety has held 7 meetings over the last 3 years that focused on depression and specific anxiety disorders. During the course of the meeting series, a number of common themes have developed. At the last meeting of the Consensus Group, we reviewed these areas of commonality across the spectrum of depression and anxiety disorders. With the aim of improving the recognition and management of depression and anxiety in the primary care setting, we developed an algorithm that is presented in this article. We attempted to balance currently available scientific knowledge about the treatment of these disorders and to reformat it to provide an acceptable algorithm that meets the practical aspects of recognizing and treating these disorders in primary care.
In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.
Cario, H
2005-03-01
Polycythemias or erythrocytoses in childhood and adolescence are very rare. Systematic data on the clinical presentation and laboratory evaluations as well as on treatment regimens are sparse. The diagnostic program in absolute erythrocytosis includes extensive clinical, hematological, biochemical, and molecular biological examinations which should be applied following a stepwise algorithm. Absolute erythrocytoses are usually subdivided into primary and secondary forms. Primary erythrocytosis is a condition in which the erythropoietic compartment is expanding independently of extrinsic influences or by responding inadequately to them. Primary erythrocytoses include primary familial and congenital polycythemia (PFCP) due to mutations of the erythropoietin (Epo) receptor gene and the myeloproliferative disorder polycythemia vera. Secondary erythrocytoses are driven by hormonal factors (predominantly by Epo) extrinsic to the erythroid compartment. The increased Epo secretion may represent either a physiologic response to tissue hypoxia, an abnormal autonomous Epo production, or a dysregulation of the oxygen-dependent Epo synthesis. Congenital secondary erythrocytoses are caused, e.g., by hemoglobin variants with increased oxygen affinity, by 2,3-bisphosphoglycerate deficiency, or by mutations in the von Hippel-Lindau gene associated with a disturbed oxygen-dependent regulation of Epo synthesis.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; Lin, Lin; Shao, Meiyue
We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.
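A compact sketch of the plain symmetric Lanczos iteration in Python follows (dense matrix, full reorthogonalization omitted); the Ritz values of the tridiagonal matrix approximate the spectrum. The random test matrix is an assumption, and this is not the TDDFT product-form implementation described above.

import numpy as np

def lanczos(A, k, rng=np.random.default_rng(0)):
    n = A.shape[0]
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    alphas, betas, q_prev, beta = [], [], np.zeros(n), 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    # Tridiagonal matrix whose eigenvalues (Ritz values) approximate those of A.
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

M = np.random.default_rng(1).normal(size=(60, 60))
A = (M + M.T) / 2                       # symmetric test matrix
ritz, exact = lanczos(A, 20), np.linalg.eigvalsh(A)
print(ritz[0], exact[0], ritz[-1], exact[-1])   # extreme Ritz values converge first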
Subarachnoid hemorrhage admissions retrospectively identified using a prediction model
McIntyre, Lauralyn; Fergusson, Dean; Turgeon, Alexis; dos Santos, Marlise P.; Lum, Cheemun; Chassé, Michaël; Sinclair, John; Forster, Alan; van Walraven, Carl
2016-01-01
Objective: To create an accurate prediction model using variables collected in widely available health administrative data records to identify hospitalizations for primary subarachnoid hemorrhage (SAH). Methods: A previously established complete cohort of consecutive primary SAH patients was combined with a random sample of control hospitalizations. Chi-square recursive partitioning was used to derive and internally validate a model to predict the probability that a patient had primary SAH (due to aneurysm or arteriovenous malformation) using health administrative data. Results: A total of 10,322 hospitalizations with 631 having primary SAH (6.1%) were included in the study (5,122 derivation, 5,200 validation). In the validation patients, our recursive partitioning algorithm had a sensitivity of 96.5% (95% confidence interval [CI] 93.9–98.0), a specificity of 99.8% (95% CI 99.6–99.9), and a positive likelihood ratio of 483 (95% CI 254–879). In this population, patients meeting criteria for the algorithm had a probability of 45% of truly having primary SAH. Conclusions: Routinely collected health administrative data can be used to accurately identify hospitalized patients with a high probability of having a primary SAH. Upon further validation, this algorithm may provide an easy and accurate method for creating cohorts of primary SAH due to either ruptured aneurysm or arteriovenous malformation. PMID:27629096
Use of treatment algorithms for depression.
Trivedi, Madhukar H; Fava, Maurizio; Marangell, Lauren B; Osser, David N; Shelton, Richard C
2006-01-01
Depression continues to be a treatment challenge for many physicians-psychiatrists and primary care physicians alike-in part because of the nature of the disorder, but also because of the wide variety of medications and other treatments available, each with a distinct efficacy and safety profile. One way of negotiating treatment decisions is to use treatment guidelines and algorithms. This Commentary, which appears in the September 2006 issue of The Journal of Clinical Psychiatry (2006;67:1458-1465), provides the primary care clinician with insight into the pros and cons of using treatment algorithms to guide the treatment of depression. -Larry Culpepper, M.D.
Current Status of Japanese Global Precipitation Measurement (GPM) Research Project
NASA Astrophysics Data System (ADS)
Kachi, Misako; Oki, Riko; Kubota, Takuji; Masaki, Takeshi; Kida, Satoshi; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2013-04-01
The Global Precipitation Measurement (GPM) mission is led by the Japan Aerospace Exploration Agency (JAXA) and the National Aeronautics and Space Administration (NASA) in collaboration with many international partners, who will provide a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), developed by JAXA and the National Institute of Information and Communications Technology (NICT), and the GPM Microwave Imager (GMI), developed by NASA. The GPM Core Observatory is scheduled to be launched in early 2014. JAXA also provides the Global Change Observation Mission 1st - Water (GCOM-W1), named "SHIZUKU," as one of the constellation satellites. The SHIZUKU satellite was launched on 18 May 2012 from JAXA's Tanegashima Space Center, and public release of data from the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board SHIZUKU was planned for January 2013 (Level 1 products) and May 2013 (Level 2 products, including precipitation). The Japanese GPM research project conducts scientific activities on algorithm development, ground validation, and application research, including production of research products. In addition, we promote collaborative studies in Japan and other Asian countries, and public relations activities to extend the community of potential users of satellite precipitation products. In the pre-launch phase, most of our activities focus on algorithm development and the related ground validation. As the GPM standard products, JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and the DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map product as a national product to distribute an hourly, 0.1-degree horizontal resolution rainfall map. All standard algorithms, including the Japan-US joint algorithms, will be reviewed by the Japan-US Joint Precipitation Measuring Mission (PMM) Science Team (JPST) before release. The DPR Level 2 algorithm is being developed by the DPR Algorithm Team led by Japan, which works under the NASA-JAXA Joint Algorithm Team. The Level 2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. At-launch code was developed in December 2012. In addition, JAXA and NASA have provided synthetic DPR L1 data, and tests have been performed using them. The Japanese Global Rainfall Map algorithm for the GPM mission has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007. The GSMaP near-real-time and reanalysis versions have been in operation at JAXA, with browse images and binary data available at the GSMaP web site (http://sharaku.eorc.jaxa.jp/GSMaP/). The GSMaP algorithm for GPM is developed in collaboration with the AMSR2 standard precipitation algorithm, and their validation studies are closely related. As the JAXA GPM product, we will provide a 0.1-degree grid, hourly product for standard and near-real-time processing.
Outputs will include hourly rainfall, gauge-calibrated hourly rainfall, and several quality flags (satellite information, time information, and gauge quality) over the global area from 60°S to 60°N. The at-launch code of GSMaP for GPM is under development and will be delivered to the JAXA GPM Mission Operation System by April 2013. It will include several updates to the microwave imager and sounder algorithms and databases, and the introduction of rain-gauge correction.
NASA Astrophysics Data System (ADS)
Perez, D.; Phinn, S. R.; Roelfsema, C. M.; Shaw, E. C.; Johnston, L.; Iguel, J.; Camacho, R.
2017-12-01
Primary production and calcification are important to measure and monitor over time because of their fundamental roles in carbon cycling and the accretion of habitat structure in reef ecosystems. However, monitoring biogeochemical processes in coastal environments has been difficult due to complications in separating the contributions of biological productivity and other sources (sediment, dissolved organics, etc.) to water optical properties. This complicates the application of algorithms developed for satellite image data from open ocean conditions and requires alternative approaches. This project applied a cross-disciplinary approach, transferring established methods for monitoring productivity in terrestrial environments to coral reef systems. The availability of regularly acquired, high spatial resolution (<5 m pixels), multispectral satellite imagery has improved mapping and monitoring capabilities for shallow marine environments such as seagrass and coral reefs. There is potential to further develop optical models for remote sensing applications to estimate and monitor reef system processes, such as primary productivity and calcification. This project collected field measurements of spectral absorptance and of primary productivity and calcification rates for two reef systems: Heron Reef, southern Great Barrier Reef, and Saipan Lagoon, Commonwealth of the Northern Mariana Islands. Field data were used to parameterize a light-use efficiency (LUE) model, estimating productivity from absorbed photosynthetically active radiation. The LUE model has been successfully applied in terrestrial environments for the past 40 years and could potentially be used in shallow marine environments. The model was used in combination with a map of benthic community composition produced from object-based image analysis of WorldView-2 imagery. Light-use efficiency was measured for the functional groups coral, algae, seagrass, and sediment. However, LUE was overestimated for sediment, which led to overestimation of productivity for the mapped area. This was due to differences in the spatial and temporal resolution of the field data used in the model. The limitations and application of the LUE model to coral reef environments will be presented.
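The light-use efficiency idea itself is compact: productivity is modelled as the product of absorbed photosynthetically active radiation and an efficiency term. The sketch below illustrates only that structure; the functional-group absorptance and efficiency values are placeholders, not the field measurements from Heron Reef or Saipan Lagoon.

```python
def gpp_light_use_efficiency(par, fapar, epsilon):
    """Light-use efficiency (LUE) model: productivity is proportional to
    absorbed photosynthetically active radiation (APAR = fAPAR * PAR).
    Units are illustrative: mol photons m-2 d-1 in, g C m-2 d-1 out."""
    apar = fapar * par
    return epsilon * apar

# Hypothetical benthic functional groups with assumed absorptance (fAPAR)
# and LUE values; placeholders, not the study's measured values.
par = 40.0                       # incident PAR reaching the benthos
communities = {"coral": (0.6, 0.25), "algae": (0.7, 0.30), "sediment": (0.3, 0.05)}
for name, (fapar, eps) in communities.items():
    print(name, round(gpp_light_use_efficiency(par, fapar, eps), 2))
```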
A new algorithm for finding survival coefficients employed in reliability equations
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Flehinger, B. J.
1973-01-01
Product reliabilities are predicted from past failure rates and a reasonable estimate of future failure rates. The algorithm is used to calculate the probability that the product will function correctly. It sums, over all possible ways in which the product can survive, the probability of each survival pattern multiplied by the number of permutations for that pattern.
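One common reading of this summation, illustrated below for identical components in a k-out-of-n arrangement, is to weight the probability of each survival pattern by the number of ways that pattern can occur. The k-of-n structure and the numbers are assumptions for illustration, not taken from the report.

```python
from math import comb

def k_of_n_reliability(n, k_required, p_survive):
    """Probability that at least k of n identical components survive.
    Sums, over each survival count, the probability of one such pattern
    times the number of arrangements (permutations) of that pattern."""
    total = 0.0
    for k in range(k_required, n + 1):
        pattern_prob = p_survive**k * (1.0 - p_survive)**(n - k)
        total += comb(n, k) * pattern_prob
    return total

# Hypothetical example: a unit works if at least 3 of its 5 components survive.
print(k_of_n_reliability(5, 3, 0.9))   # ~0.9914
```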
17 CFR 41.27 - Prohibition of dual trading in security futures products by floor brokers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... predetermined algorithm, a transaction for the same security futures product on the same designated contract... place advantage or the ability to override a predetermined algorithm must submit an appropriate rule... predetermined algorithm from trading a security futures product for accounts in which these same participants...
17 CFR 41.27 - Prohibition of dual trading in security futures products by floor brokers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... predetermined algorithm, a transaction for the same security futures product on the same designated contract... place advantage or the ability to override a predetermined algorithm must submit an appropriate rule... predetermined algorithm from trading a security futures product for accounts in which these same participants...
17 CFR 41.27 - Prohibition of dual trading in security futures products by floor brokers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... predetermined algorithm, a transaction for the same security futures product on the same designated contract... place advantage or the ability to override a predetermined algorithm must submit an appropriate rule... predetermined algorithm from trading a security futures product for accounts in which these same participants...
Underlying-event sensitive observables in Drell–Yan production using GENEVA
Alioli, Simone; Bauer, Christian W.; Guns, Sam; ...
2016-11-09
We present an extension of the Geneva Monte Carlo framework to include multiple parton interactions (MPI) provided by Pythia8. This allows us to obtain predictions for underlying-event sensitive measurements in Drell–Yan production, in conjunction with Geneva's fully differential NNLO calculation, NNLL' resummation for the 0-jet resolution variable (beam thrust), and NLL resummation for the 1-jet resolution variable. We describe the interface with the parton-shower algorithm and MPI model of Pythia8, which preserves both the precision of the partonic N-jet cross sections in Geneva as well as the shower accuracy and good description of soft hadronic physics of Pythia8. We present results for several underlying-event sensitive observables and compare to data from ATLAS and CMS as well as to standalone Pythia8 predictions. This includes a comparison with the recent ATLAS measurement of the beam thrust spectrum, which provides a potential avenue to fully disentangle the physical effects from the primary hard interaction, primary soft radiation, multiple parton interactions, and nonperturbative hadronization.
Training product unit neural networks with genetic algorithms
NASA Technical Reports Server (NTRS)
Janson, D. J.; Frenzel, J. F.; Thelen, D. C.
1991-01-01
The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
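A minimal sketch of the two combined ideas follows: a product unit that raises each input to a learned power, and a simple genetic algorithm (tournament selection, blend crossover, Gaussian mutation) that evolves the exponents. The GA operators, parameters, and the toy target function are assumptions, not the configuration used in the cited work.

```python
import numpy as np

def product_unit(x, w):
    """A product unit computes prod_i x_i ** w_i (inputs assumed positive)."""
    return np.prod(np.power(x, w))

def fitness(w, samples, targets):
    preds = np.array([product_unit(x, w) for x in samples])
    return -np.mean((preds - targets) ** 2)          # higher is better

def evolve(samples, targets, n_weights, pop=50, gens=200, rng=None):
    """Minimal genetic algorithm: tournament selection, blend crossover,
    Gaussian mutation.  Illustrative only."""
    rng = rng or np.random.default_rng(0)
    population = rng.normal(0, 1, size=(pop, n_weights))
    for _ in range(gens):
        scores = np.array([fitness(w, samples, targets) for w in population])
        new_pop = []
        for _ in range(pop):
            i, j = rng.integers(0, pop, 2)
            parent_a = population[i] if scores[i] > scores[j] else population[j]
            k, l = rng.integers(0, pop, 2)
            parent_b = population[k] if scores[k] > scores[l] else population[l]
            alpha = rng.random()
            child = alpha * parent_a + (1 - alpha) * parent_b    # crossover
            child += rng.normal(0, 0.05, n_weights)              # mutation
            new_pop.append(child)
        population = np.array(new_pop)
    scores = np.array([fitness(w, samples, targets) for w in population])
    return population[np.argmax(scores)]

# Toy target: y = x1**2 / x2, so the evolved exponents should approach [2, -1].
rng = np.random.default_rng(1)
X = rng.uniform(0.5, 2.0, size=(40, 2))
y = X[:, 0] ** 2 / X[:, 1]
best_w = evolve(list(X), y, n_weights=2)
```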
Lunar BRDF Correction of Suomi-NPP VIIRS Day/Night Band Time Series Product
NASA Astrophysics Data System (ADS)
Wang, Z.; Roman, M. O.; Kalb, V.; Stokes, E.; Miller, S. D.
2015-12-01
Since the first-light images from the Suomi-NPP VIIRS low-light visible Day/Night Band (DNB) sensor were received in November 2011, the NASA Suomi-NPP Land Science Investigator Processing System (SIPS) has focused on evaluating this new capability for quantitative science applications, as well as developing and testing refined algorithms to meet operational and Land science research needs. While many promising DNB applications have been developed since the Suomi-NPP launch, most studies to-date have been limited by the traditional qualitative image display and spatial-temporal aggregated statistical analysis methods inherent in current heritage algorithms. This has resulted in strong interest for a new generation of science-quality products that can be used to monitor both the magnitude and signature of nighttime phenomena and anthropogenic sources of light emissions. In one particular case study, Román and Stokes (2015) demonstrated that tracking daily dynamic DNB radiances can provide valuable information about the character of the human activities and behaviors that influence energy, consumption, and vulnerability. Here we develop and evaluate a new suite of DNB science-quality algorithms that can exclude a primary source of background noise: i.e., the Lunar BRDF (Bidirectional Reflectance Distribution Function) effect. Every day, the operational NASA Land SIPS DNB algorithm makes use of 16 days worth of DNB-derived surface reflectances (SR) (based on the heritage MODIS SR algorithm) and a semiempirical kernel-driven bidirectional reflectance model to determine a global set of parameters describing the BRDF of the land surface. The nighttime period of interest is heavily weighted as a function of observation coverage. These gridded parameters, combined with Miller and Turner's [2009] top-of-atmosphere spectral irradiance model, are then used to determine the DNB's lunar radiance contribution at any point in time and under specific illumination conditions.
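The kernel-driven BRDF fit at the heart of such a correction is a linear least-squares problem once the angular kernels are computed. The sketch below assumes the kernel values (for example Ross-Thick/Li-Sparse) are already available for each observation geometry and uses synthetic numbers; it is not the operational NASA Land SIPS code.

```python
import numpy as np

def fit_kernel_brdf(reflectances, k_vol, k_geo):
    """Fit the linear kernel-driven BRDF model
        R ~= f_iso + f_vol * K_vol + f_geo * K_geo
    by least squares over a multi-day window of observations.  The kernel
    values are assumed precomputed for each observation geometry."""
    G = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(G, reflectances, rcond=None)
    return coeffs                                  # f_iso, f_vol, f_geo

def predict_reflectance(coeffs, k_vol, k_geo):
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol + f_geo * k_geo

# Hypothetical 16 observations with synthetic, geometry-dependent kernel values.
rng = np.random.default_rng(3)
k_vol, k_geo = rng.uniform(-0.2, 0.6, 16), rng.uniform(-1.5, 0.0, 16)
true = predict_reflectance((0.25, 0.10, 0.05), k_vol, k_geo)
coeffs = fit_kernel_brdf(true + rng.normal(0, 0.005, 16), k_vol, k_geo)
```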
Development of GK-2A cloud optical and microphysical properties retrieval algorithm
NASA Astrophysics Data System (ADS)
Yang, Y.; Yum, S. S.; Um, J.
2017-12-01
Cloud and aerosol radiative forcing is known to be one of the largest uncertainties in climate change prediction. To reduce this uncertainty, remote sensing observations of cloud radiative and microphysical properties have been used since the 1970s, and the corresponding remote sensing techniques and instruments have been developed. As part of this effort, Geo-KOMPSAT-2A (Geostationary Korea Multi-Purpose Satellite-2A, GK-2A) will be launched in 2018. On GK-2A, the Advanced Meteorological Imager (AMI) is the primary instrument, with 3 visible, 3 near-infrared, and 10 infrared channels. To retrieve optical and microphysical properties of clouds using AMI measurements, a preliminary version of the new cloud retrieval algorithm for GK-2A was developed and several validation tests were conducted. This algorithm retrieves cloud optical thickness (COT), cloud effective radius (CER), liquid water path (LWP), and ice water path (IWP), so we named it the Daytime Cloud Optical thickness, Effective radius and liquid and ice Water path (DCOEW) algorithm. The DCOEW uses cloud reflectance at visible and near-infrared channels as input data. An optimal estimation (OE) approach, which requires appropriate a priori values and measurement error information, is used to retrieve COT and CER. LWP and IWP are calculated using previously determined empirical relationships between COT/CER and cloud water path. To validate the retrieved cloud properties, we compared DCOEW output data with other operational satellite data. For COT and CER validation, we used two different data sets. To compare with algorithms that use cloud reflectance at visible and near-IR channels as input data, the MODIS MYD06 cloud product was selected. For validation against cloud products based on microwave measurements, COT (2B-TAU) and CER (2C-ICE) data retrieved from the CloudSat cloud profiling radar (W-band, 94 GHz) were used. For cloud water path validation, AMSR-2 Level-3 cloud liquid water data were used. Detailed results will be shown at the conference.
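For readers unfamiliar with optimal estimation, a single Gauss-Newton update of a Rodgers-style OE retrieval looks like the sketch below, updating a two-element state (for example log COT and log CER). The forward model, Jacobian and covariances are generic placeholders rather than the DCOEW implementation.

```python
import numpy as np

def oe_step(x, x_a, y, forward, jacobian, S_a_inv, S_e_inv):
    """One Gauss-Newton step of an optimal-estimation retrieval toward the
    minimum of J(x) = (y - F(x))^T Se^-1 (y - F(x)) + (x - x_a)^T Sa^-1 (x - x_a)."""
    K = jacobian(x)
    hessian = S_a_inv + K.T @ S_e_inv @ K
    gradient = K.T @ S_e_inv @ (y - forward(x)) - S_a_inv @ (x - x_a)
    return x + np.linalg.solve(hessian, gradient)

# Toy linear forward model (purely illustrative).
A = np.array([[0.8, 0.1], [0.2, 0.9], [0.5, 0.5]])
forward = lambda x: A @ x
jacobian = lambda x: A
x_a = np.zeros(2)                       # a priori state
y = forward(np.array([1.0, 2.0])) + 0.01
x = x_a.copy()
for _ in range(5):                      # iterate until the update is negligible
    x = oe_step(x, x_a, y, forward, jacobian, np.eye(2) * 0.01, np.eye(3) * 100.0)
```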
The GLAS Science Algorithm Software (GSAS) Detailed Design Document Version 6. Volume 16
NASA Technical Reports Server (NTRS)
Lee, Jeffrey E.
2013-01-01
The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document describes the detailed design of the GLAS Science Algorithm Software (GSAS). The GSAS is used to create the ICESat GLAS standard data products, which are distributed by the National Snow and Ice Data Center (NSIDC). The document contains descriptions, flow charts, data flow diagrams, and structure charts for each major component of the GSAS. The purpose of this document is to present the detailed design of the GSAS. It is intended as a reference source to assist the maintenance programmer in making changes that fix or enhance the documented software.
Atmospheric products from the Upper Atmosphere Research Satellite (UARS)
NASA Technical Reports Server (NTRS)
Ahmad, Suraiya P.; Johnson, James E.; Jackman, Charles H.
2003-01-01
This paper provides information on the products available at the NASA Goddard Earth Sciences (GES) Distributed Active Archive Center (DAAC) from the Upper Atmosphere Research Satellite (UARS) mission. The GES DAAC provides measurements from the primary UARS mission, which extended from launch in September 1991 through September 2001. The ten instruments aboard UARS provide measurements of atmospheric trace gas species, dynamical variables, solar irradiance input, and particle energy flux. All standard Level 3 UARS products from all ten instruments are offered free to the public and science user community. The Level 3 data are geophysical parameters, which have been transformed into a common format and equally spaced along the measurement trajectory. The UARS data have been reprocessed several times over the years following improvements to the processing algorithms. The UARS data offered from the GES DAAC are the latest versions of each instrument. The UARS data may be accessed through the GES DAAC website at
Moreno-Peral, Patricia; Luna, Juan de Dios; Marston, Louise; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Muñoz-Bravo, Carlos; Bellón, Juan Ángel
2014-01-01
Background There are no risk algorithms for the onset of anxiety syndromes at 12 months in primary care. We aimed to develop and validate internally a risk algorithm to predict the onset of anxiety syndromes at 12 months. Methods A prospective cohort study with evaluations at baseline, 6 and 12 months. We measured 39 known risk factors and used multilevel logistic regression and inverse probability weighting to build the risk algorithm. Our main outcome was generalized anxiety, panic and other non-specific anxiety syndromes as measured by the Primary Care Evaluation of Mental Disorders, Patient Health Questionnaire (PRIME-MD-PHQ). We recruited 3,564 adult primary care attendees without anxiety syndromes from 174 family physicians and 32 health centers in 6 Spanish provinces. Results The cumulative 12-month incidence of anxiety syndromes was 12.2%. The predictA-Spain risk algorithm included the following predictors of anxiety syndromes: province; sex (female); younger age; taking medicines for anxiety, depression or stress; worse physical and mental quality of life (SF-12); dissatisfaction with paid and unpaid work; perception of financial strain; and the interactions sex*age, sex*perception of financial strain, and age*dissatisfaction with paid work. The C-index was 0.80 (95% confidence interval = 0.78–0.83) and the Hedges' g = 1.17 (95% confidence interval = 1.04–1.29). The Copas shrinkage factor was 0.98 and calibration plots showed an accurate goodness of fit. Conclusions The predictA-Spain risk algorithm is valid to predict anxiety syndromes at 12 months. Although external validation is required, the predictA-Spain is available for use as a predictive tool in the prevention of anxiety syndromes in primary care. PMID:25184313
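The predictA-Spain algorithm is, at heart, a (multilevel) logistic risk score over the predictors listed above. The abstract does not give the fitted coefficients, so the sketch below uses purely hypothetical coefficient values and a hypothetical attendee, only to show how such a score is evaluated.

```python
import math

def logistic_risk(features, coefficients, intercept):
    """Risk from a logistic model: p = 1 / (1 + exp(-(b0 + sum_j b_j * x_j)))."""
    z = intercept + sum(coefficients[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients and one hypothetical attendee (illustration only).
coefs = {"female": 0.4, "age_per_decade": -0.15, "anxiolytic_use": 0.9,
         "sf12_mental_low": 0.6, "financial_strain": 0.5,
         "female_x_financial_strain": 0.3}          # example interaction term
person = {"female": 1, "age_per_decade": 4.2, "anxiolytic_use": 0,
          "sf12_mental_low": 1, "financial_strain": 1,
          "female_x_financial_strain": 1 * 1}
print(round(logistic_risk(person, coefs, intercept=-2.6), 2))
```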
Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis
NASA Technical Reports Server (NTRS)
Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.
1999-01-01
This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.
NASA Astrophysics Data System (ADS)
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.
2016-07-01
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
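The matrix product operator language mentioned here reduces many DMRG operations to local tensor contractions. As a minimal illustration, the sketch below applies an MPO to an MPS site by site without any compression; the tensor-shape convention and the identity-operator check are toy choices, not the ab initio DMRG sweep itself.

```python
import numpy as np

def apply_mpo_to_mps(mpo, mps):
    """Apply a matrix product operator (MPO) to a matrix product state (MPS).
    mpo[i] has shape (wl, s, k, wr): operator bonds plus output/input physical
    indices; mps[i] has shape (al, k, ar).  The result has bond dimensions
    wl*al and wr*ar; no truncation is performed in this sketch."""
    out = []
    for W, A in zip(mpo, mps):
        wl, d, _, wr = W.shape
        al, _, ar = A.shape
        B = np.einsum('wskv,akb->wasvb', W, A)      # contract the ket index
        out.append(B.reshape(wl * al, d, wr * ar))
    return out

def to_vector(mps):
    """Contract an MPS (with trivial outer bonds) into a full state vector."""
    v = mps[0]
    for t in mps[1:]:
        v = np.einsum('...a,abc->...bc', v, t)
    return v.reshape(-1)

# Tiny check on a 3-site chain: an identity MPO leaves the state unchanged.
d, D = 2, 3
rng = np.random.default_rng(0)
mps = [rng.standard_normal((1, d, D)),
       rng.standard_normal((D, d, D)),
       rng.standard_normal((D, d, 1))]
ident = [np.eye(d).reshape(1, d, d, 1)] * 3
assert np.allclose(to_vector(apply_mpo_to_mps(ident, mps)), to_vector(mps))
```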
Discrete geometric analysis of message passing algorithm on graphs
NASA Astrophysics Data System (ADS)
Watanabe, Yusuke
2010-04-01
We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structure is represented by a hypergraph called a factor graph. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computation of marginal distributions and of the normalization constant is often required, but is computationally intractable in general. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. has no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and can exhibit oscillatory and non-convergent behavior. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for analysis of the LBP algorithm, connecting properties of the graph with properties of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems, including the (non)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
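As background, a bare-bones sum-product LBP on a pairwise model looks like the sketch below (synchronous updates, no damping). The data structures and the three-node cycle example are illustrative choices, not taken from the thesis.

```python
import numpy as np

def loopy_bp(node_pot, edge_pot, edges, n_iter=50):
    """Sum-product loopy belief propagation on a pairwise model.
    node_pot[i] : (k,) nonnegative potential for variable i
    edge_pot[e] : (k, k) potential for undirected edge e = (i, j)
    Returns approximate marginals (exact if the graph is a tree)."""
    neighbours = {v: set() for v in node_pot}
    for (i, j) in edges:
        neighbours[i].add(j); neighbours[j].add(i)
    pot = {**{(i, j): p for (i, j), p in zip(edges, edge_pot)},
           **{(j, i): p.T for (i, j), p in zip(edges, edge_pot)}}
    msgs = {(i, j): np.ones(len(node_pot[j])) / len(node_pot[j]) for (i, j) in pot}
    for _ in range(n_iter):
        new = {}
        for (i, j) in msgs:
            prod = node_pot[i].copy()
            for k in neighbours[i] - {j}:
                prod *= msgs[(k, i)]
            m = pot[(i, j)].T @ prod            # marginalise over x_i
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = {}
    for i in node_pot:
        b = node_pot[i].copy()
        for k in neighbours[i]:
            b *= msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs

# Example: three binary variables on a cycle (a loopy graph).
nodes = {0: np.array([0.6, 0.4]), 1: np.array([0.5, 0.5]), 2: np.array([0.3, 0.7])}
edges = [(0, 1), (1, 2), (2, 0)]
pots = [np.array([[1.2, 0.8], [0.8, 1.2]])] * 3      # mildly attractive coupling
print(loopy_bp(nodes, pots, edges))
```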
Moreno-Valenzuela, Javier; González-Hernández, Luis
2011-01-01
In this paper, a new control algorithm for operational space trajectory tracking control of robot arms is introduced. The new algorithm does not require velocity measurement and is based on (1) a primary controller which incorporates an algorithm to obtain synthesized velocity from joint position measurements and (2) a secondary controller which computes the desired joint acceleration and velocity required to achieve operational space motion control. The theory of singularly perturbed systems is crucial for the analysis of the closed-loop system trajectories. In addition, the practical viability of the proposed algorithm is explored through real-time experiments in a two degrees-of-freedom horizontal planar direct-drive arm. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
The primary advantage of the Dynamically Dimensioned Search (DDS) algorithm is that it outperforms many other optimization techniques in both convergence speed and its ability to search for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
Algorithms for the assessment and management of insomnia in primary care
Hilty, Donald; Young, Julie S; Bourgeois, James A; Klein, Sally; Hardin, Kimberly A
2009-01-01
Insomnia is a leading cause of sleep disturbance in primary care practice, affecting >30% of people in the United States, and can result in psychological and physiological consequences. We aim for a focused discussion of some of the underpinnings of insomnia and practical tips for management (eg, algorithms). A PubMed search was conducted using English language papers between 1997–2007, with the terms “sleep,” “insomnia”; “primary care” and “clinics”; “comorbid conditions”; “treatment” and “management.” Sleep, psychiatric, and medical disorders significantly affect sleep, causing patient suffering, potentially worsening other disorders, and increasing the use of primary care services. We provide an outline for practical assessment and treatment of insomnia in primary care, including the strengths and weaknesses of medications. PMID:19936140
NASA Astrophysics Data System (ADS)
Churilova, T.; Suslin, V.
2012-04-01
Satellite observations of ocean color provide a unique opportunity in oceanography to assess the productivity of the sea on different spatial and temporal scales. However, it has been shown that the standard SeaWiFS algorithm generally overestimates summer chlorophyll concentration and underestimates pigment content during the spring phytoplankton bloom in comparison with in situ measurements. Regional algorithms are therefore required, based on bio-optical characteristics typical for the Sea, so that the spectral features of water-leaving radiance can be correctly transformed into chlorophyll a concentration (Chl), light absorption features of suspended and dissolved organic matter (CDM), the downwelling light attenuation coefficient/euphotic zone depth (PAR1%), and the rate of primary synthesis of organic substances (PP). Numerous measurements of the light absorption spectra of phytoplankton, non-algal particles and coloured dissolved organic matter, carried out since 1996 in different seasons and regions of the Black Sea, allowed a parameterization of the light absorption by all optically active components. Taking into account regional peculiarities of the bio-optical parameters, their differences between seasons and between shallow and deep waters, and their depth-dependent variability within the photosynthetic zone, regional spectral models for the estimation of chlorophyll a concentration (Chl Model), colored dissolved and suspended organic matter absorption (CDM Model), downwelling irradiance (PAR Model) and primary production (PP Model) have been developed based on satellite data. Validation tests showed appropriate accuracy of the models. The developed models have been applied to estimate the spatial/temporal variability of chlorophyll a, dissolved organic matter concentration, water transparency, euphotic zone depth and primary production based on SeaWiFS data. Two-week averaged maps of the spatial distribution of these parameters have been composed for the period from 1998 to 2009 (most of them are presented at http://blackseacolor.com/browser3.html). Comparative analysis of the long-term series (since 1998) of these parameters with subsurface water temperature (SST) and solar radiance at the sea surface (PAR-0m) revealed the key factors determining the seasonal and inter-annual variations of Chl, PAR1%, CDM, and PP. The seasonal dynamics of these parameters were more pronounced than the inter-annual variability; the latter was related to climate effects. In the deep-water region, relatively low SST during cold winters forced a more intensive winter-spring phytoplankton bloom. On the north-western shelf, inter-annual variability in river (Danube) runoff, itself related to climate change, determined year-to-year changes in Chl, CDM, PAR1%, and PP.
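For context, the standard (non-regional) chlorophyll inversion that such regional models replace is typically a maximum-band-ratio polynomial of the SeaWiFS OC4 type, sketched below. The coefficients shown are approximately the published global OC4v4 values and are illustrative only; the Black Sea models described here use a different, absorption-based parameterization.

```python
import numpy as np

def oc4_style_chlorophyll(rrs443, rrs490, rrs510, rrs555,
                          coeffs=(0.366, -3.067, 1.930, 0.649, -1.532)):
    """Maximum-band-ratio chlorophyll algorithm of the SeaWiFS OC4 type:
    log10(Chl) is a polynomial in the log of the largest blue/green
    remote-sensing reflectance ratio.  Coefficients are illustrative."""
    r = np.log10(max(rrs443, rrs490, rrs510) / rrs555)
    log_chl = sum(c * r**k for k, c in enumerate(coeffs))
    return 10.0 ** log_chl

print(oc4_style_chlorophyll(0.004, 0.005, 0.005, 0.004))  # mg m-3, toy inputs
```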
A nonlinear bi-level programming approach for product portfolio management.
Ma, Shuang
2016-01-01
Product portfolio management (PPM) is a critical decision-making activity for companies across various industries in today's competitive environment. Traditional studies of the PPM problem have been oriented toward engineering feasibility and marketing, and pay relatively little attention to competitors' actions and competitive relations, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg game-based leader-follower PPM problem and the competitive relations between the players. The primary contribution of this paper is a decision framework and an optimization model to leverage the PPM problem of leader and follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for the leader-follower PPM problem. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.
Dreno, B; Bensadoun, RJ; Humbert, P; Krutmann, J; Luger, T; Triller, R; Rougier, A; Seité, S
2013-01-01
Currently, numerous patients who receive targeted chemotherapy for cancer suffer from disabling skin reactions due to cutaneous toxicity, which is a significant problem for an increasing number of patients and their treating physicians. In addition, using inappropriate personal hygiene products often worsens these otherwise manageable side-effects. Cosmetic products for personal hygiene and lesion camouflage are part of a patients’ well-being and an increasing number of physicians feel that they do not have adequate information to provide effective advice on concomitant cosmetic therapy. Although ample information is available in the literature on pharmaceutical treatment for cutaneous side-effects of chemotherapy, little is available for the concomitant use of dermatological skin-care products with medical treatments. The objective of this consensus study is to provide an algorithm for the appropriate use of dermatological cosmetics in the management of cutaneous toxicities associated with targeted chemotherapy such as epidermal growth factor receptor inhibitors and other monoclonal antibodies. These guidelines were developed by a French and German expert group of dermatologists and an oncologist for oncologists and primary care physicians who manage oncology patients. The information in this report is based on published data and the expert group’s opinion. Due to the current lack of clinical evidence, only a review of published recommendations including suggestions for concomitant cosmetic use was conducted. PMID:23368717
NASA Technical Reports Server (NTRS)
King, Michael D.; Platnick, Steven; Chu, D. Allen; Moody, Eric G.
2001-01-01
MODIS is an earth-viewing cross-track scanning spectroradiometer launched on the Terra satellite in December 1999. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from a polar-orbiting, sun-synchronous platform at an altitude of 705 km, and provides images in 36 spectral bands between 0.415 and 14.235 microns with spatial resolutions of 250 m (two bands), 500 m (five bands) and 1000 m (29 bands). These bands have been carefully selected to enable advanced studies of land, ocean, and atmospheric processes. In this presentation we review the comprehensive set of remote sensing algorithms that have been developed for the remote sensing of atmospheric properties using MODIS data, placing primary emphasis on the principal atmospheric applications of (i) developing a cloud mask for distinguishing clear sky from clouds, (ii) retrieving global cloud radiative and microphysical properties, including cloud top pressure and temperature, effective emissivity, cloud optical thickness, thermodynamic phase, and effective radius, (iii) monitoring tropospheric aerosol optical thickness over the land and ocean and aerosol size distribution over the ocean, (iv) determining atmospheric profiles of moisture and temperature, and (v) estimating column water amount. The physical principles behind the determination of each of these atmospheric products will be described, together with an example of their application using MODIS observations to the east Asian region in Spring 2001. All products are archived into two categories: pixel-level retrievals (referred to as Level-2 products) and global gridded products at a latitude and longitude resolution of 1 degree (Level-3 products). An overview of the MODIS atmosphere algorithms and products, status, validation activities, and early level-2 and -3 results will be presented.
Remote Sensing of Cloud, Aerosol, and Water Vapor Properties from MODIS
NASA Technical Reports Server (NTRS)
King, Michael D.; Platnick, Steven; Menzel, W. Paul; Kaufman, Yoram J.; Ackerman, Steven A.; Tanre, Didier; Gao, Bo-Cai
2001-01-01
MODIS is an earth-viewing cross-track scanning spectroradiometer launched on the Terra satellite in December 1999. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from a polar orbiting, sun-synchronous, platform at an altitude of 705 kilometers, and provides images in 36 spectral bands between 0.415 and 14.235 micrometers with spatial resolutions of 250 meters (2 bands), 500 meters (5 bands) and 1000 meters (29 bands). These bands have been carefully selected to enable advanced studies of land, ocean, and atmospheric processes. In this presentation we review the comprehensive set of remote sensing algorithms that have been developed for the remote sensing of atmospheric properties using MODIS data, placing primary emphasis on the principal atmospheric applications of (i) developing a cloud mask for distinguishing clear sky from clouds, (ii) retrieving global cloud radiative and microphysical properties, including cloud top pressure and temperature, effective emissivity, cloud optical thickness, thermodynamic phase, and effective radius, (iii) monitoring tropospheric aerosol optical thickness over the land and ocean and aerosol size distribution over the ocean, (iv) determining atmospheric profiles of moisture and temperature, and (v) estimating column water amount. The physical principles behind the determination of each of these atmospheric products will be described, together with an example of their application using MODIS observations. All products are archived into two categories: pixel-level retrievals (referred to as Level-2 products) and global gridded products at a latitude and longitude resolution of 1 degree (Level-3 products). An overview of the MODIS atmosphere algorithms and products, status, validation activities, and early level-2 and -3 results will be presented.
Towards improving the NASA standard soil moisture retrieval algorithm and product
NASA Astrophysics Data System (ADS)
Mladenova, I. E.; Jackson, T. J.; Njoku, E. G.; Bindlish, R.; Cosh, M. H.; Chan, S.
2013-12-01
Soil moisture mapping using passive microwave remote sensing techniques has proven to be one of the most effective ways of acquiring reliable global soil moisture information on a routine basis. An important step in this direction was made by the launch of the Advanced Microwave Scanning Radiometer on NASA's Earth Observing System Aqua satellite (AMSR-E). Along with the standard NASA algorithm and operational AMSR-E product, the easy access and availability of the AMSR-E data promoted the development and distribution of alternative retrieval algorithms and products. Several evaluation studies have demonstrated issues with the standard NASA AMSR-E product, such as a dampened temporal response and a limited range of the final retrievals, and noted that the available global passive-based algorithms, even though based on the same electromagnetic principles, produce different results in terms of accuracy and temporal dynamics. Our goal is to identify the theoretical causes of the reduced sensitivity of the NASA AMSR-E product and to outline ways to improve the operational NASA algorithm, if possible. Properly identifying the underlying reasons for the above-mentioned features of the NASA AMSR-E product, and for the differences between the alternative algorithms, requires a careful examination of the theoretical basis of each approach, specifically the simplifying assumptions and parameterization approaches adopted by each algorithm to reduce the dimensionality of unknowns and characterize the observing system. Statistically based error analyses, which are useful and necessary, provide information on the relative accuracy of each product but give very little information on the theoretical causes, knowledge that is essential for algorithm improvement. Thus, we are currently examining the possibility of improving the standard NASA AMSR-E global soil moisture product by conducting a thorough theoretically based review of, and inter-comparisons between, several well established global retrieval techniques. A detailed discussion focused on the theoretical basis of each approach and each algorithm's sensitivity to assumptions and parameterization approaches will be presented. USDA is an equal opportunity provider and employer.
He, Xiao-Ou; D'Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan
2015-03-12
Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. We examined how two different SIAs may influence decision making among primary-care physicians. Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a 'normal' interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently.
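The kind of divergence described here can be made concrete by writing each SIA as a small decision function. The two functions below are schematic reconstructions using conventional thresholds (FEV1/FVC 0.70; reversibility >12% and >200 ml), not the exact published algorithms used in the workshops.

```python
def algorithm_1(pre_ratio, fev1_change_pct=None, fev1_change_ml=None):
    """Sketch of an SIA with no post-bronchodilator FEV1/FVC decision node:
    a pre-BD ratio above 0.70 is read as 'normal' without further testing."""
    if pre_ratio > 0.70:
        return "normal"
    if fev1_change_pct is not None and fev1_change_pct > 12 and fev1_change_ml > 200:
        return "suspect asthma"
    return "suspect asthma or COPD"   # cannot separate: no post-BD ratio node

def algorithm_2(pre_ratio, post_ratio, fev1_change_pct, fev1_change_ml):
    """Sketch of an SIA that always prompts bronchodilator testing and uses the
    post-BD ratio to separate reversible (asthma-like) from fixed (COPD-like)
    airflow obstruction."""
    reversible = fev1_change_pct > 12 and fev1_change_ml > 200
    if post_ratio >= 0.70:
        return "suspect asthma" if reversible else "normal"
    return "suspect asthma component" if reversible else "suspect COPD"

# One discordant scenario of the kind described in the abstract:
print(algorithm_1(pre_ratio=0.72))                                   # 'normal'
print(algorithm_2(pre_ratio=0.72, post_ratio=0.75,
                  fev1_change_pct=15, fev1_change_ml=250))           # 'suspect asthma'
```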
Spettell, Claire M; Wall, Terry C; Allison, Jeroan; Calhoun, Jaimee; Kobylinski, Richard; Fargason, Rachel; Kiefe, Catarina I
2003-01-01
Background Multiple factors limit identification of patients with depression from administrative data. However, administrative data drives many quality measurement systems, including the Health Plan Employer Data and Information Set (HEDIS®). Methods We investigated two algorithms for identification of physician-recognized depression. The study sample was drawn from primary care physician member panels of a large managed care organization. All members were continuously enrolled between January 1 and December 31, 1997. Algorithm 1 required at least two criteria in any combination: (1) an outpatient diagnosis of depression or (2) a pharmacy claim for an antidepressant. Algorithm 2 included the same criteria as algorithm 1, but required a diagnosis of depression for all patients. With algorithm 1, we identified the medical records of a stratified, random subset of patients with and without depression (n=465). We also identified patients of primary care physicians with a minimum of 10 depressed members by algorithm 1 (n=32,819) and algorithm 2 (n=6,837). Results The sensitivity, specificity, and positive predictive values were: Algorithm 1: 95 percent, 65 percent, 49 percent; Algorithm 2: 52 percent, 88 percent, 60 percent. Compared to algorithm 1, profiles from algorithm 2 revealed higher rates of follow-up visits (43 percent, 55 percent) and appropriate antidepressant dosage acutely (82 percent, 90 percent) and chronically (83 percent, 91 percent) (p<0.05 for all). Conclusions Both algorithms had high false positive rates. Denominator construction (algorithm 1 versus 2) contributed significantly to variability in measured quality. Our findings raise concern about interpreting depression quality reports based upon administrative data. PMID:12968818
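The two case-finding rules are simple enough to state as code. The sketch below encodes them directly from the definitions in the abstract, with qualifying-event counts as inputs.

```python
def algorithm_1(outpatient_depression_dx, antidepressant_claims):
    """At least two qualifying events in any combination of outpatient
    depression diagnoses and antidepressant pharmacy claims."""
    return (outpatient_depression_dx + antidepressant_claims) >= 2

def algorithm_2(outpatient_depression_dx, antidepressant_claims):
    """Same threshold, but at least one depression diagnosis is required."""
    return (algorithm_1(outpatient_depression_dx, antidepressant_claims)
            and outpatient_depression_dx >= 1)

# A member with two antidepressant claims but no coded diagnosis is captured
# by algorithm 1 only, one source of its lower positive predictive value.
print(algorithm_1(0, 2), algorithm_2(0, 2))    # True False
```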
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1997-01-01
We are responsible for the delivery of two at-launch products for AM-1: fluorescence line height (FLH) and chlorophyll fluorescence efficiency (CFE). In our last report we had planned to combine the two separate algorithms into a single piece of code. However, after discussions with Bob Evans, it was decided that it was best to leave the two algorithms separate. They have been integrated into the MOCEAN processing system, and given their low computational requirements, it is easier to keep them separate. In addition, questions remain concerning the specific chlorophyll product that will be used for the CFE calculation. Presently, the CFE algorithm relies on the chlorophyll product produced by Ken Carder. This product is based on a reflectance model and is theoretically different from the chlorophyll product being provided by Dennis Clark (NOAA). These two products will be compared systematically in the coming months. If we decide to switch to the Clark product, it will be simpler to modify the CFE algorithm if it remains separate from the FLH algorithm. Our focus for the next six months is to refine the quality flags that were delivered as part of the algorithm last summer. A description of these flags was provided to Evans for the MOCEAN processing system, and a summary was included in the revised ATBD. Some of the flags depend on flags produced by the input products, so coordination will be required.
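For reference, FLH is computed as the water-leaving radiance in the fluorescence band minus a linear baseline interpolated between two flanking bands. The sketch below uses the usual MODIS band centres (667, 678, 748 nm) and toy radiances; the CFE normalization shown is only a schematic placeholder, not the delivered algorithm.

```python
def fluorescence_line_height(lw_667, lw_678, lw_748):
    """Fluorescence line height: radiance in the chlorophyll fluorescence band
    minus a linear baseline between two flanking bands.  Band centres follow
    the usual MODIS choice; the delivered algorithm's details may differ."""
    baseline = lw_667 + (lw_748 - lw_667) * (678.0 - 667.0) / (748.0 - 667.0)
    return lw_678 - baseline

# Toy water-leaving radiances (mW cm-2 um-1 sr-1), purely illustrative.
flh = fluorescence_line_height(0.30, 0.36, 0.05)
chl = 1.5                 # mg m-3, toy value from a companion chlorophyll product
cfe = flh / chl           # schematic CFE: FLH normalized by a chlorophyll term
```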
A novel algorithm for thermal image encryption.
Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen
2018-04-16
Thermal images play a vital role at nuclear plants, power stations, forensic labs, biological research facilities, and petroleum product extraction sites, so the security of thermal images is very important. Image data have some unique features, such as intensity, contrast, homogeneity, entropy and correlation among pixels, which make image encryption trickier than other kinds of encryption; conventional image encryption schemes normally find these features hard to handle. Therefore, cryptographers have paid attention to attractive properties of chaotic maps, such as randomness and sensitivity, to build new cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes built from the S8 symmetric group of permutations. First, parameters of the chaotic Chebyshev map are chosen as a secret key to confuse the plaintext image. Then, the plaintext image is encrypted by the method generated from the substitution boxes and the Chebyshev map. This process yields a ciphertext image that is thoroughly confused and diffused. The outcomes of well-known experiments, key sensitivity tests and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
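A generic chaotic-map cipher of this flavour, a key-driven pixel permutation (confusion) followed by XOR with a Chebyshev-map keystream (diffusion), can be sketched as below. This is an illustrative toy, not the published scheme: in particular the S8 symmetric-group substitution boxes are omitted, and the key values are arbitrary.

```python
import numpy as np

def chebyshev_keystream(x0, degree, length):
    """Iterate the Chebyshev map x_{n+1} = cos(degree * arccos(x_n)), x in [-1, 1],
    and quantise each state to a byte.  (x0, degree) play the role of the key."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = np.cos(degree * np.arccos(x))
        out[i] = int((x + 1.0) * 127.5) & 0xFF
    return out

def _permutation(x0, degree, length):
    # Key-driven pixel permutation (confusion step).
    return np.argsort(chebyshev_keystream(x0, degree, length), kind="stable")

def encrypt(image, x0=0.3141, degree=4):
    flat = image.ravel()
    perm = _permutation(x0 / 2, degree + 1, flat.size)
    ks = chebyshev_keystream(x0, degree, flat.size)
    return (flat[perm] ^ ks).reshape(image.shape)        # diffusion via XOR

def decrypt(cipher, x0=0.3141, degree=4):
    flat = cipher.ravel() ^ chebyshev_keystream(x0, degree, cipher.size)
    out = np.empty_like(flat)
    out[_permutation(x0 / 2, degree + 1, cipher.size)] = flat
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)        # toy 8x8 "thermal image"
assert np.array_equal(decrypt(encrypt(img)), img)
```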
Selmants, Paul C.; Moreno, Alvaro; Running, Steve W.; Giardina, Christian P.
2017-01-01
Gross primary production (GPP) is the Earth’s largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectrometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales. PMID:28886187
Kimball, Heather L; Selmants, Paul C; Moreno, Alvaro; Running, Steve W; Giardina, Christian P
2017-01-01
Gross primary production (GPP) is the Earth's largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectrometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales.
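The MOD17 logic referred to in these abstracts multiplies a biome-specific maximum light-use efficiency by low-temperature and vapour-pressure-deficit scalars and by absorbed PAR. The sketch below shows that structure with placeholder parameter values; it is not the operational MOD17 code or its biome-property lookup table.

```python
import numpy as np

def linear_ramp(x, x_min, x_max):
    """0 at or below x_min, 1 at or above x_max, linear in between."""
    return float(np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0))

def mod17_style_gpp(sw_rad_mj_m2_d, fpar, tmin_c, vpd_pa,
                    eps_max=0.0012,            # kg C per MJ PAR (placeholder)
                    tmin_ramp=(-8.0, 12.0), vpd_ramp=(3500.0, 650.0)):
    """MOD17-style daily GPP: maximum light-use efficiency attenuated by
    temperature and VPD scalars, applied to absorbed PAR (taken here as
    0.45 * incident shortwave).  All parameter values are placeholders."""
    par = 0.45 * sw_rad_mj_m2_d
    t_scalar = linear_ramp(tmin_c, *tmin_ramp)
    # VPD scalar decreases as VPD increases (ramp given as (vpd_max, vpd_min)).
    v_scalar = linear_ramp(vpd_ramp[0] - vpd_pa, 0.0, vpd_ramp[0] - vpd_ramp[1])
    return eps_max * t_scalar * v_scalar * fpar * par    # kg C m-2 d-1

print(mod17_style_gpp(sw_rad_mj_m2_d=20.0, fpar=0.8, tmin_c=18.0, vpd_pa=900.0))
```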
Administrative Data Algorithms Can Describe Ambulatory Physician Utilization
Shah, Baiju R; Hux, Janet E; Laupacis, Andreas; Zinman, Bernard; Cauch-Dudek, Karen; Booth, Gillian L
2007-01-01
Objective To validate algorithms using administrative data that characterize ambulatory physician care for patients with a chronic disease. Data Sources Seven-hundred and eighty-one people with diabetes were recruited mostly from community pharmacies to complete a written questionnaire about their physician utilization in 2002. These data were linked with administrative databases detailing health service utilization. Study Design An administrative data algorithm was defined that identified whether or not patients received specialist care, and it was tested for agreement with self-report. Other algorithms, which assigned each patient to a primary care and specialist physician, were tested for concordance with self-reported regular providers of care. Principal Findings The algorithm to identify whether participants received specialist care had 80.4 percent agreement with questionnaire responses (κ = 0.59). Compared with self-report, administrative data had a sensitivity of 68.9 percent and specificity 88.3 percent for identifying specialist care. The best administrative data algorithm to assign each participant's regular primary care and specialist providers was concordant with self-report in 82.6 and 78.2 percent of cases, respectively. Conclusions Administrative data algorithms can accurately match self-reported ambulatory physician utilization. PMID:17610448
Assessment of the NPOESS/VIIRS Nighttime Infrared Cloud Optical Properties Algorithms
NASA Astrophysics Data System (ADS)
Wong, E.; Ou, S. C.
2008-12-01
In this paper we will describe two NPOESS VIIRS IR algorithms used to retrieve microphysical properties of water and ice clouds during nighttime conditions. Both algorithms employ four VIIRS IR channels: M12 (3.7 μm), M14 (8.55 μm), M15 (10.7 μm) and M16 (12 μm). The physical basis of the two algorithms is similar: the Cloud Top Temperature (CTT) is derived from M14 and M16 for ice clouds, while the Cloud Optical Thickness (COT) and Cloud Effective Particle Size (CEPS) are derived from M12 and M15. The two algorithms differ in the radiative transfer parameterization equations used for ice and water clouds. Both the VIIRS nighttime IR algorithms and the CERES split-window method employ the 3.7 μm and 10.7 μm bands for cloud optical property retrievals, apparently based on similar physical principles but with different implementations. It is reasonable to expect that the VIIRS and CERES IR algorithms produce comparable performance and have similar limitations. To demonstrate the VIIRS nighttime IR algorithm performance, we will select a number of test cases using NASA MODIS L1b radiance products as proxy input data for VIIRS. The VIIRS-retrieved COT and CEPS will then be compared to cloud products available from the MODIS, NASA CALIPSO, CloudSat and CERES sensors. For the MODIS product, the nighttime cloud emissivity will serve as an indirect comparison to VIIRS COT. For the CALIPSO and CloudSat products, the layered COT will be used for direct comparison. Finally, the CERES products will provide a direct comparison with COT as well as CEPS. This study can only provide a qualitative assessment of the VIIRS IR algorithms due to the large uncertainties in these cloud products.
Primary decomposition of zero-dimensional ideals over finite fields
NASA Astrophysics Data System (ADS)
Gao, Shuhong; Wan, Daqing; Wang, Mingsheng
2009-03-01
A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how the Groebner basis structure can be used to get a partial primary decomposition without any root finding.
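The central object above, the invariant subspace of the Frobenius map, is easiest to see in the univariate (Berlekamp) case the abstract cites. The sketch below is a minimal univariate illustration rather than the paper's multivariate algorithm: it builds the matrix of c(x) -> c(x)^p on F_p[x]/(f) and checks that the kernel dimension of (Q - I) equals the number of distinct irreducible factors of f.

```python
import numpy as np

def polymod(a, f, p):
    """Remainder of polynomial a modulo monic f, coefficients mod p (lowest degree first)."""
    a = [c % p for c in a]
    df = len(f) - 1
    while len(a) - 1 >= df and any(a):
        if a[-1] == 0:
            a.pop(); continue
        shift = len(a) - 1 - df
        factor = a[-1] % p
        for i, c in enumerate(f):
            a[i + shift] = (a[i + shift] - factor * c) % p
        a.pop()
    return (a + [0] * df)[:df]

def frobenius_matrix(f, p):
    """Matrix of the Frobenius map on F_p[x]/(f); column i holds x^(p*i) mod f."""
    n = len(f) - 1
    cols = [polymod([0] * (p * i) + [1], f, p) for i in range(n)]
    return np.array(cols, dtype=int).T % p

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p) by Gaussian elimination."""
    M = M.copy() % p
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = (M[rank] * pow(int(M[rank, c]), p - 2, p)) % p
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] = (M[r] - M[r, c] * M[rank]) % p
        rank += 1
    return rank

# f(x) = x^4 + 1 over F_3 factors into two irreducible quadratics,
# so the kernel of (Q - I) should be 2-dimensional.
p, f = 3, [1, 0, 0, 0, 1]                  # coefficients of x^4 + 1, lowest degree first
n = len(f) - 1
Q = frobenius_matrix(f, p)
kernel_dim = n - rank_mod_p((Q - np.eye(n, dtype=int)) % p, p)
print("number of irreducible factors:", kernel_dim)
```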
NASA Astrophysics Data System (ADS)
Fang, Li
The Geostationary Operational Environmental Satellites (GOES) have been continuously monitoring the earth surface since 1970, providing valuable and intensive data from a very broad range of wavelengths, day and night. The National Oceanic and Atmospheric Administration's (NOAA's) National Environmental Satellite, Data, and Information Service (NESDIS) is currently operating GOES-15 and GOES-13. The design of the GOES series is now heading to the 4th generation. GOES-R, as a representative of the new generation of the GOES series, is scheduled to be launched in 2015 with higher spatial and temporal resolution images and full-time soundings. These frequent observations provided by the GOES Imager make them attractive for deriving information on the diurnal land surface temperature (LST) cycle and diurnal temperature range (DTR). These parameters are of great value for research on the Earth's diurnal variability and climate change. Accurate derivation of satellite-based LSTs from thermal infrared data has long been an interesting and challenging research area. To better support the research on climate change, the generation of consistent GOES LST products for both GOES-East and GOES-West from operational datasets as well as the historical archive is in great demand. The derivation of GOES LST products and the evaluation of proposed retrieval methods are two major objectives of this study. Literature relevant to satellite-based LST retrieval techniques was reviewed. Specifically, the evolution of two LST algorithm families and LST retrieval methods for geostationary satellites were summarized in this dissertation. Literature relevant to the evaluation of satellite-based LSTs was also reviewed. All the existing methods are a valuable reference to develop the GOES LST product. The primary objective of this dissertation is the development of models for deriving consistent GOES LSTs with high spatial and high temporal coverage. Proper LST retrieval algorithms were studied according to the characteristics of the imager onboard the GOES series. For the GOES 8-11 and GOES R series with split window (SW) channels, a new temperature and emissivity separation (TES) approach was proposed for deriving LST and LSE simultaneously by using multiple-temporal satellite observations. Two split-window regression formulas were selected for this approach, and two satellite observations over the same geo-location within a certain time interval were utilized. This method is particularly applicable to geostationary satellite missions from which qualified multiple-temporal observations are available. For the GOES M(12)-Q series without SW channels, the dual-window LST algorithm was adopted to derive LST. Instead of using the conventional training method to generate coefficients for the LST regression algorithms, a machine training technique was introduced to automatically select the criteria and the boundary of the sub-ranges for generating algorithm coefficients under different conditions. A software package was developed to produce a brand new GOES LST product from both operational GOES measurements and the historical archive. The system layers of the software and related system input and output were illustrated in this work. Comprehensive evaluation of GOES LST products was conducted by validating products against multiple ground-based LST observations, LST products from fine-resolution satellites (e.g. MODIS) and GSIP LST products. The key issues relevant to the cloud diffraction effect were studied as well.
GOES measurements as well as ancillary data, including satellite and solar geometry, water vapor, cloud mask, land emissivity, etc., were collected to generate GOES LST products. In addition, multiple in situ temperature measurements were collected to test the performance of the proposed GOES LST retrieval algorithms. The ground-based dataset included direct surface temperature measurements from the Atmospheric Radiation Measurement program (ARM), and indirect measurements (surface long-wave radiation observations) from the SURFace RADiation Budget (SURFRAD) Network. A simulated dataset was created to analyze the sensitivity of the proposed retrieval algorithms. In addition, the MODIS LST and GSIP LST products were adopted to cross-evaluate the accuracy of the GOES LST products. Evaluation results demonstrate that the proposed GOES LST system is capable of deriving consistent land surface temperatures with good retrieval precision. Consistent GOES LST products with high spatial/temporal coverage and reliable accuracy will better support the detection and observation of meteorological phenomena over land surfaces.
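The dissertation abstract does not give its regression formulas or coefficients, so the sketch below only illustrates the generic split-window idea it builds on: LST regressed on T11, the brightness-temperature difference T11 - T12, and an emissivity term, with coefficients obtained by least squares over a training set. The synthetic training data here stand in for radiative-transfer simulations and carry no physical calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set standing in for radiative-transfer simulations:
# true LST, split-window brightness temperatures T11 and T12, and emissivity.
lst_true = rng.uniform(260.0, 320.0, 500)
emiss = rng.uniform(0.95, 0.99, 500)
wv = rng.uniform(0.0, 4.0, 500)                           # crude water-vapour proxy
t11 = lst_true - 0.6 * wv - 30.0 * (1 - emiss) + rng.normal(0, 0.2, 500)
t12 = t11 - 0.4 * wv + rng.normal(0, 0.2, 500)

# Generic split-window form: LST = a0 + a1*T11 + a2*(T11 - T12) + a3*(1 - eps)
X = np.column_stack([np.ones_like(t11), t11, t11 - t12, 1 - emiss])
coeffs, *_ = np.linalg.lstsq(X, lst_true, rcond=None)

def split_window_lst(t11, t12, emissivity, a=coeffs):
    """Regression-based split-window LST estimate (illustrative coefficients)."""
    return a[0] + a[1] * t11 + a[2] * (t11 - t12) + a[3] * (1 - emissivity)

print(coeffs)
print(split_window_lst(295.0, 293.8, 0.97))
```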
Progress on a generalized coordinates tensor product finite element 3DPNS algorithm for subsonic
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1983-01-01
A generalized coordinates form of the penalty finite element algorithm for the 3-dimensional parabolic Navier-Stokes equations for turbulent subsonic flows was derived. This algorithm formulation requires only three distinct hypermatrices and is applicable using any boundary fitted coordinate transformation procedure. The tensor matrix product approximation to the Jacobian of the Newton linear algebra matrix statement was also derived. The Newton algorithm was restructured to replace large sparse matrix solution procedures with grid sweeping using alpha-block tridiagonal matrices, where alpha equals the number of dependent variables. Numerical experiments were conducted and the resultant data give guidance on potentially preferred tensor product constructions for the penalty finite element 3DPNS algorithm.
Freeze-drying process design by manometric temperature measurement: design of a smart freeze-dryer.
Tang, Xiaolin Charlie; Nail, Steven L; Pikal, Michael J
2005-04-01
To develop a procedure based on manometric temperature measurement (MTM) and an expert system for good practices in freeze drying that will allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment. Freeze drying was performed with a FTS Dura-Stop/Dura-Top freeze dryer with the manometric temperature measurement software installed. Five percent solutions of glycine, sucrose, or mannitol with 2 ml to 4 ml fill in 5 ml vials were used, with all vials loaded on one shelf. Details of freezing, optimization of chamber pressure, target product temperature, and some aspects of secondary drying are determined by the expert system algorithms. MTM measurements were used to select the optimum shelf temperature, to determine drying end points, and to evaluate residual moisture content in real-time. MTM measurements were made at 1 hour or half-hour intervals during primary drying and secondary drying, with a data collection frequency of 4 points per second. The improved MTM equations were fit to pressure-time data generated by the MTM procedure using Microcal Origin software to obtain product temperature and dry layer resistance. Using heat and mass transfer theory, the MTM results were used to evaluate mass and heat transfer rates and to estimate the shelf temperature required to maintain the target product temperature. MTM product dry layer resistance is accurate until about two-thirds of total primary drying time is over, and the MTM product temperature is normally accurate almost to the end of primary drying provided that effective thermal shielding is used in the freeze-drying process. The primary drying times can be accurately estimated from mass transfer rates calculated very early in the run, and we find the target product temperature can be achieved and maintained with only a few adjustments of shelf temperature. The freeze-dryer overload conditions can be estimated by calculation of heat/mass flow at the target product temperature. It was found that the MTM results serve as an excellent indicator of the end point of primary drying. Further, we find that the rate of water desorption during secondary drying may be accurately measured by a variation of the basic MTM procedure. Thus, both the end point of secondary drying and real-time residual moisture may be obtained during secondary drying. Manometric temperature measurement and the expert system for good practices in freeze drying does allow development of an optimized freeze-drying process during a single laboratory freeze-drying experiment.
Work productivity improvement after acid suppression in patients with uninvestigated dyspepsia.
Bytzer, Peter; Langkilde, Lars K; Christensen, Erik; Meineche-Schmidt, Villy
2012-07-01
Lost productivity accounts for a significant part of the costs caused by gastrointestinal symptoms. We aimed to describe self-reported productivity in patients presenting with dyspepsia. Data were sourced from a randomized, double-blinded study of two weeks of esomeprazole 40 mg or placebo in 805 primary-care patients with uninvestigated dyspepsia. Work productivity was tested using the Work Productivity and Activity Impairment questionnaire. Treatment effect on work productivity loss was tested according to the likelihood of treatment response. A total of 401/805 employed patients were included in the analysis. The average work productivity loss in the past seven days was 10.5 working hours/week. The productivity loss grew with increasing severity of symptoms at baseline. Following two weeks of treatment, the mean improvement in work productivity was significantly higher for both absenteeism (1 hour versus 0.1 hour, p < 0.05) and presenteeism (5.3 hours versus 4.3 hours, p < 0.05) in patients treated with esomeprazole versus placebo. The most substantial improvement was seen in patients who, based on baseline symptoms, were assessed to be likely treatment responders. Dyspepsia symptoms represent a significant economic burden in terms of lost productivity. The RESPONSE algorithm is successful in determining which patients will benefit from acid suppression in terms of enhanced productivity.
Inverting Image Data For Optical Testing And Alignment
NASA Technical Reports Server (NTRS)
Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.
1993-01-01
Data from images produced by slightly incorrectly figured concave primary mirror in telescope processed into estimate of spherical aberration of mirror, by use of algorithm finding nonlinear least-squares best fit between actual images and synthetic images produced by multiparameter mathematical model of telescope optical system. Estimated spherical aberration, in turn, converted into estimate of deviation of reflector surface from nominal precise shape. Algorithm devised as part of effort to determine error in surface figure of primary mirror of Hubble Space Telescope, so that corrective lens could be designed. Modified versions of algorithm also used to find optical errors in other components of telescope or of other optical systems, for purposes of testing, alignment, and/or correction.
NASA Technical Reports Server (NTRS)
Balla, R. Jeffrey; Miller, Corey A.
2008-01-01
This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an AutoRegressive method. Compared to previous results using Prony's method, single shot waveform frequencies are reduced by approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low density gases.
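As a rough illustration of the autoregressive idea favoured above (not the authors' implementation), the sketch below fits an AR(2) model to a noisy damped sinusoid by least squares and reads the frequency off the complex pole; the sample rate, frequency and decay constant are arbitrary assumptions.

```python
import numpy as np

fs = 50e6                                   # sample rate (Hz), assumed
f_true, tau = 2.0e6, 4e-6                   # damped-sinusoid frequency and decay time
t = np.arange(512) / fs
rng = np.random.default_rng(1)
x = np.exp(-t / tau) * np.cos(2 * np.pi * f_true * t) + 0.01 * rng.normal(size=t.size)

# Fit AR(2): x[n] = a1*x[n-1] + a2*x[n-2]  (least squares over the whole record)
A = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Frequency from the complex pole of 1 - a1*z^-1 - a2*z^-2
poles = np.roots([1.0, -a1, -a2])
f_est = np.abs(np.angle(poles[0])) * fs / (2 * np.pi)
print(f"estimated frequency: {f_est/1e6:.4f} MHz (true {f_true/1e6:.1f} MHz)")
```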
Statistical simplex approach to primary and secondary color correction in thick lens assemblies
NASA Astrophysics Data System (ADS)
Ament, Shelby D. V.; Pfisterer, Richard
2017-11-01
A glass selection optimization algorithm is developed for primary and secondary color correction in thick lens systems. The approach is based on the downhill simplex method, and requires manipulation of the surface color equations to obtain a single glass-dependent parameter for each lens element. Linear correlation is used to relate this parameter to all other glass-dependent variables. The algorithm provides a statistical distribution of Abbe numbers for each element in the system. Examples of several lenses, from 2-element to 6-element systems, are performed to verify this approach. The optimization algorithm proposed is capable of finding glass solutions with high color correction without requiring an exhaustive search of the glass catalog.
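A minimal sketch of the downhill simplex idea applied to a two-element toy problem is given below, using scipy's Nelder-Mead. The merit function, the linear Abbe-number-to-partial-dispersion correlation, and the element powers are all invented for illustration; they are not the surface color equations or the catalog fit used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Fixed element powers of a thin two-element system (illustrative values).
phi = np.array([2.0, -1.0])

def partial_dispersion(v):
    """Hypothetical linear correlation between Abbe number and relative partial
    dispersion, standing in for a fit over a real glass catalog."""
    return 0.70 - 0.0018 * v

def color_merit(abbe):
    """Toy merit: primary chromatic residual plus secondary-spectrum residual,
    with a soft penalty keeping Abbe numbers inside a plausible range (20-95)."""
    v = np.asarray(abbe)
    primary = np.sum(phi / v)                             # primary axial colour term
    secondary = np.sum(phi * partial_dispersion(v) / v)   # secondary spectrum term
    penalty = np.sum(np.clip(20.0 - v, 0, None) ** 2 + np.clip(v - 95.0, 0, None) ** 2)
    # secondary term up-weighted so both residuals matter in this toy merit
    return primary ** 2 + 1e3 * secondary ** 2 + penalty

# Downhill simplex (Nelder-Mead) search over the two Abbe numbers.
result = minimize(color_merit, x0=[60.0, 35.0], method="Nelder-Mead")
print("Abbe numbers:", result.x, "merit:", result.fun)
```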
Algorithm for space-time analysis of data on geomagnetic field
NASA Technical Reports Server (NTRS)
Kulanin, N. V.; Golokov, V. P. (Editor); Tyupkin, S. (Editor)
1984-01-01
An algorithm for the space-time analysis of geomagnetic field data is described. The primary constraints on its specific computer realization stem exclusively from the limited capabilities of the computer involved. It is implemented as a program for the BESM-6 computer.
Frontal dynamics boost primary production in the summer stratified Mediterranean sea
NASA Astrophysics Data System (ADS)
Olita, Antonio; Capet, Arthur; Claret, Mariona; Mahadevan, Amala; Poulain, Pierre Marie; Ribotti, Alberto; Ruiz, Simón; Tintoré, Joaquín; Tovar-Sánchez, Antonio; Pascual, Ananda
2017-06-01
Bio-physical glider measurements from a unique process-oriented experiment in the Eastern Alboran Sea (AlborEx) allowed us to observe the distribution of the deep chlorophyll maximum (DCM) across an intense density front, with a resolution (~ 400 m) suitable for investigating sub-mesoscale dynamics. This front, at the interface between Atlantic and Mediterranean waters, had a sharp density gradient (Δρ ~ 1 kg/m³ in ~ 10 km) and showed imprints of (sub-)mesoscale phenomena on tracer distributions. Specifically, the chlorophyll-a concentration within the DCM showed a disrupted pattern along isopycnal surfaces, with patches bearing a relationship to the stratification (buoyancy frequency) at depths between 30 and 60 m. In order to estimate the primary production (PP) rate within the chlorophyll patches observed at the sub-surface, we applied the Morel and André (J Geophys Res 96:685-698, 1991) bio-optical model using the photosynthetic active radiation (PAR) from Argo profiles collected simultaneously with glider data. The highest production was located concurrently with domed isopycnals on the fresh side of the front, suggesting that (sub-)mesoscale upwelling is carrying phytoplankton patches from less to more illuminated levels, with a contemporaneous delivery of nutrients. Integrated estimations of PP (1.3 g C m⁻² d⁻¹) along the glider path are two to four times larger than the estimations obtained from satellite-based algorithms, i.e., derived from the 8-day composite fields extracted over the glider trip path. Despite the differences in spatial and temporal sampling between instruments, the differences in PP estimations are mainly due to the inability of the satellite to measure DCM patches responsible for the high production. The deepest (depth > 60 m) chlorophyll patches are almost unproductive and probably transported passively (subducted) from upper productive layers. Finally, the relationship between primary production and oxygen is also investigated. The logarithm of the primary production in the DCM interior (chlorophyll (Chl) > 0.5 mg/m³) shows a linear negative relationship with the apparent oxygen utilization, confirming that high chlorophyll patches are productive. The slope of this relationship is different for Atlantic, mixed interface waters and Mediterranean waters, suggesting the presence of differences in planktonic communities (whether physiological, population, or community level should be the object of further investigation) on the different sides of the front. In addition, the ratio of optical backscatter to Chl is high within the intermediate (mixed) waters, which is suggestive of large phytoplankton cells, and lower within the core of the Atlantic and Mediterranean waters. These observations highlight the relevance of fronts in triggering primary production at the DCM level and shaping the characteristic patchiness of the pelagic domain. This gains further relevance considering the inadequacy of optical satellite sensors to observe DCM concentrations at such fine scales.
Side-locked headaches: an algorithm-based approach.
Prakash, Sanjay; Rathore, Chaturbhuj
2016-12-01
The differential diagnosis of strictly unilateral hemicranial pain includes a large number of primary and secondary headaches and cranial neuropathies. It may arise from both intracranial and extracranial structures such as the cranium, neck, vessels, eyes, ears, nose, sinuses, teeth, mouth, and other facial or cervical structures. Available data suggest that about two-thirds of patients with side-locked headache visiting neurology or headache clinics have primary headaches. The other one-third will have either secondary headaches or neuralgias. Many of these hemicranial pain syndromes have overlapping presentations. Primary headache disorders may spread to involve the face and/or neck. Even various intracranial and extracranial pathologies may have similar overlapping presentations. Patients may present to a variety of clinicians, including headache experts, dentists, otolaryngologists, ophthalmologists, psychiatrists, and physiotherapists. Unfortunately, there is no uniform approach for such patients and diagnostic ambiguity is frequently encountered in clinical practice. Herein, we review the differential diagnoses of side-locked headaches and provide an algorithm-based approach for patients presenting with side-locked headaches. Side-locked headache is itself a red flag. So, the first priority should be to rule out secondary headaches. A comprehensive history and thorough examinations will help one to formulate an algorithm to rule out or confirm secondary side-locked headaches. The diagnoses of most secondary side-locked headaches are largely investigation dependent. Therefore, each suspected secondary headache should be subjected to appropriate investigations or referral. The diagnostic approach of primary side-locked headache starts once one rules out all the possible secondary headaches. We have discussed an algorithmic approach for both secondary and primary side-locked headaches.
Satellite Monitoring of Long-Range Transport of Asian Dust Storms from Sources to Sinks
NASA Astrophysics Data System (ADS)
Hsu, N.; Tsay, S.; Jeong, M.; King, M.; Holben, B.
2007-05-01
Among the many components that contribute to air pollution, airborne mineral dust plays an important role due to its biogeochemical impact on the ecosystem and its radiative-forcing effect on the climate system. In East Asia, dust storms frequently accompany the cold and dry air masses that occur as part of spring-time cold front systems. China's capital, Beijing, and other large cities are on the primary pathway of these dust storm plumes, and their passage over such population centers causes flight delays, pushes grit through windows and doors, and forces people indoors. Furthermore, during the spring these anthropogenic and natural air pollutants, once generated over the source regions, can be transported out of the boundary layer into the free troposphere and can travel thousands of kilometers across the Pacific into the United States and beyond. In this paper, we will demonstrate the capability of a new satellite algorithm to retrieve aerosol optical thickness and single scattering albedo over bright-reflecting surfaces such as urban areas and deserts. Such retrievals have been difficult to perform using previously available algorithms that use wavelengths from the mid-visible to the near IR because they have trouble separating the aerosol signal from the contribution due to the bright surface reflectance. The new algorithm, called Deep Blue, utilizes blue-wavelength measurements from instruments such as SeaWiFS and MODIS to infer the properties of aerosols, since the surface reflectance over land in the blue part of the spectrum is much lower than for longer wavelength channels. The Deep Blue algorithm has recently been integrated into the MODIS processing stream and began to provide aerosol products over land as part of the operational MYD04 products. In this talk, we will show comparisons of the MODIS Deep Blue products with data from AERONET sunphotometers on a global basis. The results indicate reasonable agreement between the two. These new satellite products will allow scientists to determine quantitatively the aerosol properties near sources and their evolution along transport pathways using high spatial resolution measurements from SeaWiFS and MODIS-like instruments. We will also utilize the multiyear satellite measurements from MODIS and SeaWiFS to investigate the interannual variability of source strength, pathway, and radiative forcing associated with these dust outbreaks in East Asia.
He, Xiao-Ou; D’Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan
2015-01-01
Background: Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. Aims: We examined how two different SIAs may influence decision making among primary-care physicians. Methods: Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. Results: We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a ‘normal’ interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. Conclusions: This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently. PMID:25763716
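The decision nodes the two SIAs disagree on can be written down compactly. The sketch below uses only the thresholds quoted in the abstract (an FEV1/FVC cut-off of 0.70 and a bronchodilator response of >12% and >200 mL) and is an illustrative composite, not a reproduction of either algorithm.

```python
def interpret_spirometry(fev1_pre, fvc_pre, fev1_post=None, fvc_post=None):
    """
    Illustrative interpretation using the thresholds quoted in the study:
    FEV1/FVC ratio cut-off of 0.70 and a bronchodilator (BD) response defined
    as an FEV1 increase of >12% and >200 mL. Volumes are in litres.
    Not a reproduction of either SIA from the study.
    """
    ratio_pre = fev1_pre / fvc_pre
    if fev1_post is None:                       # no bronchodilator challenge performed
        return "normal" if ratio_pre > 0.70 else "obstruction (BD challenge needed)"

    reversible = (fev1_post - fev1_pre > 0.200 and
                  (fev1_post - fev1_pre) / fev1_pre > 0.12)
    ratio_post = fev1_post / (fvc_post if fvc_post is not None else fvc_pre)

    if ratio_post < 0.70:
        # post-BD obstruction persists; a large FEV1 response may still suggest asthma
        return "suspect asthma (reversible) with possible COPD" if reversible else "suspect COPD"
    return "suspect asthma" if reversible else "normal"

# Scenario resembling the abstract's second case: obstructed post-BD ratio
# but a large FEV1 response (0.30 L, ~17%).
print(interpret_spirometry(fev1_pre=1.80, fvc_pre=3.00, fev1_post=2.10, fvc_post=3.10))
```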
Joerger, Markus; Ferreri, Andrés J M; Krähenbühl, Stephan; Schellens, Jan H M; Cerny, Thomas; Zucca, Emanuele; Huitema, Alwin D R
2012-02-01
There is no consensus regarding optimal dosing of high dose methotrexate (HDMTX) in patients with primary CNS lymphoma. Our aim was to develop a convenient dosing algorithm to target AUC(MTX) in the range between 1000 and 1100 µmol l(-1) h. A population covariate model from a pooled dataset of 131 patients receiving HDMTX was used to simulate concentration-time curves of 10,000 patients and test the efficacy of a dosing algorithm based on 24 h MTX plasma concentrations to target the prespecified AUC(MTX) . These data simulations included interindividual, interoccasion and residual unidentified variability. Patients received a total of four simulated cycles of HDMTX and adjusted MTX dosages were given for cycles two to four. The dosing algorithm proposes MTX dose adaptations ranging from +75% in patients with MTX C(24) < 0.5 µmol l(-1) up to -35% in patients with MTX C(24) > 12 µmol l(-1). The proposed dosing algorithm resulted in a marked improvement of the proportion of patients within the AUC(MTX) target between 1000 and 1100 µmol l(-1) h (11% with standard MTX dose, 35% with the adjusted dose) and a marked reduction of the interindividual variability of MTX exposure. A simple and practical dosing algorithm for HDMTX has been developed based on MTX 24 h plasma concentrations, and its potential efficacy in improving the proportion of patients within a prespecified target AUC(MTX) and reducing the interindividual variability of MTX exposure has been shown by data simulations. The clinical benefit of this dosing algorithm should be assessed in patients with primary central nervous system lymphoma (PCNSL). © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
Skinnider, Michael A; Dejong, Chris A; Franczak, Brian C; McNicholas, Paul D; Magarvey, Nathan A
2017-08-16
Natural products represent a prominent source of pharmaceutically and industrially important agents. Calculating the chemical similarity of two molecules is a central task in cheminformatics, with applications at multiple stages of the drug discovery pipeline. Quantifying the similarity of natural products is a particularly important problem, as the biological activities of these molecules have been extensively optimized by natural selection. The large and structurally complex scaffolds of natural products distinguish their physical and chemical properties from those of synthetic compounds. However, no analysis of the performance of existing methods for molecular similarity calculation specific to natural products has been reported to date. Here, we present LEMONS, an algorithm for the enumeration of hypothetical modular natural product structures. We leverage this algorithm to conduct a comparative analysis of molecular similarity methods within the unique chemical space occupied by modular natural products using controlled synthetic data, and comprehensively investigate the impact of diverse biosynthetic parameters on similarity search. We additionally investigate a recently described algorithm for natural product retrobiosynthesis and alignment, and find that when rule-based retrobiosynthesis can be applied, this approach outperforms conventional two-dimensional fingerprints, suggesting it may represent a valuable approach for the targeted exploration of natural product chemical space and microbial genome mining. Our open-source algorithm is an extensible method of enumerating hypothetical natural product structures with diverse potential applications in bioinformatics.
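The conventional two-dimensional fingerprint baseline mentioned above is typically scored with the Tanimoto coefficient; a minimal version over toy bit sets (not real fingerprint output such as ECFP) is sketched below.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two binary fingerprints
    represented as sets of 'on' bit positions."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy bit sets standing in for hashed substructure fingerprints of two molecules
fp_mol1 = {3, 17, 42, 128, 305, 411}
fp_mol2 = {3, 17, 99, 128, 305, 512, 777}
print(round(tanimoto(fp_mol1, fp_mol2), 3))   # 4 shared bits / 9 total = 0.444
```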
Zomer, Ella; Osborn, David; Nazareth, Irwin; Blackburn, Ruth; Burton, Alexandra; Hardoon, Sarah; Holt, Richard Ian Gregory; King, Michael; Marston, Louise; Morris, Stephen; Omar, Rumana; Petersen, Irene; Walters, Kate; Hunter, Rachael Maree
2017-09-05
To determine the cost-effectiveness of two bespoke severe mental illness (SMI)-specific risk algorithms compared with standard risk algorithms for primary cardiovascular disease (CVD) prevention in those with SMI. Primary care setting in the UK. The analysis was from the National Health Service perspective. 1000 individuals with SMI from The Health Improvement Network Database, aged 30-74 years and without existing CVD, populated the model. Four cardiovascular risk algorithms were assessed: (1) general population lipid, (2) general population body mass index (BMI), (3) SMI-specific lipid and (4) SMI-specific BMI, compared against no algorithm. At baseline, each cardiovascular risk algorithm was applied and those considered high risk ( > 10%) were assumed to be prescribed statin therapy while others received usual care. Quality-adjusted life years (QALYs) and costs were accrued for each algorithm including no algorithm, and cost-effectiveness was calculated using the net monetary benefit (NMB) approach. Deterministic and probabilistic sensitivity analyses were performed to test assumptions made and uncertainty around parameter estimates. The SMI-specific BMI algorithm had the highest NMB resulting in 15 additional QALYs and a cost saving of approximately £53 000 per 1000 patients with SMI over 10 years, followed by the general population lipid algorithm (13 additional QALYs and a cost saving of £46 000). The general population lipid and SMI-specific BMI algorithms performed equally well. The ease and acceptability of use of an SMI-specific BMI algorithm (blood tests not required) makes it an attractive algorithm to implement in clinical settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
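The net monetary benefit framework used above reduces to NMB = λ × ΔQALY - Δcost. The worked example below plugs in the reported 15 QALYs gained and £53,000 saved per 1000 patients; the £20,000-per-QALY willingness-to-pay threshold is an assumed reference value, not stated in the abstract.

```python
def net_monetary_benefit(delta_qalys, delta_cost, wtp_per_qaly):
    """Incremental net monetary benefit: NMB = lambda * dQALY - dCost."""
    return wtp_per_qaly * delta_qalys - delta_cost

# SMI-specific BMI algorithm vs no algorithm, per 1000 patients over 10 years:
# 15 QALYs gained and roughly 53,000 GBP saved (so delta_cost is negative).
# The 20,000 GBP/QALY threshold is an assumed value, not from the abstract.
print(net_monetary_benefit(delta_qalys=15, delta_cost=-53_000, wtp_per_qaly=20_000))
# -> 353000, i.e. a positive NMB of about 353,000 GBP per 1000 patients
```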
NASA Astrophysics Data System (ADS)
Alawadi, Fahad
2010-10-01
Quantifying ocean colour properties has evolved over the past two decades from being able to merely detect their biological activity to the ability to estimate chlorophyll concentration using optical satellite sensors like MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for the tracing of oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HABs involved and the environmental conditions, if their concentration rises above a critical threshold, it can impact the flora and fauna of the aquatic habitat through the introduction of the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and the green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but is incapable of detecting certain macro-algal species that float near to or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil spill look-alikes in satellite imagery contributes towards the reduction of false-positive incidents arising from oil spill monitoring operations. Such algal formations that occur in relatively high concentrations may experience, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the highest reflectance slope between the maximum absorption in the red due to the surrounding ocean water and the maximum reflectance in the infra-red due to the photosynthetic pigments present in the surface algae. A new algorithm termed the surface algal bloom index (SABI) has been proposed to delineate the spatial distributions of floating micro-algal species, such as cyanobacteria, or exposed inter-tidal vegetation such as seagrass. This algorithm was specifically modelled to adapt to the marine habitat through its inclusion of ocean-colour sensitive bands in a four-band ratio-based relationship. The algorithm has demonstrated high stability against various environmental conditions like aerosol and sun glint.
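The abstract describes SABI only as a four-band ratio of ocean-colour-sensitive bands; one published form, assumed here, is (NIR - Red)/(Blue + Green). The toy reflectances below merely illustrate the sign behaviour for a floating bloom versus clear water.

```python
def sabi(nir, red, blue, green):
    """Surface Algal Bloom Index as a four-band ratio; the exact published
    form is assumed here to be (NIR - Red) / (Blue + Green)."""
    return (nir - red) / (blue + green)

# Toy reflectances: a dense floating bloom shows a 'red edge' (high NIR, low red),
# whereas clear water absorbs strongly in the NIR.
print(sabi(nir=0.12, red=0.03, blue=0.04, green=0.05))      # bloom-like, strongly positive
print(sabi(nir=0.005, red=0.010, blue=0.040, green=0.030))  # clear water, near zero or negative
```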
Long-Term Evaluation of the AMSR-E Soil Moisture Product Over the Walnut Gulch Watershed, AZ
NASA Astrophysics Data System (ADS)
Bolten, J. D.; Jackson, T. J.; Lakshmi, V.; Cosh, M. H.; Drusch, M.
2005-12-01
The Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) was launched aboard NASA's Aqua satellite on May 4th, 2002. Quantitative estimates of soil moisture using the AMSR-E provided data have required routine radiometric data calibration and validation using comparisons of satellite observations, extended targets and field campaigns. The currently applied NASA EOS Aqua AMSR-E soil moisture algorithm is based on a change detection approach using polarization ratios (PR) of the calibrated AMSR-E channel brightness temperatures. To date, the accuracy of the soil moisture algorithm has been investigated on short time scales during field campaigns such as the Soil Moisture Experiments in 2004 (SMEX04). Results have indicated self-consistency and calibration stability of the observed brightness temperatures; however the performance of the moisture retrieval algorithm has been poor. The primary objective of this study is to evaluate the quality of the current version of the AMSR-E soil moisture product for a three year period over the Walnut Gulch Experimental Watershed (150 km²) near Tombstone, AZ; the northern study area of SMEX04. This watershed is equipped with hourly and daily recording of precipitation, soil moisture and temperature via a network of raingages and a USDA-NRCS Soil Climate Analysis Network (SCAN) site. Surface wetting and drying are easily distinguished in this area due to the moderately-vegetated terrain and seasonally intense precipitation events. Validation of AMSR-E derived soil moisture is performed from June 2002 to June 2005 using watershed averages of precipitation, and soil moisture and temperature data from the SCAN site supported by a surface soil moisture network. Long-term assessment of soil moisture algorithm performance is investigated by comparing temporal variations of moisture estimates with seasonal changes and precipitation events. Further comparisons are made with a standard soil dataset from the European Centre for Medium-Range Weather Forecasts. The results of this research will contribute to a better characterization of the low biases and discrepancies currently observed in the AMSR-E soil moisture product.
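The change-detection approach mentioned above is driven by the polarization ratio of the calibrated brightness temperatures, PR = (TbV - TbH)/(TbV + TbH); the brightness temperatures in the sketch below are illustrative, not AMSR-E data.

```python
def polarization_ratio(tb_v, tb_h):
    """Microwave polarization ratio used in change-detection soil moisture retrieval."""
    return (tb_v - tb_h) / (tb_v + tb_h)

# Wetter soil depresses the H-pol brightness temperature more than V-pol,
# so PR rises after rainfall (brightness temperatures in kelvin, illustrative).
print(polarization_ratio(tb_v=270.0, tb_h=255.0))   # dry surface, ~0.029
print(polarization_ratio(tb_v=262.0, tb_h=235.0))   # wet surface, ~0.054
```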
NASA Astrophysics Data System (ADS)
Sempau, Josep; Wilderman, Scott J.; Bielajew, Alex F.
2000-08-01
A new Monte Carlo (MC) algorithm, the `dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an approximate solution of the photon/electron Boltzmann transport equation. For primary electron beams, DPM is capable of computing 3D dose distributions (in 1 mm³ voxels) which agree to within 1% in dose maximum with widely used and exhaustively benchmarked general-purpose public-domain MC codes in only a fraction of the CPU time. A representative problem, the simulation of 1 million 10 MeV electrons impinging upon a water phantom of 128³ voxels of 1 mm on a side, can be performed by DPM in roughly 3 min on a modern desktop workstation. DPM achieves this performance by employing transport mechanics and electron multiple scattering distribution functions which have been derived to permit long transport steps (of the order of 5 mm) which can cross heterogeneity boundaries. The underlying algorithm is a `mixed' class simulation scheme, with differential cross sections for hard inelastic collisions and bremsstrahlung events described in an approximate manner to simplify their sampling. The continuous energy loss approximation is employed for energy losses below some predefined thresholds, and photon transport (including Compton, photoelectric absorption and pair production) is simulated in an analogue manner. The δ-scattering method (Woodcock tracking) is adopted to minimize the computational costs of transporting photons across voxels.
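The δ-scattering (Woodcock tracking) step cited above can be illustrated in one dimension: free paths are sampled against the maximum attenuation coefficient of the voxelized medium, and real interactions are accepted with probability μ(voxel)/μmax, so the photon never has to stop at voxel boundaries. The sketch below is a minimal illustration with an invented phantom, not DPM's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1D voxelized phantom: linear attenuation coefficient per 1 mm voxel (1/mm).
mu = np.array([0.005] * 40 + [0.02] * 30 + [0.008] * 58)   # 128 voxels, heterogeneous
mu_max = mu.max()

def first_interaction_depth():
    """Depth (mm) of the first real photon interaction via Woodcock (delta) tracking."""
    x = 0.0
    while x < mu.size:                       # phantom thickness in mm (1 mm voxels)
        x += -np.log(rng.random()) / mu_max  # free path sampled with the majorant
        if x >= mu.size:
            return None                      # photon escaped the phantom
        if rng.random() < mu[int(x)] / mu_max:
            return x                         # real interaction; otherwise virtual, keep going
    return None

depths = [d for d in (first_interaction_depth() for _ in range(20000)) if d is not None]
print(f"mean depth of first interaction: {np.mean(depths):.1f} mm "
      f"({len(depths)} of 20000 photons interacted)")
```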
Algorithm for designing smart factory Industry 4.0
NASA Astrophysics Data System (ADS)
Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm based on a modified V. L. Volkovich method. The algorithm generates options for arranging production with robotized technological equipment operating in automatic mode. The algorithm is based on the optimization solution of the multi-criteria task for additive criteria.
Remote sensing investigations of wetland biomass and productivity for global biosystems research
NASA Technical Reports Server (NTRS)
Klemas, V.
1986-01-01
The relationship between spectral radiance and plant canopy biomass was studied in wetlands. Spectroradiometer data was gathered on Thematic Mapper wavebands 3, 4, and 5, and correlated with canopy and edaphic factors determined by harvesting. The relationship between spectral radiance and plant canopy biomass for major salt and brackish canopy types was determined. Algorithms were developed for biomass measurement in mangrove swamps. The influence of latitudinal variability in canopy structure on biomass assessment of selected plants was investigated. Brackish marsh biomass estimates were obtained from low altitude aircraft and compared with ground measurements. Annual net aerial primary productivity estimates computed from spectral radiance data were compiled for a Spartina alterniflora marsh. Spectral radiance data were expressed as vegetation or infrared index values. Biomass estimates computed from models were in close agreement with biomass estimates determined from harvests.
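A minimal sketch of the index-plus-regression step described above (radiance expressed as a vegetation index, then calibrated against harvested biomass) is given below; the band values and biomass figures are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical plot-level measurements: TM band 3 (red) and band 4 (NIR) radiances
# paired with harvested canopy biomass (g dry weight per m2). Not the study's data.
red = np.array([42.0, 38.0, 35.0, 30.0, 26.0, 22.0])
nir = np.array([55.0, 63.0, 70.0, 78.0, 85.0, 92.0])
biomass = np.array([310.0, 420.0, 540.0, 690.0, 820.0, 960.0])

ndvi = (nir - red) / (nir + red)                 # vegetation index per plot

# Simple linear calibration: biomass = b0 + b1 * NDVI
b1, b0 = np.polyfit(ndvi, biomass, 1)
print(f"biomass ~ {b0:.0f} + {b1:.0f} * NDVI")
print("predicted biomass at NDVI = 0.5:", round(b0 + b1 * 0.5))
```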
Some issues in numerical simulation of nonlinear structural response
NASA Technical Reports Server (NTRS)
Hibbitt, H. D.
1989-01-01
The development of commercial finite element software is addressed. This software provides practical tools that are used in an astonishingly wide range of engineering applications that include critical aspects of the safety evaluation of nuclear power plants or of heavily loaded offshore structures in the hostile environments of the North Sea or the Arctic, major design activities associated with the development of airframes for high strength and minimum weight, thermal analysis of electronic components, and the design of sports equipment. In the more advanced application areas, the effectiveness of the product depends critically on the quality of the mechanics and mechanics related algorithms that are implemented. Algorithmic robustness is of primary concern. The methods chosen should maximize reliability with minimal understanding required on the part of the user. Computational efficiency is also important because there are always limited resources, and hence problems that are too time consuming or costly. Finally, some areas where research work will provide new methods and improvements are discussed.
Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data
NASA Technical Reports Server (NTRS)
Deschamps, P.-Y.; Frouin, R.
1997-01-01
The investigation focuses on two key issues in satellite ocean color remote sensing, namely the presence of whitecaps on the sea surface and the validity of the aerosol models selected for the atmospheric correction of SeaWiFS data. Experiments were designed and conducted at the Scripps Institution of Oceanography to measure the optical properties of whitecaps and to study the aerosol optical properties in a typical mid-latitude coastal environment. CIMEL Electronique sunphotometers, now integrated in the AERONET network, were also deployed permanently in Bermuda and in Lanai, calibration/validation sites for SeaWiFS and MODIS. Original results were obtained on the spectral reflectance of whitecaps and on the choice of aerosol models for atmospheric correction schemes and the type of measurements that should be made to verify those schemes. Bio-optical algorithms to remotely sense primary productivity from space were also evaluated, as well as current algorithms to estimate PAR at the earth's surface.
Advanced CHP Control Algorithms: Scope Specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan
2017-10-01
Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet period. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime exposure of antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter in January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be obtained. The algorithm provides a systematic and repeatable approach to estimating the antimicrobial exposure throughout the rearing period, independent of rearing site for finisher batches, as a lifetime exposure measurement. Copyright © 2017 Elsevier B.V. All rights reserved.
Demosaicking algorithm for the Kodak-RGBW color filter array
NASA Astrophysics Data System (ADS)
Rafinazari, M.; Dubois, E.
2015-01-01
Digital cameras capture images through different Color Filter Arrays and then reconstruct the full color image. Each CFA pixel only captures one primary color component; the other primary components will be estimated using information from neighboring pixels. During the demosaicking algorithm, the two unknown color components will be estimated at each pixel location. Most of the demosaicking algorithms use the RGB Bayer CFA pattern with Red, Green and Blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm using the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset and the results have been compared with previous work.
NASA Astrophysics Data System (ADS)
Ha, Jeongmok; Jeong, Hong
2016-07-01
This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1989-01-01
A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
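A tensor product array computation of the kind targeted by these primitives can be evaluated without ever forming the large operator: applying 1D operators A and B along the two directions of an array X is equivalent to evaluating (A ⊗ B)vec(X) as B·X·Aᵀ. The sketch below checks that identity numerically (in serial Python, purely as an illustration of the computation these parallel kernels compose).

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))          # 1D operator along the first tensor direction
B = rng.normal(size=(5, 5))          # 1D operator along the second tensor direction
X = rng.normal(size=(5, 4))          # 2D data array (vec(X) stacks its columns)

# Tensor product operator applied without forming the Kronecker matrix:
# (A kron B) vec(X) == vec(B @ X @ A.T)
y_structured = (B @ X @ A.T).flatten(order="F")
y_explicit = np.kron(A, B) @ X.flatten(order="F")
print(np.allclose(y_structured, y_explicit))     # True
```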
Han, Bing; Ding, Chibiao; Zhong, Lihua; Liu, Jiayin; Qiu, Xiaolan; Hu, Yuxin; Lei, Bin
2018-01-01
The Gaofen-3 (GF-3) data processor was developed as a workstation-based GF-3 synthetic aperture radar (SAR) data processing system. The processor consists of two vital subsystems of the GF-3 ground segment, which are referred to as data ingesting subsystem (DIS) and product generation subsystem (PGS). The primary purpose of DIS is to record and catalogue GF-3 raw data with a transferring format, and PGS is to produce slant range or geocoded imagery from the signal data. This paper presents a brief introduction of the GF-3 data processor, including descriptions of the system architecture, the processing algorithms and its output format. PMID:29534464
NASA Technical Reports Server (NTRS)
Casas, J. C.; Koziana, J. V.; Saylor, M. S.; Kindle, E. C.
1982-01-01
Problems associated with the development of the measurement of air pollution from satellites (MAPS) experiment program are addressed. The primary thrust of this research was the utilization of the MAPS experiment data in three application areas: low altitude aircraft flights (one to six km); mid altitude aircraft flights (eight to 12 km); and orbiting space platforms. Extensive research work in four major areas of data management was the framework for implementation of the MAPS experiment technique. These areas are: (1) data acquisition; (2) data processing, analysis and interpretation algorithms; (3) data display techniques; and (4) information production.
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2005-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20%, in cases with up to 80% effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small. HSB failed in February 2005, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC in April 2005 and is being used to analyze near-real time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.
Incorporation of quality updates for JPSS CGS Products
NASA Astrophysics Data System (ADS)
Cochran, S.; Grant, K. D.; Ibrahim, W.; Brueske, K. F.; Smit, P.
2016-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS) replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of the use of this approach to implement algorithm changes into operations will also be detailed.
Methods and Tools for Product Quality Maintenance in JPSS CGS
NASA Astrophysics Data System (ADS)
Cochran, S.; Smit, P.; Grant, K. D.; Jamilkowski, M. L.
2015-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS) replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of the use of this approach to implement algorithm changes into operations will also be detailed.
Zeng, Chen; Xu, Huiping; Fischer, Andrew M.
2016-01-01
Ocean color remote sensing significantly contributes to our understanding of phytoplankton distribution and abundance and primary productivity in the Southern Ocean (SO). However, the current SO in situ optical database is still insufficient and unevenly distributed. This limits the ability to produce robust and accurate measurements of satellite-based chlorophyll. Based on data collected on cruises around the Antarctic Peninsula (AP) in January 2014 and 2016, this research intends to enhance our knowledge of SO water and atmospheric optical characteristics and address satellite algorithm deficiency of ocean color products. We collected high resolution in situ water leaving reflectance (±1 nm band resolution), simultaneous in situ chlorophyll-a concentrations and satellite (MODIS and VIIRS) water leaving reflectance. Field samples show that clouds have a great impact on the visible green bands and are difficult to detect because NASA protocols apply the NIR band as a cloud contamination threshold. When compared to global case I water, water around the AP has lower water leaving reflectance and a narrower blue-green band ratio, which explains chlorophyll-a underestimation in high chlorophyll-a regions and overestimation in low chlorophyll-a regions. VIIRS shows higher spatial coverage and detection accuracy than MODIS. After coefficient improvement, VIIRS is able to predict chlorophyll-a with 53% accuracy. PMID:27941596
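The blue-green band-ratio algorithm discussed above has the standard OCx polynomial form, log10(chl) = a0 + a1·R + ... + a4·R^4, with R the log ratio of the maximum blue to green remote-sensing reflectance. The sketch below uses illustrative global-style coefficients; the "coefficient improvement" reported in the study amounts to re-fitting such coefficients against regional in situ chlorophyll.

```python
import numpy as np

def band_ratio_chl(rrs_blue, rrs_green, a=(0.2424, -2.7423, 1.8017, 0.0015, -1.2280)):
    """
    OCx-style blue-green band-ratio chlorophyll (mg m-3):
        log10(chl) = a0 + a1*R + a2*R^2 + a3*R^3 + a4*R^4
    where R = log10(max(Rrs_blue) / Rrs_green).
    Coefficients here are illustrative global-style values; regional tuning
    re-fits them against in situ chlorophyll.
    """
    r = np.log10(np.max(rrs_blue) / rrs_green)
    log_chl = sum(ai * r**i for i, ai in enumerate(a))
    return 10.0 ** log_chl

# Toy remote-sensing reflectances (sr^-1): two blue bands and one green band
print(round(band_ratio_chl(rrs_blue=[0.0065, 0.0058], rrs_green=0.0052), 3))
```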
Zeng, Chen; Xu, Huiping; Fischer, Andrew M
2016-12-07
Ocean color remote sensing significantly contributes to our understanding of phytoplankton distribution and abundance and primary productivity in the Southern Ocean (SO). However, the current SO in situ optical database is still insufficient and unevenly distributed. This limits the ability to produce robust and accurate measurements of satellite-based chlorophyll. Based on data collected on cruises around the Antarctic Peninsula (AP) in January 2014 and 2016, this research intends to enhance our knowledge of SO water and atmospheric optical characteristics and address satellite algorithm deficiency of ocean color products. We collected high resolution in situ water leaving reflectance (±1 nm band resolution), simultaneous in situ chlorophyll-a concentrations and satellite (MODIS and VIIRS) water leaving reflectance. Field samples show that clouds have a great impact on the visible green bands and are difficult to detect because NASA protocols apply the NIR band as a cloud contamination threshold. When compared to global case I water, water around the AP has lower water leaving reflectance and a narrower blue-green band ratio, which explains chlorophyll-a underestimation in high chlorophyll-a regions and overestimation in low chlorophyll-a regions. VIIRS shows higher spatial coverage and detection accuracy than MODIS. After coefficient improvement, VIIRS is able to predict chlorophyll-a with 53% accuracy.
"ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"
DOE Office of Scientific and Technical Information (OSTI.GOV)
SANTHI, NANDAKISHORE
We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓ·q^(m−1)/n). This is an improvement over the earlier proof based on the one-point algebraic-geometric decoding method. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.
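As a quick numerical illustration of the two radii quoted above, the following Python sketch simply evaluates both expressions; it is not code from the cited report, and the example parameters are arbitrary.

```python
from math import sqrt, prod

def pellikaan_wu_radius(ell, m, q, n):
    """Relative error-correction radius 1 - sqrt(ell * q**(m-1) / n)
    quoted for list decoding of RM_q(ell, m, n), with n <= q**m and ell <= q."""
    return 1.0 - sqrt(ell * q ** (m - 1) / n)

def product_rs_radius(ks, q):
    """Relative radius prod_i (1 - sqrt(k_i / q)) quoted for the recursive
    decoder of a product Reed-Solomon code with component dimensions k_i."""
    return prod(1.0 - sqrt(k / q) for k in ks)

# Arbitrary example: RM_q(1, 2, n = q^2) over GF(16) vs. a [16,4] x [16,4] product RS code.
print(pellikaan_wu_radius(ell=1, m=2, q=16, n=256))   # 0.75
print(product_rs_radius(ks=[4, 4], q=16))             # 0.25
```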
Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia
2017-04-01
Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia
2017-01-01
Background Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. Aim To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Design and setting Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Method Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. Results From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The ‘predictAL-10’ risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the ‘predictAL-9’), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. Conclusion The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. PMID:28360074
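The headline metric reported above is the c-index. For readers unfamiliar with it, the sketch below computes a c-index from predicted risks and observed outcomes; it is a generic illustration, not the study's code, and the toy data are invented.

```python
from itertools import combinations

def c_index(scores, outcomes):
    """Concordance (c) index: fraction of case/non-case pairs in which the case
    received the higher predicted risk; tied scores count as 0.5."""
    pairs = concordant = 0
    for (s_i, y_i), (s_j, y_j) in combinations(zip(scores, outcomes), 2):
        if y_i == y_j:            # pairs with the same outcome are uninformative
            continue
        pairs += 1
        case_score = s_i if y_i == 1 else s_j
        ctrl_score = s_j if y_i == 1 else s_i
        if case_score > ctrl_score:
            concordant += 1
        elif case_score == ctrl_score:
            concordant += 0.5
    return concordant / pairs

# Toy check: perfectly separated risks give a c-index of 1.0.
print(c_index([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```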
Giordano, Anna; Barresi, Antonello A; Fissore, Davide
2011-01-01
The aim of this article is to show a procedure to build the design space for the primary drying of a pharmaceutical lyophilization process. Mathematical simulation of the process is used to identify the operating conditions that allow preserving product quality and meeting operating constraints posed by the equipment. In fact, product temperature has to be maintained below a limit value throughout the operation, and the sublimation flux has to be lower than the maximum value allowed by the capacity of the condenser, besides avoiding choking flow in the duct connecting the drying chamber to the condenser. Few experimental runs are required to get the values of the parameters of the model: the dynamic parameters estimation algorithm, an advanced tool based on the pressure rise test, is used for this purpose. A simple procedure is proposed to take parameter uncertainty into account, thus making it possible to find recipes that fulfil the process constraints within the required uncertainty range. The same approach can be effective in taking into account the heterogeneity of the batch when designing the freeze-drying recipe. Copyright © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
Identifying patients with ischemic heart disease in an electronic medical record.
Ivers, Noah; Pylypenko, Bogdan; Tu, Karen
2011-01-01
Increasing utilization of electronic medical records (EMRs) presents an opportunity to efficiently measure quality indicators in primary care. Achieving this goal requires the development of accurate patient-disease registries. This study aimed to develop and validate an algorithm for identifying patients with ischemic heart disease (IHD) within the EMR. An algorithm was developed to search the unstructured text within the medical history fields in the EMR for IHD-related terminology. This algorithm was applied to a 5% random sample of adult patient charts (n = 969) drawn from a convenience sample of 17 Ontario family physicians. The accuracy of the algorithm for identifying patients with IHD was compared to the results of 3 trained chart abstractors. The manual chart abstraction identified 87 patients with IHD in the random sample (prevalence = 8.98%). The accuracy of the algorithm for identifying patients with IHD was as follows: sensitivity = 72.4% (95% confidence interval [CI]: 61.8-81.5); specificity = 99.3% (95% CI: 98.5-99.8); positive predictive value = 91.3% (95% CI: 82.0-96.7); negative predictive value = 97.3% (95% CI: 96.1-98.3); and kappa = 0.79 (95% CI: 0.72-0.86). Patients with IHD can be accurately identified by applying a search algorithm to the medical history fields in the EMR of primary care providers who were not using standardized approaches to code diagnoses. The accuracy compares favorably to other methods for identifying patients with IHD. The results of this study may aid policy makers, researchers, and clinicians to develop registries and to examine quality indicators for IHD in primary care.
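The free-text search step described above can be pictured with a short, purely illustrative Python sketch; the IHD term list, the record structure, and the helper names are assumptions rather than the study's actual implementation.

```python
import re

# Hypothetical IHD terminology; the study's actual term list is not given here.
IHD_TERMS = [r"ischemic heart disease", r"\bIHD\b", r"myocardial infarction",
             r"\bMI\b", r"\bCABG\b", r"angioplasty", r"\bangina\b", r"\bstent\b"]
IHD_PATTERN = re.compile("|".join(IHD_TERMS), flags=re.IGNORECASE)

def flag_ihd(medical_history_text):
    """Return True if the free-text history field matches any IHD-related term."""
    return bool(IHD_PATTERN.search(medical_history_text))

def validity(flags, reference):
    """Sensitivity, specificity, PPV, NPV of the flags vs. chart abstraction."""
    tp = sum(f and r for f, r in zip(flags, reference))
    fp = sum(f and not r for f, r in zip(flags, reference))
    fn = sum((not f) and r for f, r in zip(flags, reference))
    tn = sum((not f) and (not r) for f, r in zip(flags, reference))
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

print(flag_ihd("Hx of CABG 2005, hypertension"))   # True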
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, where the coefficients play a major role in multiple removal in the filter coefficient space. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with a non-Gaussian maximization (L1-norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and the FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
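For readers unfamiliar with the fast iterative shrinkage thresholding iteration mentioned above, here is a minimal, generic sketch of an L1-penalized least-squares solve with Nesterov acceleration; the operator A, data b, and penalty lam are placeholders, and this is not the paper's exact filter formulation.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_l1(A, b, lam, n_iter=200):
    """Minimize 0.5*||A f - b||^2 + lam*||f||_1 for filter coefficients f.
    A maps filter coefficients to predicted multiples; b is the observed trace."""
    f = np.zeros(A.shape[1])
    z, t = f.copy(), 1.0
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        f_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = f_new + (t - 1.0) / t_new * (f_new - f)   # Nesterov momentum step
        f, t = f_new, t_new
    return f

# Toy usage with a random operator and data.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
print(fista_l1(A, b, lam=1.0)[:5])
```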
Research on laser marking speed optimization by using genetic algorithm.
Wang, Dongyun; Yu, Qiwei; Zhang, Yu
2015-01-01
The laser marking machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed, and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%.
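The abstract does not state the encoding or objective used, so the skeleton below is only a generic binary genetic algorithm of the kind described (tournament selection, one-point crossover, bit-flip mutation), with the fitness function left as a placeholder for a marking-throughput model.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=100,
                      p_cross=0.8, p_mut=0.02):
    """Minimal binary GA. `fitness` maps a bit list to a score to maximize
    (e.g., a hypothetical marking-throughput model)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                          # keep the two best (elitism)
        while len(next_pop) < pop_size:
            p1, p2 = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            c1, c2 = p1[:], p2[:]
            if random.random() < p_cross:              # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):
                for i in range(n_bits):                # bit-flip mutation
                    if random.random() < p_mut:
                        child[i] ^= 1
                next_pop.append(child)
        pop = next_pop[:pop_size]
    return max(pop, key=fitness)

# Placeholder fitness: count of ones (stand-in for a real throughput objective).
print(genetic_algorithm(fitness=sum))
```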
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary-genetic algorithm (B-GA), a binary-simulated annealing algorithm (B-SA) and a binary-tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem show that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
NASA Astrophysics Data System (ADS)
Madani, Nima; Kimball, John S.; Running, Steven W.
2017-11-01
In the light use efficiency (LUE) approach to estimating gross primary productivity (GPP), plant productivity is linearly related to absorbed photosynthetically active radiation, assuming that plants absorb and convert solar energy into biomass at up to a maximum LUE (LUEmax) rate, which is assumed to vary conservatively within a given biome type. However, it has been shown that photosynthetic efficiency can vary within biomes. In this study, we used 149 global CO2 flux towers to derive the optimum LUE (LUEopt) under prevailing climate conditions for each tower location, stratified according to model training and test sites. Unlike LUEmax, LUEopt varies according to heterogeneous landscape characteristics and species traits. The LUEopt data showed large spatial variability within and between biome types, so that a simple biome classification explained only 29% of LUEopt variability over 95 global tower training sites. The use of explanatory variables in a mixed effect regression model explained 62.2% of the spatial variability in tower LUEopt data. The resulting regression model was used for global extrapolation of the LUEopt data and GPP estimation. The GPP estimated using the new LUEopt map showed significant improvement relative to global tower data, including a 15% R2 increase and 34% root-mean-square error reduction relative to baseline GPP calculations derived from biome-specific LUEmax constants. The new global LUEopt map is expected to improve the performance of LUE-based GPP algorithms for better assessment and monitoring of global terrestrial productivity and carbon dynamics.
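At its core, any LUE-based GPP algorithm multiplies absorbed PAR by a (possibly down-regulated) light use efficiency. The sketch below is a minimal illustration of that formulation, not the authors' model; the scalar forms, units, and example numbers are assumptions.

```python
def gpp_lue(apar, lue_opt, tmin_scalar=1.0, vpd_scalar=1.0):
    """Light-use-efficiency GPP estimate (gC m-2 d-1), a generic illustration.

    apar        : absorbed photosynthetically active radiation (MJ m-2 d-1)
    lue_opt     : optimum light use efficiency at the location (gC MJ-1)
    tmin_scalar : 0-1 cold-temperature down-regulation (hypothetical form)
    vpd_scalar  : 0-1 vapour-pressure-deficit down-regulation (hypothetical form)
    """
    return lue_opt * tmin_scalar * vpd_scalar * apar

# Example: 8 MJ m-2 d-1 of APAR at LUEopt = 1.5 gC MJ-1 under mild stress.
print(gpp_lue(apar=8.0, lue_opt=1.5, tmin_scalar=0.9, vpd_scalar=0.8))  # ~8.6 gC m-2 d-1
```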
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline:
1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications
2. Data products and latencies
3. Algorithm highlights
4. SMAP Algorithm Testbed
5. SMAP Working Groups and community engagement
Orgeron, Gabriela M; Te Riele, Anneline; Tichnell, Crystal; Wang, Weijia; Murray, Brittney; Bhonsale, Aditya; Judge, Daniel P; Kamel, Ihab R; Zimmerman, Stephan L; Tandri, Harikrishna; Calkins, Hugh; James, Cynthia A
2018-02-01
Ventricular arrhythmias are a feared complication of arrhythmogenic right ventricular dysplasia/cardiomyopathy. In 2015, an International Task Force Consensus Statement proposed a risk stratification algorithm for implantable cardioverter-defibrillator placement in arrhythmogenic right ventricular dysplasia/cardiomyopathy. To evaluate performance of the algorithm, 365 arrhythmogenic right ventricular dysplasia/cardiomyopathy patients were classified as having a Class I, IIa, IIb, or III indication per the algorithm at baseline. Survival free from sustained ventricular arrhythmia (VT/VF) in follow-up was the primary outcome. Incidence of ventricular fibrillation/flutter cycle length <240 ms was also assessed. Two hundred twenty-four (61%) patients had a Class I implantable cardioverter-defibrillator indication; 80 (22%), Class IIa; 54 (15%), Class IIb; and 7 (2%), Class III. During a median 4.2 (interquartile range, 1.7-8.4)-year follow-up, 190 (52%) patients had VT/VF and 60 (16%) had ventricular fibrillation/flutter. Although the algorithm appropriately differentiated risk of VT/VF, incidence of VT/VF was underestimated (observed versus expected: 29.6 [95% confidence interval, 25.2-34.0] versus >10%/year Class I; 15.5 [confidence interval 11.1-21.6] versus 1% to 10%/year Class IIa). In addition, the algorithm did not differentiate survival free from ventricular fibrillation/flutter between Class I and IIa patients (P = 0.97) or for VT/VF in Class I and IIa primary prevention patients (P = 0.22). Adding Holter results (<1000 premature ventricular contractions/24 hours) to the International Task Force Consensus classification differentiated risks. While the algorithm differentiates arrhythmic risk well overall, it did not distinguish ventricular fibrillation/flutter risks of patients with Class I and IIa implantable cardioverter-defibrillator indications. Limited differentiation was seen for primary prevention cases. As these are vital uncertainties in clinical decision-making, refinements to the algorithm are suggested prior to implementation. © 2018 American Heart Association, Inc.
NASA Technical Reports Server (NTRS)
Brown, Christopher W.; Brock, John C.
1998-01-01
The successful launch of the National Space Development Agency of Japan (NASDA) Ocean Color and Temperature Sensor (OCTS) in August 1996, and the launch of Orbital Science Corporation's (OSC) Sea-viewing Wide-Field-of-view Sensor (SeaWiFS) in August 1997 signaled the beginning of a new era for ocean color research and application. These data may be used to remotely evaluate 1) water quality, 2) transport of sediments and adhered pollutants, 3) primary production, upon which commercial shellfish and finfish populations depend for food, and 4) harmful algal blooms which pose a threat to public health and economies of affected areas. Several US government agencies have recently expressed interest in monitoring U.S. coastal waters using optical remote sensing. This renewed interest is broadly driven by 1) resource management concerns over the impact of coastward shifts in population and land use on the ecosystems of estuaries, wetlands, nearshore benthic environments and fisheries, 2) recognition of the need to understand short time scale global change due to urbanization of sensitive land-margin ecosystems, and 3) national security issues. Satellite ocean color sensors have the potential to furnish data at the appropriate time and space scales to evaluate and resolve these concerns and problems. In this draft technical memorandum, we outline our progress during the first year of our SIMBIOS project to evaluate ocean color bio-optical algorithms and products generated using OCTS and SeaWiFS data in coastal US waters.
Analyzing angular distributions for two-step dissociation mechanisms in velocity map imaging.
Straus, Daniel B; Butler, Lynne M; Alligood, Bridget W; Butler, Laurie J
2013-08-15
Increasingly, velocity map imaging is becoming the method of choice to study photoinduced molecular dissociation processes. This paper introduces an algorithm to analyze the measured net speed, P(vnet), and angular, β(vnet), distributions of the products from a two-step dissociation mechanism, where the first step but not the second is induced by absorption of linearly polarized laser light. Typically, this might be the photodissociation of a C-X bond (X = halogen or other atom) to produce an atom and a momentum-matched radical that has enough internal energy to subsequently dissociate (without the absorption of an additional photon). It is this second step, the dissociation of the unstable radicals, that one wishes to study, but the measured net velocity of the final products is the vector sum of the velocity imparted to the radical in the primary photodissociation (which is determined by taking data on the momentum-matched atomic cophotofragment) and the additional velocity vector imparted in the subsequent dissociation of the unstable radical. The algorithm allows one to determine, from the forward-convolution fitting of the net velocity distribution, the distribution of velocity vectors imparted in the second step of the mechanism. One can thus deduce the secondary velocity distribution, characterized by a speed distribution P(v1,2°) and an angular distribution I(θ2°), where θ2° is the angle between the dissociating radical's velocity vector and the additional velocity vector imparted to the product detected from the subsequent dissociation of the radical.
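A crude way to picture the forward-convolution idea, sampling primary and secondary velocity vectors and histogramming their vector sum, is sketched below; it assumes isotropic directions for simplicity, whereas the actual algorithm carries the measured anisotropic angular distributions, and the speed samples here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_isotropic(n, rng):
    """Unit vectors distributed uniformly over the sphere."""
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    cos_t = rng.uniform(-1.0, 1.0, n)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))

def net_speed_distribution(v1_speeds, v2_speeds, n=200_000):
    """Forward-convolve primary (radical) and secondary speed samples:
    v_net = v_1 + v_2 with directions drawn isotropically (a simplification)."""
    s1 = rng.choice(v1_speeds, n)[:, None] * sample_isotropic(n, rng)
    s2 = rng.choice(v2_speeds, n)[:, None] * sample_isotropic(n, rng)
    return np.linalg.norm(s1 + s2, axis=1)

# Hypothetical speed samples in m/s, then a histogram of the net speeds.
vnet = net_speed_distribution(np.full(1000, 1200.0), np.full(1000, 400.0))
hist, edges = np.histogram(vnet, bins=50)
```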
NASA Astrophysics Data System (ADS)
Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin
2017-08-01
Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully by using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully by using a proxy imager, the Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
Toward a computational psycholinguistics of reference production.
van Deemter, Kees; Gatt, Albert; van Gompel, Roger P G; Krahmer, Emiel
2012-04-01
This article introduces the topic "Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference" of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms. Copyright © 2012 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.
2013-01-01
Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the different coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs. The other algorithm utilizes five LOSs. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with the dropsonde data to validate their results.
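As a simplified illustration of how two LOS radial velocities can constrain a horizontal wind vector (this is not the APOLO implementation; the geometry is idealized and platform motion is assumed to be already removed), consider:

```python
import numpy as np

def wind_from_two_los(vr1, vr2, az1_deg, az2_deg):
    """Recover eastward (u) and northward (v) wind from two LOS radial speeds.

    vr_k   : radial velocity along beam k, positive away from the lidar (m/s)
    az_k   : beam azimuth, degrees clockwise from north
    Assumes vertical wind and platform motion have already been removed.
    """
    a1, a2 = np.radians([az1_deg, az2_deg])
    # Each beam measures vr_k = u*sin(az_k) + v*cos(az_k); solve the 2x2 system.
    A = np.array([[np.sin(a1), np.cos(a1)],
                  [np.sin(a2), np.cos(a2)]])
    u, v = np.linalg.solve(A, np.array([vr1, vr2]))
    return u, v

# Example: a 10 m/s westerly (u = 10, v = 0) seen by beams at 45 and 135 degrees azimuth.
print(wind_from_two_los(10 * np.sin(np.radians(45)), 10 * np.sin(np.radians(135)), 45, 135))
```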
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...
2017-07-25
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
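The flavor of such distributed algorithms can be conveyed with a toy consensus-averaging example, in which networked agents agree on a shared value by exchanging information only with neighbors; this is an illustration of the general idea, not any specific OPF or control method from the surveyed literature.

```python
import numpy as np

def consensus_averaging(values, adjacency, n_iter=100, step=0.2):
    """Each agent repeatedly moves toward its neighbors' values; on a connected
    graph with a small enough step, all agents converge to the network average."""
    x = np.asarray(values, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    for _ in range(n_iter):
        x = x - step * (L @ x)              # x_i += step * sum_j a_ij (x_j - x_i)
    return x

# Four buses on a line graph agreeing on an average setpoint.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(consensus_averaging([1.0, 3.0, 5.0, 7.0], adj))   # -> all approximately 4.0
```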
Ding, N S; Hart, A; De Cruz, P
2016-01-01
Nonresponse and loss of response to anti-TNF therapies in Crohn's disease represent significant clinical problems for which clear management guidelines are lacking. To review the incidence, mechanisms, and predictors of primary nonresponse and secondary loss of response, and to formulate practical clinical algorithms to guide management. Through a systematic literature review, 503 articles were identified which fit the inclusion criteria. Primary nonresponse to anti-TNF treatment affects 13-40% of patients. Secondary loss of response to anti-TNF occurs in 23-46% of patients when determined according to dose intensification, and 5-13% of patients when gauged by drug discontinuation rates. Recent evidence suggests that the mechanisms underlying primary nonresponse and secondary loss of response are multifactorial and include disease characteristics (phenotype, location, severity); drug (pharmacokinetic, pharmacodynamic or immunogenicity) and treatment strategy (dosing regimen) related factors. Clinical algorithms that employ therapeutic drug monitoring (using anti-TNF trough levels and anti-drug antibody levels) may be used to determine the underlying cause of primary nonresponse and secondary loss of response respectively, and guide clinicians as to which patients are most likely to respond to anti-TNF therapy and help optimise drug therapy for those who are losing response to anti-TNF therapy. Nonresponse or loss of response to anti-TNF occurs commonly in Crohn's disease. Clinical algorithms utilising therapeutic drug monitoring may establish the mechanisms for treatment failure and help guide the subsequent therapeutic approach. © 2015 John Wiley & Sons Ltd.
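The kind of therapeutic-drug-monitoring branch logic described above can be sketched as follows; the trough threshold and the suggested actions are illustrative placeholders only and are not clinical guidance.

```python
def loss_of_response_action(trough_level, ada_positive, therapeutic_trough=5.0):
    """Illustrative therapeutic-drug-monitoring branch for secondary loss of
    response. The 5.0 ug/mL threshold is a placeholder, not a clinical cutoff."""
    if trough_level >= therapeutic_trough:
        # Adequate drug exposure: failure is likely mechanistic.
        return "reassess disease activity / consider switching out of the anti-TNF class"
    if ada_positive:
        # Sub-therapeutic level driven by immunogenicity (anti-drug antibodies).
        return "switch to another anti-TNF agent or add an immunomodulator"
    # Sub-therapeutic level without antibodies: pharmacokinetic failure.
    return "intensify dose or shorten the dosing interval"

print(loss_of_response_action(trough_level=2.1, ada_positive=False))
```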
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
NASA Astrophysics Data System (ADS)
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of a real-time localization system and SLAM technology for a ROS-based robot. It proposes an improved particle filter localization algorithm that effectively reduces the time needed to match laser radar scans against the map; in addition, ultra-wideband ranging directly improves the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by roughly five-sixths, which further cuts the matching workload of the algorithm on the robot.
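The re-sampling step that the authors report reducing is, in a generic particle filter, often implemented as low-variance (systematic) resampling; the sketch below shows that standard step and is not taken from the paper.

```python
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng()):
    """Systematic resampling: draw one uniform offset, then take evenly spaced
    positions through the cumulative weights; low-variance and O(N)."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights / np.sum(weights))
    indexes = np.searchsorted(cumulative, positions)
    return particles[indexes]

# Example: 5 particles; the heavily weighted one dominates the resampled set.
parts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
print(systematic_resample(parts, np.array([0.05, 0.05, 0.8, 0.05, 0.05])))
```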
17 CFR 41.27 - Prohibition of dual trading in security futures products by floor brokers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... through a trading system that electronically matches bids and offers pursuant to a predetermined algorithm... participants with a time or place advantage or the ability to override a predetermined algorithm must submit an... override a predetermined algorithm from trading a security futures product for accounts in which these same...
17 CFR 41.27 - Prohibition of dual trading in security futures products by floor brokers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... through a trading system that electronically matches bids and offers pursuant to a predetermined algorithm... participants with a time or place advantage or the ability to override a predetermined algorithm must submit an... override a predetermined algorithm from trading a security futures product for accounts in which these same...
Rayan, Anwar; Raiyn, Jamal
2017-01-01
Cancer is considered one of the primary diseases that cause morbidity and mortality in millions of people worldwide and due to its prevalence, there is undoubtedly an unmet need to discover novel anticancer drugs. However, the traditional process of drug discovery and development is lengthy and expensive, so the application of in silico techniques and optimization algorithms in drug discovery projects can provide a solution, saving time and costs. A set of 617 approved anticancer drugs, constituting the active domain, and a set of 2,892 natural products, constituting the inactive domain, were employed to build predictive models and to index natural products for their anticancer bioactivity. Using the iterative stochastic elimination optimization technique, we obtained a highly discriminative and robust model, with an area under the curve of 0.95. Twelve natural products that scored highly as potential anticancer drug candidates are disclosed. Searching the scientific literature revealed that few of those molecules (Neoechinulin, Colchicine, and Piperolactam) have already been experimentally screened for their anticancer activity and found active. The other phytochemicals await evaluation for their anticancerous activity in wet lab. PMID:29121120
Rayan, Anwar; Raiyn, Jamal; Falah, Mizied
2017-01-01
Cancer is considered one of the primary diseases that cause morbidity and mortality in millions of people worldwide and due to its prevalence, there is undoubtedly an unmet need to discover novel anticancer drugs. However, the traditional process of drug discovery and development is lengthy and expensive, so the application of in silico techniques and optimization algorithms in drug discovery projects can provide a solution, saving time and costs. A set of 617 approved anticancer drugs, constituting the active domain, and a set of 2,892 natural products, constituting the inactive domain, were employed to build predictive models and to index natural products for their anticancer bioactivity. Using the iterative stochastic elimination optimization technique, we obtained a highly discriminative and robust model, with an area under the curve of 0.95. Twelve natural products that scored highly as potential anticancer drug candidates are disclosed. Searching the scientific literature revealed that few of those molecules (Neoechinulin, Colchicine, and Piperolactam) have already been experimentally screened for their anticancer activity and found active. The other phytochemicals await evaluation for their anticancerous activity in wet lab.
Using L-M BP Algorithm to Forecast the 305 Days Production of First-Breed Dairy
NASA Astrophysics Data System (ADS)
Wei, Xiaoli; Qi, Guoqiang; Shen, Weizheng; Jian, Sun
To address the shortcomings of the conventional BP algorithm, a BP neural network improved by the Levenberg-Marquardt (L-M) algorithm is put forward. On the basis of this network, a prediction model for 305-day milk production was set up. Traditional methods need at least 305 days to obtain these data, but this model can forecast a first-breed dairy cow's 305-day milk production 215 days in advance. The validity of the improved BP neural network prediction model was verified through experiments.
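The Levenberg-Marquardt update that distinguishes L-M training from plain gradient-descent back-propagation can be sketched generically as below; it is shown on a toy least-squares fit rather than the milk-production network, and all names are illustrative.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w0, n_iter=50, lam=1e-2):
    """Generic L-M iteration: solve (J^T J + lam*I) dw = -J^T r and adapt lam.
    `residual(w)` returns the residual vector, `jacobian(w)` its Jacobian."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(w), jacobian(w)
        dw = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), -J.T @ r)
        if np.sum(residual(w + dw) ** 2) < np.sum(r ** 2):
            w, lam = w + dw, lam * 0.5     # accept step, trust the quadratic model more
        else:
            lam *= 2.0                     # reject step, behave more like gradient descent
    return w

# Toy fit of y = a*x + b (stand-in for a real training problem).
x = np.linspace(0, 1, 20); y = 2.0 * x + 1.0
res = lambda w: w[0] * x + w[1] - y
jac = lambda w: np.column_stack((x, np.ones_like(x)))
print(levenberg_marquardt(res, jac, [0.0, 0.0]))   # approximately [2.0, 1.0]
```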
GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation
Jiang, Bo; Liang, Shunlin; Ma, Han; ...
2016-03-09
Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions, and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 W m-2, and an average bias of 17.59 W m-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...
1995-01-01
In this article, we present a program generation strategy of Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
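For reference, the textbook recursive form of Strassen's algorithm (seven sub-multiplications per level) is sketched below; this is the plain recursion, not the nonrecursive, memory-reduced tensor-product formulation the article develops.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Multiply two 2^n x 2^n matrices using 7 recursive sub-multiplications.
    Falls back to ordinary multiplication below the `leaf` size."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128); B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```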
Verification and Improvement of ERS-1/2 Altimeter Geophysical Data Records for Global Change Studies
NASA Technical Reports Server (NTRS)
Shum, C. K.
2000-01-01
This Final Technical Report summarizes the research work conducted under NASA's Physical Oceanography Program entitled, Verification and Improvement of ERS-1/2 Altimeter Geophysical Data Records for Global Change Studies, for the time period from January 1, 2000 through June 30, 2000. This report also provides a summary of the investigation from July 1, 1997 - June 30, 2000. The primary objectives of this investigation include verification and improvement of the ERS-1 and ERS-2 radar altimeter geophysical data records for distribution of the data to the ESA-approved U.S. ERS-1/-2 investigators for global climate change studies. Specifically, the investigation is to verify and improve the ERS geophysical data record products by calibrating the instrument and assessing accuracy for the ERS-1/-2 orbital, geophysical, media, and instrument corrections. The purpose is to ensure the consistency of constants, standards, and algorithms with the TOPEX/POSEIDON radar altimeter for global climate change studies such as the monitoring and interpretation of long-term sea level change. This investigation has provided the current best precise orbits, with the radial orbit accuracy for ERS-1 (Phases C-G) and ERS-2 estimated at the 3-5 cm rms level, a 30-fold improvement compared to the 1993 accuracy. We have finalized the production and verification of the value-added ERS-1 mission data (Phases A, B, C, D, E, F, and G), in collaboration with the JPL PODAAC and the University of Texas. Orbit and data verification and improvement of algorithms led to the best data product available to date. ERS-2 altimeter data have been improved, and we have been active in the Envisat (2001 launch) GDR algorithm review and improvement. The data improvement of ERS-1 and ERS-2 led to improvement in the global mean sea surface, marine gravity anomaly and bathymetry models, and a study of Antarctica mass balance, which was published in Science in 1998.
Primary chromatic aberration elimination via optimization work with genetic algorithm
NASA Astrophysics Data System (ADS)
Wu, Bo-Wen; Liu, Tung-Kuan; Fang, Yi-Chin; Chou, Jyh-Horng; Tsai, Hsien-Lin; Chang, En-Hao
2008-09-01
Chromatic aberration plays a part in modern optical systems, especially in digitalized and smart optical systems. Much effort has been devoted to eliminating specific chromatic aberrations in order to match the demand for advanced digitalized optical products. Basically, the elimination of axial chromatic and lateral color aberration of an optical lens and system depends on the selection of optical glass. According to reports from glass companies all over the world, the number of various newly developed optical glasses on the market exceeds three hundred. However, due to the complexity of a practical optical system, optical designers have so far had difficulty in finding the right solution to eliminate small axial and lateral chromatic aberrations except by the Damped Least Squares (DLS) method, which is limited insofar as it has not yet managed to find a better optical system configuration. In the present research, genetic algorithms are used to replace traditional DLS so as to eliminate axial and lateral chromatic aberration, by combining the theories of geometric optics in Tessar type lenses and a technique involving Binary/Real Encoding, Multiple Dynamic Crossover and Random Gene Mutation to find a much better configuration of optical glasses. By implementing the algorithms outlined in this paper, satisfactory results can be achieved in eliminating axial and lateral color aberration.
NASA Astrophysics Data System (ADS)
Wu, Dongjun
Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment, and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. The evaluation of genetic algorithms (GA, which are methods based on the idea of natural evolution) as a primary means of solving complicated network problems, both with respect to pricing and with respect to investment and other operating decisions, has been conducted. New constraint-handling techniques in GAs have been studied and tested. The actual application of such constraint-handling techniques in solving practical non-linear optimization problems has been tested on several complex network design problems, with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions for small problems obtained by the proposed GA approach can only be tested by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising in solving difficult problems which are intractable by traditional analytic methods.
MODIS 3km Aerosol Product: Algorithm and Global Perspective
NASA Technical Reports Server (NTRS)
Remer, L. A.; Mattoo, S.; Levy, R. C.; Munchak, L.
2013-01-01
After more than a decade of producing a nominal 10 km aerosol product based on the dark target method, the MODIS aerosol team will be releasing a nominal 3 km product as part of their Collection 6 release. The new product differs from the original 10 km product only in the manner in which reflectance pixels are ingested, organized and selected by the aerosol algorithm. Overall, the 3 km product closely mirrors the 10 km product. However, the finer resolution product is able to retrieve over ocean closer to islands and coastlines, and is better able to resolve fine aerosol features such as smoke plumes over both ocean and land. In some situations, it provides retrievals over entire regions that the 10 km product barely samples. In situations traditionally difficult for the dark target algorithm, such as over bright or urban surfaces, the 3 km product introduces isolated spikes of artificially high aerosol optical depth (AOD) that the 10 km algorithm avoids. Over land, globally, the 3 km product appears to be 0.01 to 0.02 higher than the 10 km product, while over ocean, the 3 km algorithm is retrieving a proportionally greater number of very low aerosol loading situations. Based on collocations with ground-based observations for only six months, expected errors associated with the 3 km land product are determined to be greater than for the 10 km product: ±(0.05 + 0.25 AOD). Over ocean, the suggestion is for expected errors to be the same as the 10 km product: ±(0.03 + 0.05 AOD). The advantage of the product is on the local scale, which will require continued evaluation not addressed here. Nevertheless, the new 3 km product is expected to provide important information complementary to existing satellite-derived products and become an important tool for the aerosol community.
Derived crop management data for the LandCarbon Project
Schmidt, Gail; Liu, Shu-Guang; Oeding, Jennifer
2011-01-01
The LandCarbon project is assessing potential carbon pools and greenhouse gas fluxes under various scenarios and land management regimes to provide information to support the formulation of policies governing climate change mitigation, adaptation and land management strategies. The project is unique in that spatially explicit maps of annual land cover and land-use change are created at the 250-meter pixel resolution. The project uses vast amounts of data as input to the models, including satellite, climate, land cover, soil, and land management data. Management data have been obtained from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) and USDA Economic Research Service (ERS) that provides information regarding crop type, crop harvesting, manure, fertilizer, tillage, and cover crop (U.S. Department of Agriculture, 2011a, b, c). The LandCarbon team queried the USDA databases to pull historic crop-related management data relative to the needs of the project. The data obtained was in table form with the County or State Federal Information Processing Standard (FIPS) and the year as the primary and secondary keys. Future projections were generated for the A1B, A2, B1, and B2 Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) scenarios using the historic data values along with coefficients generated by the project. The PBL Netherlands Environmental Assessment Agency (PBL) Integrated Model to Assess the Global Environment (IMAGE) modeling framework (Integrated Model to Assess the Global Environment, 2006) was used to develop coefficients for each IPCC SRES scenario, which were applied to the historic management data to produce future land management practice projections. The LandCarbon project developed algorithms for deriving gridded data, using these tabular management data products as input. The derived gridded crop type, crop harvesting, manure, fertilizer, tillage, and cover crop products are used as input to the LandCarbon models to represent the historic and the future scenario management data. The overall algorithm to generate each of the gridded management products is based on the land cover and the derived crop type. For each year in the land cover dataset, the algorithm loops through each 250-meter pixel in the ecoregion. If the current pixel in the land cover dataset is an agriculture pixel, then the crop type is determined. Once the crop type is derived, then the crop harvest, manure, fertilizer, tillage, and cover crop values are derived independently for that crop type. The following is the overall algorithm used for the set of derived grids. The specific algorithm to generate each management dataset is discussed in the respective section for that dataset, along with special data handling and a description of the output product.
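The per-pixel derivation described above can be outlined in pseudocode-like Python; the record layout, function names, and lookup-table structure here are assumptions made for illustration, not the project's actual implementation.

```python
def derive_management_grids(land_cover, fips_grid, year, crop_lookup, mgmt_tables):
    """For each 250-m agriculture pixel, assign a crop type and then derive the
    harvest, manure, fertilizer, tillage, and cover-crop values for that type.

    land_cover[r][c] : land-cover class for the year
    fips_grid[r][c]  : county FIPS code of the pixel
    crop_lookup      : (fips, year) -> crop type assignment rule (hypothetical)
    mgmt_tables      : variable name -> {(fips, year, crop): value} from USDA tables
    """
    grids = {name: {} for name in mgmt_tables}
    for r, row in enumerate(land_cover):
        for c, lc in enumerate(row):
            if lc != "agriculture":          # only agricultural pixels get values
                continue
            fips = fips_grid[r][c]
            crop = crop_lookup[(fips, year)]
            for name, table in mgmt_tables.items():
                grids[name][(r, c)] = table.get((fips, year, crop))
    return grids
```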
Early Results from the Global Precipitation Measurement (GPM) Mission in Japan
NASA Astrophysics Data System (ADS)
Kachi, Misako; Kubota, Takuji; Masaki, Takeshi; Kaneko, Yuki; Kanemaru, Kaya; Oki, Riko; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2015-04-01
The Global Precipitation Measurement (GPM) mission is an international collaboration to achieve highly accurate and highly frequent global precipitation observations. The GPM mission consists of the GPM Core Observatory, jointly developed by the U.S. and Japan, and Constellation Satellites that carry microwave radiometers and are provided by the GPM partner agencies. The Dual-frequency Precipitation Radar (DPR) was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and installed on the GPM Core Observatory. The GPM Core Observatory uses a non-sun-synchronous orbit to carry on the diurnal-cycle observations of rainfall begun by the Tropical Rainfall Measuring Mission (TRMM) satellite, and was successfully launched at 3:37 a.m. on February 28, 2014 (JST). The Constellation Satellites, including JAXA's Global Change Observation Mission - Water (GCOM-W1), or "SHIZUKU," are launched by each partner agency around 2014 and contribute to expanded observation coverage and increased observation frequency. JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map (GPM-GSMaP) algorithm, which is the latest version of the Global Satellite Mapping of Precipitation (GSMaP), as a national product to distribute an hourly, 0.1-degree horizontal resolution rainfall map. Major improvements in the GPM-GSMaP algorithm are: 1) improvements in the microwave imager algorithm based on the AMSR2 precipitation standard algorithm, including a new land algorithm and a new coast detection scheme; 2) development of an orographic rainfall correction method for warm rainfall in coastal areas (Taniguchi et al., 2012); 3) updates of databases, including rainfall detection over land and a land surface emission database; 4) development of a microwave sounder algorithm over land (Kida et al., 2012); and 5) development of a gauge-calibrated GSMaP algorithm (Ushio et al., 2013). In addition to those improvements in the algorithms, the number of passive microwave imagers and/or sounders used in GPM-GSMaP was increased compared to the previous version. After early calibration and validation of the products and evaluation that all products achieved the release criteria, all GPM standard products and the GPM-GSMaP product have been released to the public since September 2014. The GPM products can be downloaded via the internet through the JAXA G-Portal (https://www.gportal.jaxa.jp).
Guglielmi, Valeria; Bellia, Alfonso; Pecchioli, Serena; Medea, Gerardo; Parretti, Damiano; Lauro, Davide; Sbraccia, Paolo; Federici, Massimo; Cricelli, Iacopo; Cricelli, Claudio; Lapi, Francesco
2016-11-15
There are some inconsistencies in prevalence estimates of familial hypercholesterolemia (FH) in the general population across Europe due to variable application of its diagnostic criteria. We aimed to investigate the FH epidemiology in Italy by applying the Dutch Lipid Clinic Network (DLCN) score and two alternative diagnostic algorithms to a primary care database. We performed a retrospective population-based study using the Health Search IMS Health Longitudinal Patient Database (HSD) and including active (alive and currently registered with their general practitioners (GPs)) patients on December 31, 2014. Cases of FH were identified by applying the DLCN score. Two further algorithms, based on either ICD9CM coding for FH or some clinical items adopted by the DLCN, were tested against the DLCN itself as the gold standard. We estimated a prevalence of 0.01% for "definite" and 0.18% for "definite" plus "probable" cases as per the DLCN. Algorithms 1 and 2 reported an FH prevalence of 0.9 and 0.13%, respectively. Both algorithms resulted in consistent specificity (1: 99.10%; 2: 99.9%) towards the DLCN, but Algorithm 2 considerably better identified true positives (sensitivity = 85.90%) than Algorithm 1 (sensitivity = 10.10%). The application of the DLCN or valid diagnostic alternatives in the Italian primary care setting provides estimates of FH prevalence consistent with those reported in other screening studies in Caucasian populations. These diagnostic criteria should therefore be fostered among GPs. In the perspective of new FH therapeutic options, the epidemiological picture of FH is even more relevant to foresee the costs and to plan affordable reimbursement programs in Italy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
OLYMPEX Data Workshop: GPM View
NASA Technical Reports Server (NTRS)
Petersen, W.
2017-01-01
OLYMPEX Primary Objectives: Datasets to enable:
(1) Direct validation over complex terrain at multiple scales, liquid and frozen precip types.
(a) Do we capture terrain and synoptic regime transitions, orographic enhancements/structure, the full range of precipitation intensity (e.g., very light to heavy) and types, and spatial variability?
(b) How well can we estimate space/time-accumulated precipitation over terrain (liquid + frozen)?
(2) Physical validation of algorithms in mid-latitude cold season frontal systems over ocean and complex terrain.
(a) What are the column properties of frozen, melting, and liquid hydrometeors, their relative contributions to estimated surface precipitation, their transition under the influence of terrain gradients, and their systematic variability as a function of synoptic regime?
(3) Integrated hydrologic validation in complex terrain.
(a) Can satellite estimates be combined with modeling over complex topography to drive improved products (assimilation, downscaling) [Level IV products]?
(b) What are the capabilities and limitations for use of satellite-based precipitation estimates in stream/river flow forecasting?
Joint Interdisciplinary Earth Science Information Center
NASA Technical Reports Server (NTRS)
Kafatos, Menas
2004-01-01
The report spans the three-year period beginning in June of 2001 and ending in June of 2004. The Joint Interdisciplinary Earth Science Information Center's (JIESIC) primary purpose has been to carry out research in support of the Global Change Data Center and other Earth science laboratories at Goddard involved in Earth science, remote sensing, and applications data and information services. The purpose is to extend the usage of NASA Earth Observing System data, microwave data, and other Earth observing data. JIESIC projects fall within the following categories: research and development; STW and WW prototyping; science data, information products and services; and science algorithm support. JIESIC facilitates extending the utility of NASA's Earth Science Enterprise (ESE) data, information products, and services to better meet the science data and information needs of a number of science and applications user communities, including domain users such as discipline Earth scientists, interdisciplinary Earth scientists, Earth science applications users, and educators.
The Scientific and Societal Need for Accurate Global Remote Sensing of Marine Suspended Sediments
NASA Technical Reports Server (NTRS)
Acker, James G.
2006-01-01
Population pressure, commercial development, and climate change are expected to cause continuing alteration of the vital oceanic coastal zone environment. These pressures will influence both the geology and biology of the littoral, nearshore, and continental shelf regions. A pressing need for global observation of coastal change processes is an accurate remotely-sensed data product for marine suspended sediments. The concentration, delivery, transport, and deposition of sediments is strongly relevant to coastal primary production, inland and coastal hydrology, coastal erosion, and loss of fragile wetland and island habitats. Sediment transport and deposition is also related to anthropogenic activities including agriculture, fisheries, aquaculture, harbor and port commerce, and military operations. Because accurate estimation of marine suspended sediment concentrations requires advanced ocean optical analysis, a focused collaborative program of algorithm development and assessment is recommended, following the successful experience of data refinement for remotely-sensed global ocean chlorophyll concentrations.
Improved Global Ocean Color Using Polymer Algorithm
NASA Astrophysics Data System (ADS)
Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques
2010-12-01
A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications like fishing or oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.
Hanekom, Susan D; Brooks, Dina; Denehy, Linda; Fagevik-Olsén, Monika; Hardcastle, Timothy C; Manie, Shamila; Louw, Quinette
2012-02-06
Postoperative pulmonary complications remain the most significant cause of morbidity following open upper abdominal surgery despite advances in perioperative care. However, due to poor-quality primary research, uncertainty surrounding the value of prophylactic physiotherapy intervention in the management of patients following abdominal surgery persists. The Delphi process has been proposed as a pragmatic methodology to guide clinical practice when evidence is equivocal. The objective was to develop a clinical management algorithm for the postoperative management of abdominal surgery patients. Eleven draft algorithm statements extracted from the extant literature by the primary research team were verified and rated by scientist clinicians (n=5) in an electronic three-round Delphi process. Algorithm statements which reached a priori defined consensus (semi-interquartile range (SIQR) < 0.5) were collated into the algorithm. The five panelists allocated to the abdominal surgery Delphi panel were from Australia, Canada, Sweden, and South Africa. The 11 draft algorithm statements were edited and 5 additional statements were formulated. The panel reached consensus on the rating of all statements. Four statements were rated essential. An expert Delphi panel interpreted the equivocal evidence for the physiotherapeutic management of patients following upper abdominal surgery. Through a process of consensus, a clinical management algorithm was formulated. This algorithm can now be used by clinicians to guide clinical practice in this population.
Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D
2016-05-01
OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
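As a rough illustration of how such code-based case definitions and their validation statistics can be expressed, the Python sketch below encodes the two algorithms and computes PPV and sensitivity against a chart-review reference. The record fields, example codes, and toy data are assumptions for illustration, not the study's dataset or software.

```python
# Sketch of the two ICD-9-CM code algorithms and their validation metrics.
# Field names and the toy records are illustrative assumptions, not the study's data.

def algorithm_1(admission):
    """Any discharge diagnosis of 348.4 plus a decompression procedure code."""
    has_cm1 = "348.4" in admission["diagnoses"]
    has_proc = bool({"01.24", "03.09"} & set(admission["procedures"]))
    return has_cm1 and has_proc

def algorithm_2(admission):
    """Algorithm 1 restricted to a primary discharge diagnosis of 348.4."""
    return algorithm_1(admission) and admission["diagnoses"][0] == "348.4"

def ppv_and_sensitivity(admissions, flagger):
    """PPV = TP / flagged; sensitivity = TP / true surgeries (chart review as truth)."""
    flagged = [a for a in admissions if flagger(a)]
    tp = sum(a["true_cm1_decompression"] for a in flagged)
    positives = sum(a["true_cm1_decompression"] for a in admissions)
    return tp / len(flagged), tp / positives

admissions = [
    {"diagnoses": ["348.4", "331.4"], "procedures": ["01.24"], "true_cm1_decompression": True},
    {"diagnoses": ["331.4", "348.4"], "procedures": ["03.09"], "true_cm1_decompression": False},
]
print(ppv_and_sensitivity(admissions, algorithm_1))
print(ppv_and_sensitivity(admissions, algorithm_2))
```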
Results from CrIS-ATMS Obtained Using the AIRS Science Team Retrieval Methodology
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis C.; Iredell, Lena
2013-01-01
AIRS was launched on EOS Aqua in May 2002, together with AMSU-A and HSB (which subsequently failed early in the mission), to form a next generation polar orbiting infrared and microwave atmospheric sounding system. AIRS/AMSU had two primary objectives. The first objective was to provide real-time data products available for use by the operational Numerical Weather Prediction Centers in a data assimilation mode to improve the skill of their subsequent forecasts. The second objective was to provide accurate unbiased sounding products with good spatial coverage that are used to generate stable multi-year climate data sets to study the earth's interannual variability, climate processes, and possibly long-term trends. AIRS/AMSU data for all time periods are now being processed using the state of the art AIRS Science Team Version-6 retrieval methodology. The Suomi-NPP mission was launched in October 2011 as part of a sequence of Low Earth Orbiting satellite missions under the "Joint Polar Satellite System" (JPSS). NPP carries CrIS and ATMS, which are advanced infra-red and microwave atmospheric sounders that were designed as follow-ons to the AIRS and AMSU instruments. The main objective of this work is to assess whether CrIS/ATMS will be an adequate replacement for AIRS/AMSU from the perspective of the generation of accurate and consistent long term climate data records, or if improved instruments should be developed for future flight. It is critical for CrIS/ATMS to be processed using an algorithm similar to, or at least comparable to, AIRS Version-6 before such an assessment can be made. We have been conducting research to optimize products derived from CrIS/ATMS observations using a scientific approach analogous to the AIRS Version-6 retrieval algorithm. Our latest research uses Version-5.70 of the CrIS/ATMS retrieval algorithm, which is otherwise analogous to AIRS Version-6, but does not yet contain the benefit of use of a Neural-Net first guess start-up system which significantly improved results of AIRS Version-6. Version-5.70 CrIS/ATMS temperature profile and surface skin temperature retrievals are of very good quality, and are better than AIRS Version-5 retrievals, but are still significantly poorer than those of AIRS Version-6. CrIS/ATMS retrievals should improve when a Neural-Net start-up system is ready for use. We also examined CrIS/ATMS retrievals generated by NOAA using their NUCAPS retrieval algorithm, which is based on earlier versions of the AIRS Science Team retrieval algorithms. We show that the NUCAPS algorithm as currently configured is not well suited for climate monitoring purposes.
Evaluation and Validation of Updated MODIS C6 and VIIRS LAI/FPAR
NASA Astrophysics Data System (ADS)
Yan, K.; Park, T.; Chen, C.; Yang, B.; Yan, G.; Knyazikhin, Y.; Myneni, R. B.; CHOI, S.
2015-12-01
Leaf Area Index (LAI) and the Fraction of Photosynthetically Active Radiation (0.4-0.7 μm) absorbed by vegetation (FPAR) play a key role in characterizing vegetation canopy functioning and energy absorption capacity. Using a radiative transfer approach, MODIS onboard the NASA EOS Terra and Aqua satellites has provided globally continuous LAI/FPAR since 2000, with the products continually updated for better quality. NPP VIIRS, as a successor to MODIS, has demonstrated the measurement capability to extend this high-quality LAI/FPAR time series. The primary objectives of this study are 1) to evaluate and validate the newly updated MODIS Collection 6 (C6) LAI/FPAR product, which has finer resolution (500 m) and improved biome type input, and 2) to examine and adjust the VIIRS LAI/FPAR algorithm for continuity with MODIS. For the MODIS C6 investigation, we measure spatial coverage (i.e., main radiative transfer algorithm execution), continuity and consistency with Collection 5 (C5), and accuracy against field-measured LAI/FPAR. We also validate C6 LAI/FPAR by comparing it with other global LAI/FPAR products (e.g., GLASS and CYCLOPES) and by examining co-varying seasonal signatures with climatic variables (e.g., temperature and precipitation). For the VIIRS evaluation and adjustment, we first quantify possible differences between C5 and the MODIS-heritage-based VIIRS LAI/FPAR. Then, based on the radiative transfer theory of canopy spectral invariants, we derive VIIRS- and biome-specific configurable parameters (single scattering albedo and uncertainty). These two exercises for the MODIS C6 and VIIRS LAI/FPAR products clearly suggest that (a) MODIS C6 has better coverage and accuracy than C5, (b) C6 shows a consistent spatiotemporal pattern with C5, and (c) VIIRS has the potential to produce MODIS-like global LAI/FPAR Earth System Data Records.
DOT National Transportation Integrated Search
2012-10-01
The primary goal of this project is to demonstrate the accuracy and utility of a freezing drizzle algorithm that can be implemented on roadway environmental sensing systems (ESSs). The types of problems related to the occurrence of freezing precipi...
Optimal Draft requirement for vibratory tillage equipment using Genetic Algorithm Technique
NASA Astrophysics Data System (ADS)
Rao, Gowripathi; Chaudhary, Himanshu; Singh, Prem
2018-03-01
Agriculture is an important sector of the Indian economy. Primary and secondary tillage operations are required for any land preparation process. Conventionally, different tractor-drawn implements such as the mouldboard plough, disc plough, subsoiler, cultivator, and disc harrow are used for primary and secondary manipulation of soils. Among them, oscillatory tillage equipment uses vibratory motion for tillage. Several investigators have reported that the draft requirement of conventional primary tillage implements is higher than that of oscillating implements because the former remain in continuous contact with the soil. Therefore, in this paper an attempt is made to find the optimal parameters, from experimental data available in the literature, that minimize draft consumption using a genetic algorithm technique.
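The following Python sketch illustrates the general idea of searching for minimum-draft oscillation parameters with a simple genetic algorithm. The draft model, parameter bounds, and GA settings are invented placeholders, not the empirical relations or implementation used in the paper.

```python
# Minimal genetic-algorithm sketch for minimizing a draft function of oscillation
# parameters (frequency, amplitude, forward speed). The draft model below is a
# made-up surrogate for illustration only.
import random

def draft(freq_hz, amp_mm, speed_kmh):
    # Hypothetical surrogate: draft falls with oscillation intensity, rises with speed.
    return 5.0 + 0.8 * speed_kmh - 0.15 * freq_hz - 0.05 * amp_mm + 0.002 * (freq_hz * amp_mm) ** 0.5

BOUNDS = [(2.0, 12.0), (5.0, 30.0), (1.5, 4.0)]   # search ranges for each parameter

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

pop = [random_individual() for _ in range(40)]
for _ in range(100):
    pop.sort(key=lambda ind: draft(*ind))          # lower draft is fitter
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(30)]
best = min(pop, key=lambda ind: draft(*ind))
print("best parameters:", best, "draft:", draft(*best))
```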
NASA Technical Reports Server (NTRS)
Lu, Yun-Chi; Chang, Hyo Duck; Krupp, Brian; Kumar, Ravindra; Swaroop, Anand
1992-01-01
Information on Earth Observing System (EOS) output data products and input data requirements that has been compiled by the Science Processing Support Office (SPSO) at GSFC is presented. Since Version 1.0 of the SPSO Report was released in August 1991, there have been significant changes in the EOS program. In anticipation of a likely budget cut for the EOS Project, NASA HQ restructured the EOS program. An initial program consisting of two large platforms was replaced by plans for multiple, smaller platforms, and some EOS instruments were either deselected or descoped. Updated payload information reflecting the restructured EOS program superseding the August 1991 version of the SPSO report is included. This report has been expanded to cover information on non-EOS data products, and consists of three volumes (Volumes 1, 2, and 3). Volume 1 provides information on instrument outputs and input requirements. Volume 2 is devoted to Interdisciplinary Science (IDS) outputs and input requirements, including the 'best' and 'alternative' match analysis. Volume 3 provides information about retrieval algorithms, non-EOS input requirements of instrument teams and IDS investigators, and availability of non-EOS data products at seven primary Distributed Active Archive Centers (DAAC's).
Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin
2017-06-22
The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: (1) designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop; (2) preparing, based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines, with the recommendations prioritised according to the clinical and psychosocial characteristics of the case vignettes; (3) validation and specific commentary on the case vignettes, along with the respective guideline recommendations, by an external panel of practicing general practitioners (GPs); (4) summarising guideline recommendations and experts' opinions as case-specific management recommendations (N-of-one guidelines); (5) eliciting healthcare preferences of patients with multimorbidity from a systematic literature review, supplemented with information from qualitative interviews; and (6) analysing all N-of-one guidelines using pattern recognition to identify common decision nodes and care elements, which were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter with a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and thereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm illustrates a superordinate process that permanently keeps the entire patient in view. The meta-algorithm represents the backbone of the multimorbidity guideline of the German College of General Practitioners and Family Physicians. This article presents solely the development phase; the meta-algorithm needs to be piloted before it can be implemented. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.
Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design
Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057
Parametric inference for biological sequence analysis.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
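To make the connection concrete, the sketch below shows the ordinary sum-product (forward) recursion for a toy hidden Markov model, the special case that polytope propagation generalizes by replacing numerical sums and products with operations on Newton polytopes. The model parameters are arbitrary toy values, not taken from the article.

```python
# Sketch of the sum-product (forward) recursion for a small hidden Markov model.
# States, transition, and emission probabilities are toy values for illustration.
states = ["gene", "intergenic"]
init = {"gene": 0.5, "intergenic": 0.5}
trans = {"gene": {"gene": 0.9, "intergenic": 0.1},
         "intergenic": {"gene": 0.2, "intergenic": 0.8}}
emit = {"gene": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "intergenic": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def sequence_probability(obs):
    # alpha[s] = probability of the prefix seen so far, ending in state s
    alpha = {s: init[s] * emit[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][symbol]
                 for s in states}
    return sum(alpha.values())

print(sequence_probability("ACGTG"))
```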
Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David
2013-05-21
We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
Terra MODIS Band 27 Electronic Crosstalk Effect and Its Removal
NASA Technical Reports Server (NTRS)
Sun, Junqiang; Xiong, Xiaoxiong; Madhavan, Sriharsha; Wenny, Brian
2012-01-01
The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the NASA Earth Observing System (EOS). The first MODIS instrument was launched in December, 1999 on-board the Terra spacecraft. MODIS has 36 bands, covering a wavelength range from 0.4 micron to 14.4 micron. MODIS band 27 (6.72 micron) is a water vapor band, which is designed to be insensitive to Earth surface features. In recent Earth View (EV) images of Terra band 27, surface feature contamination is clearly seen and striping has become very pronounced. In this paper, it is shown that band 27 is impacted by electronic crosstalk from bands 28-30. An algorithm using a linear approximation is developed to correct the crosstalk effect. The crosstalk coefficients are derived from Terra MODIS lunar observations. They show that the crosstalk is strongly detector dependent and the crosstalk pattern has changed dramatically since launch. The crosstalk contributions are positive to the instrument response of band 27 early in the mission but became negative and much larger in magnitude at later stages of the mission for most detectors of the band. The algorithm is applied to both Black Body (BB) calibration and MODIS L1B products. With the crosstalk effect removed, the calibration coefficients of Terra MODIS band 27 derived from the BB show that the detector differences become smaller. With the algorithm applied to MODIS L1B products, the Earth surface features are significantly removed and the striping is substantially reduced in the images of the band. The approach developed in this report for removal of the electronic crosstalk effect can be applied to other MODIS bands if similar crosstalk behaviors occur.
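A minimal sketch of a linear, detector-dependent crosstalk correction of the general form described above follows; the array shapes and coefficient values are illustrative assumptions, not the coefficients derived from the lunar observations.

```python
# Sketch of a linear electronic-crosstalk correction: the band 27 response is
# corrected by subtracting detector-dependent linear contributions from the
# sending bands (28-30). All values below are placeholders.
import numpy as np

def correct_band27(dn27, sender_dn, coeffs):
    """dn27: (detectors, frames) counts for band 27.
    sender_dn: dict band -> (detectors, frames) counts for bands 28-30.
    coeffs: dict band -> per-detector crosstalk coefficients (length = detectors)."""
    corrected = dn27.astype(float).copy()
    for band, dn in sender_dn.items():
        corrected -= coeffs[band][:, None] * dn   # linear, detector-dependent term
    return corrected

detectors, frames = 10, 100
rng = np.random.default_rng(0)
dn27 = rng.normal(1000, 5, (detectors, frames))
senders = {b: rng.normal(800, 5, (detectors, frames)) for b in (28, 29, 30)}
coeffs = {b: rng.normal(0.0, 1e-3, detectors) for b in (28, 29, 30)}
print(correct_band27(dn27, senders, coeffs).shape)
```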
Quantitative Electron Probe Microanalysis: State of the Art
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvement of the electron column and X-ray spectrometer has resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues; accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are WDS detector design, characterization of calibration standards, and the need for more complete treatment of the continuum X-ray fluorescence correction.
NASA Technical Reports Server (NTRS)
Neale, Christopher M. U.; Mcdonnell, Jeffrey J.; Ramsey, Douglas; Hipps, Lawrence; Tarboton, David
1993-01-01
Since the launch of the DMSP Special Sensor Microwave/Imager (SSM/I), several algorithms have been developed to retrieve overland parameters. These include the present operational algorithms resulting from the Navy calibration/validation effort, such as land surface type (Neale et al. 1990), land surface temperature (McFarland et al. 1990), surface moisture (McFarland and Neale, 1991), and snow parameters (McFarland and Neale, 1991). In addition, other work has been done, including the classification of snow cover and precipitation using the SSM/I (Grody, 1991). Due to the empirical nature of most of the above-mentioned algorithms, further research is warranted, and improvements can probably be obtained through a combination of radiative transfer modelling, to study the physical processes governing the microwave emissions at the SSM/I frequencies, and the incorporation of additional ground truth data and special cases into the regression data sets. We have proposed specifically to improve the retrieval of surface moisture and snow parameters using the WetNet SSM/I data sets along with ground truth information, namely climatic variables from the NOAA cooperative network of weather stations as well as imagery from other satellite sensors such as the AVHRR and Thematic Mapper. In the case of surface moisture retrievals, the characterization of vegetation density is of primary concern. The higher spatial resolution satellite imagery collected at concurrent periods will be used to characterize vegetation types and amounts, which, along with radiative transfer modelling, should lead to more physically based retrievals. Snow parameter retrieval algorithm improvement will initially concentrate on the classification of snowpacks (dry snow, wet snow, refrozen snow) and later on specific products such as snow water equivalent. Significant accomplishments in the past year are presented.
Production scheduling and rescheduling with genetic algorithms.
Bierwirth, C; Mattfeld, D C
1999-01-01
A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
Zhao, Chengquan
2015-01-01
Screening for cervical cancer with cytology testing has been very effective in reducing cervical cancer in the United States. For decades, the approach was an annual Pap test. In 2000, the Hybrid Capture 2 human papillomavirus (HPV) test was approved by the U.S. Food and Drug Administration (FDA) for screening women who have atypical squamous cells of undetermined significance (ASCUS) detected by Pap test to determine the need for colposcopy. In 2003, the FDA approved expanding the use of the test to include screening performed in conjunction with a Pap test for women over the age of 30 years, referred to as “cotesting.” Cotesting allows women to extend the testing interval to 3 years if both tests have negative results. In April of 2014, the FDA approved the use of an HPV test (the cobas HPV test) for primary cervical cancer screening for women over the age of 25 years, without the need for a concomitant Pap test. The approval recommended either colposcopy or a Pap test for patients with specific high-risk HPV types detected by the HPV test. This was based on the results of the ATHENA trial, which included more than 40,000 women. Reaction to this decision has been mixed. Supporters point to the fact that the primary-screening algorithm found more disease (cervical intraepithelial neoplasia 3 or worse [CIN3+]) and also found it earlier than did cytology or cotesting. Moreover, the positive predictive value and positive likelihood ratio of the primary-screening algorithm were higher than those of cytology. Opponents of the decision prefer cotesting, as this approach detects more disease than the HPV test alone. In addition, the performance of this new algorithm has not been assessed in routine clinical use. Professional organizations will need to develop guidelines that incorporate this testing algorithm. In this Point-Counterpoint, Dr. Stoler explains why he favors the primary-screening algorithm, while Drs. Austin and Zhao explain why they prefer the cotesting approach to screening for cervical cancer. PMID:25948606
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, C. E.; Aliaga, L.; Bashyal, A.
We present double-differential measurements of antineutrino charged-current quasielastic scattering in the MINERvA detector. This study improves on a previous single-differential measurement by using updated reconstruction algorithms and interaction models and provides a complete description of observed muon kinematics in the form of a double-differential cross section with respect to muon transverse and longitudinal momentum. We also include in our signal definition zero-meson final states arising from multinucleon interactions and from resonant pion production followed by pion absorption in the primary nucleus. We find that model agreement is considerably improved by a model tuned to MINERvA inclusive neutrino scattering data that incorporates nuclear effects such as weak nuclear screening and two-particle, two-hole enhancements.
Hyperspectral data compression using a Wiener filter predictor
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.
2013-09-01
The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large number of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
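The band-prediction step can be sketched as follows: each new band is predicted as a least-squares (Wiener-style) combination of already-encoded bands, and only the residual is passed to the lossless coder. This is an illustrative reconstruction under stated assumptions, not the Z-Chrome implementation.

```python
# Sketch of band prediction for compression: predict a new band as a linear
# combination of already-encoded bands and keep only the residual. Data are synthetic.
import numpy as np

def predict_band(encoded_bands, new_band):
    """encoded_bands: (n_bands, n_pixels); new_band: (n_pixels,).
    Returns the least-squares prediction and the residual to be compressed."""
    X = np.vstack([encoded_bands, np.ones(encoded_bands.shape[1])])  # add a bias row
    weights, *_ = np.linalg.lstsq(X.T, new_band, rcond=None)
    prediction = X.T @ weights
    return prediction, new_band - prediction

rng = np.random.default_rng(1)
prev = rng.normal(size=(5, 10000))
new = 0.6 * prev[0] - 0.2 * prev[3] + rng.normal(scale=0.01, size=10000)
_, residual = predict_band(prev, new)
print("residual std vs band std:", residual.std(), new.std())
```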
Whitney, Gina; Daves, Suanne; Hughes, Alex; Watkins, Scott; Woods, Marcella; Kreger, Michael; Marincola, Paula; Chocron, Isaac; Donahue, Brian
2013-07-01
The goal of this project is to measure the impact of standardization of transfusion practice on blood product utilization and postoperative bleeding in pediatric cardiac surgery patients. Transfusion is common following cardiopulmonary bypass (CPB) in children and is associated with increased mortality, infection, and duration of mechanical ventilation. Transfusion in pediatric cardiac surgery is often based on clinical judgment rather than objective data. Although objective transfusion algorithms have demonstrated efficacy for reducing transfusion in adult cardiac surgery, such algorithms have not been applied in the pediatric setting. This quality improvement effort was designed to reduce blood product utilization in pediatric cardiac surgery using a blood product transfusion algorithm. We implemented an evidence-based transfusion protocol in January 2011 and monitored the impact of this algorithm on blood product utilization, chest tube output during the first 12 h of intensive care unit (ICU) admission, and predischarge mortality. When compared with the 12 months preceding implementation, blood utilization per case in the operating room (OR) for the 11 months following implementation decreased by 66% for red cells (P = 0.001) and 86% for cryoprecipitate (P < 0.001). Blood utilization during the first 12 h of ICU did not increase during this time and actually decreased 56% for plasma (P = 0.006) and 41% for red cells (P = 0.031), indicating that the decrease in OR transfusion did not shift the transfusion burden to the ICU. Postoperative bleeding, as measured by chest tube output in the first 12 ICU hours, did not increase following implementation of the algorithm. Monthly surgical volume did not change significantly following implementation of the algorithm (P = 0.477). In a logistic regression model for predischarge mortality among the nontransplant patients, after accounting for surgical severity and duration of CPB, use of the transfusion algorithm was associated with a 0.247 relative risk of mortality (P = 0.013). These results indicate that introduction of an objective transfusion algorithm in pediatric cardiac surgery significantly reduces perioperative blood product utilization and mortality, without increasing postoperative chest tube losses. © 2013 John Wiley & Sons Ltd.
The PHQ-PD as a Screening Tool for Panic Disorder in the Primary Care Setting in Spain
Wood, Cristina Mae; Ruíz-Rodríguez, Paloma; Tomás-Tomás, Patricia; Gracia-Gracia, Irene; Dongil-Collado, Esperanza; Iruarrizaga, M. Iciar
2016-01-01
Introduction Panic disorder is a common anxiety disorder and is highly prevalent in Spanish primary care centres. The use of validated tools can improve the detection of panic disorder in primary care populations, thus enabling referral for specialized treatment. The aim of this study is to determine the accuracy of the Patient Health Questionnaire-Panic Disorder (PHQ-PD) as a screening and diagnostic tool for panic disorder in Spanish primary care centres. Method We compared the psychometric properties of the PHQ-PD to the reference standard, the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I) interview. General practitioners referred 178 patients who completed the entire PHQ test, including the PHQ-PD, to undergo the SCID-I. The sensitivity, specificity, positive and negative predictive values and positive and negative likelihood ratios of the PHQ-PD were assessed. Results The operating characteristics of the PHQ-PD are moderate. The best cut-off score was 5 (sensitivity .77, specificity .72). Modifications to the questionnaire's algorithms improved test characteristics (sensitivity .77, specificity .72) compared to the original algorithm. The screening question alone yielded the highest sensitivity score (.83). Conclusion Although the modified algorithm of the PHQ-PD only yielded moderate results as a diagnostic test for panic disorder, it was better than the original. Using only the first question of the PHQ-PD showed the best psychometric properties (sensitivity). Based on these findings, we suggest the use of the screening questions for screening purposes and the modified algorithm for diagnostic purposes. PMID:27525977
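The cut-off evaluation reported above can be sketched as follows: for each candidate PHQ-PD cut-off score, sensitivity and specificity are computed against the SCID-I reference diagnosis. The scores and labels below are invented toy values, not study data.

```python
# Sketch of evaluating a screening cut-off against a reference-standard diagnosis.
def screen_metrics(scores, has_disorder, cutoff):
    tp = sum(s >= cutoff and d for s, d in zip(scores, has_disorder))
    fn = sum(s < cutoff and d for s, d in zip(scores, has_disorder))
    tn = sum(s < cutoff and not d for s, d in zip(scores, has_disorder))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, has_disorder))
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

scores = [7, 2, 5, 1, 6, 3, 8, 4]                      # toy questionnaire scores
labels = [True, False, True, False, True, False, True, False]  # toy SCID-I diagnoses
for cutoff in range(1, 9):
    sens, spec = screen_metrics(scores, labels, cutoff)
    print(cutoff, round(sens, 2), round(spec, 2))
```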
Recursive approach to the moment-based phase unwrapping method.
Langley, Jason A; Brice, Robert G; Zhao, Qun
2010-06-01
The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
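As a rough illustration of the polynomial-fit idea (though not of the recursive derivative step itself), the sketch below approximates a smooth two-dimensional phase surface with a separable Legendre basis by least squares; the test surface and polynomial degree are arbitrary choices, not taken from the paper.

```python
# Sketch of approximating a 2-D phase surface with a separable Legendre basis.
import numpy as np
from numpy.polynomial import legendre

ny, nx, degree = 64, 64, 6
y, x = np.mgrid[-1:1:ny*1j, -1:1:nx*1j]           # Legendre basis lives on [-1, 1]
true_phase = 3.0 * x**2 - 2.0 * x * y + 0.5 * y   # smooth "unwrapped" test surface

# Build the 2-D Legendre design matrix and solve the least-squares fit.
V = legendre.legvander2d(y.ravel(), x.ravel(), [degree, degree])
coeffs, *_ = np.linalg.lstsq(V, true_phase.ravel(), rcond=None)
fit = (V @ coeffs).reshape(ny, nx)

print("max fit error:", np.abs(fit - true_phase).max())
```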
Sharma, Manuj; Petersen, Irene; Nazareth, Irwin; Coton, Sonia J
2016-01-01
Background Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). Objectives To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Methods Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM; one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent were used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Results Out of 9,161,866 individuals aged 0–99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification in all instances. Conclusion The majority of individuals with T1DM and T2DM can be readily identified from UK primary care electronic health records. Our approach can be adapted for use in other health care settings. PMID:27785102
Sharma, Manuj; Petersen, Irene; Nazareth, Irwin; Coton, Sonia J
2016-01-01
Research into diabetes mellitus (DM) often requires a reproducible method for identifying and distinguishing individuals with type 1 DM (T1DM) and type 2 DM (T2DM). To develop a method to identify individuals with T1DM and T2DM using UK primary care electronic health records. Using data from The Health Improvement Network primary care database, we developed a two-step algorithm. The first algorithm step identified individuals with potential T1DM or T2DM based on diagnostic records, treatment, and clinical test results. We excluded individuals with records for rarer DM subtypes only. For individuals to be considered diabetic, they needed to have at least two records indicative of DM; one of which was required to be a diagnostic record. We then classified individuals with T1DM and T2DM using the second algorithm step. A combination of diagnostic codes, medication prescribed, age at diagnosis, and whether the case was incident or prevalent were used in this process. We internally validated this classification algorithm through comparison against an independent clinical examination of The Health Improvement Network electronic health records for a random sample of 500 DM individuals. Out of 9,161,866 individuals aged 0-99 years from 2000 to 2014, we classified 37,693 individuals with T1DM and 418,433 with T2DM, while 1,792 individuals remained unclassified. A small proportion were classified with some uncertainty (1,155 [3.1%] of all individuals with T1DM and 6,139 [1.5%] with T2DM) due to unclear health records. During validation, manual assignment of DM type based on clinical assessment of the entire electronic record and algorithmic assignment led to equivalent classification in all instances. The majority of individuals with T1DM and T2DM can be readily identified from UK primary care electronic health records. Our approach can be adapted for use in other health care settings.
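A schematic sketch of a two-step rule of this general shape is shown below: step 1 flags potential diabetes from diagnostic, treatment, and test records, and step 2 assigns T1DM versus T2DM from codes, therapy, and age at diagnosis. The field names, codes, and thresholds are assumptions for illustration and do not reproduce the published algorithm.

```python
# Illustrative two-step classification sketch; all fields and thresholds are invented.
def step1_is_diabetic(person):
    records = person["dm_diagnosis_codes"] + person["dm_treatments"] + person["abnormal_tests"]
    return len(records) >= 2 and len(person["dm_diagnosis_codes"]) >= 1

def step2_classify(person):
    if not step1_is_diabetic(person):
        return "not diabetic"
    insulin_only = person["on_insulin"] and not person["on_oral_hypoglycaemics"]
    if "T1DM code" in person["dm_diagnosis_codes"] or (insulin_only and person["age_at_diagnosis"] < 35):
        return "T1DM"
    return "T2DM"

example = {"dm_diagnosis_codes": ["T2DM code"], "dm_treatments": ["metformin"],
           "abnormal_tests": ["HbA1c 9.1%"], "on_insulin": False,
           "on_oral_hypoglycaemics": True, "age_at_diagnosis": 52}
print(step2_classify(example))
```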
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
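For readers unfamiliar with the SSA, the sketch below implements Gillespie's algorithm for a toy birth-death reaction network, the kind of simulation the time-ordered product expansion rederives; the rate constants are arbitrary.

```python
# Minimal Gillespie stochastic simulation algorithm (SSA) for a birth-death process.
import random

def gillespie(x0, k_birth, k_death, t_end):
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        rates = [k_birth, k_death * x]               # propensities of each reaction
        total = sum(rates)
        if total == 0:
            break
        t += random.expovariate(total)               # exponential waiting time
        x += 1 if random.random() < rates[0] / total else -1
        trajectory.append((t, x))
    return trajectory

print(gillespie(x0=10, k_birth=2.0, k_death=0.1, t_end=50.0)[-1])
```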
Hardjojo, Antony; Gunachandran, Arunan; Pang, Long; Abdullah, Mohammed Ridzwan Bin; Wah, Win; Chong, Joash Wen Chen; Goh, Ee Hui; Teo, Sok Huang; Lim, Gilbert; Lee, Mong Li; Hsu, Wynne; Lee, Vernon; Chen, Mark I-Cheng; Wong, Franco; Phang, Jonathan Siung King
2018-06-11
Free-text clinical records provide a source of information that complements traditional disease surveillance. To electronically harness these records, they need to be transformed into codified fields by natural language processing algorithms. The aim of this study was to develop, train, and validate Clinical History Extractor for Syndromic Surveillance (CHESS), a natural language processing algorithm to extract clinical information from free-text primary care records. CHESS is a keyword-based natural language processing algorithm that extracts 48 signs and symptoms suggesting respiratory infections, gastrointestinal infections, constitutional symptoms, and other signs and symptoms potentially associated with infectious diseases. The algorithm also captured the assertion status (affirmed, negated, or suspected) and symptom duration. Electronic medical records from the National Healthcare Group Polyclinics, a major public sector primary care provider in Singapore, were randomly extracted and manually reviewed by 2 human reviewers, with a third reviewer as the adjudicator. The algorithm was evaluated based on 1680 notes against the human-coded result as the reference standard, with half of the data used for training and the other half for validation. The symptoms most commonly present within the 1680 clinical records at the episode level were those typically present in respiratory infections such as cough (744/7703, 9.66%), sore throat (591/7703, 7.67%), rhinorrhea (552/7703, 7.17%), and fever (928/7703, 12.04%). At the episode level, CHESS had an overall performance of 96.7% precision and 97.6% recall on the training dataset and 96.0% precision and 93.1% recall on the validation dataset. Symptoms suggesting respiratory and gastrointestinal infections were all detected with more than 90% precision and recall. CHESS correctly assigned the assertion status in 97.3%, 97.9%, and 89.8% of affirmed, negated, and suspected signs and symptoms, respectively (97.6% overall accuracy). Symptom episode duration was correctly identified in 81.2% of records with known duration status. We have developed a natural language processing algorithm dubbed CHESS that achieves good performance in extracting signs and symptoms from primary care free-text clinical records. In addition to the presence of symptoms, our algorithm can also accurately distinguish affirmed, negated, and suspected assertion statuses and extract symptom durations. ©Antony Hardjojo, Arunan Gunachandran, Long Pang, Mohammed Ridzwan Bin Abdullah, Win Wah, Joash Wen Chen Chong, Ee Hui Goh, Sok Huang Teo, Gilbert Lim, Mong Li Lee, Wynne Hsu, Vernon Lee, Mark I-Cheng Chen, Franco Wong, Jonathan Siung King Phang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 11.06.2018.
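A much-simplified sketch of keyword-based symptom extraction with assertion status, in the spirit of (but far smaller than) CHESS, is given below; the keyword lists and negation/suspicion patterns are illustrative assumptions.

```python
# Toy keyword-based symptom extraction with a simple assertion status.
import re

SYMPTOMS = {"cough": ["cough"], "fever": ["fever", "febrile"],
            "rhinorrhea": ["runny nose", "rhinorrhea"]}
NEGATION = re.compile(r"\b(no|denies|without)\b")
SUSPECT = re.compile(r"\b(possible|suspected|\?)\s*$")

def extract(note):
    findings = {}
    for clause in re.split(r"[.;,]", note.lower()):
        for symptom, keywords in SYMPTOMS.items():
            if any(k in clause for k in keywords):
                if NEGATION.search(clause):
                    findings[symptom] = "negated"
                elif SUSPECT.search(clause):
                    findings[symptom] = "suspected"
                else:
                    findings[symptom] = "affirmed"
    return findings

print(extract("Cough for 3 days, no fever; possible rhinorrhea?"))
```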
Real-time energy-saving metro train rescheduling with primary delay identification
Li, Keping; Schonfeld, Paul
2018-01-01
This paper aims to reschedule online metro trains in delay scenarios. A graph representation and a mixed integer programming model are proposed to formulate the optimization problem. The solution approach is a two-stage optimization method. In the first stage, based on a proposed train state graph and system analysis, the primary and flow-on delays are specifically analyzed and identified with a critical path algorithm. For the second stage a hybrid genetic algorithm is designed to optimize the schedule, with the delay identification results as input. Then, based on the infrastructure data of Beijing Subway Line 4 of China, case studies are presented to demonstrate the effectiveness and efficiency of the solution approach. The results show that the algorithm can quickly and accurately identify primary delays among different types of delays. The economic cost of energy consumption and total delay is considerably reduced (by more than 10% in each case). The computation time of the Hybrid-GA is low enough for rescheduling online. Sensitivity analyses further demonstrate that the proposed approach can be used as a decision-making support tool for operators. PMID:29474471
NASA Astrophysics Data System (ADS)
Barnes, M.; Moore, D. J.; Scott, R. L.; MacBean, N.; Ponce-Campos, G. E.; Breshears, D. D.
2017-12-01
Both satellite observations and eddy covariance estimates provide crucial information about the Earth's carbon, water and energy cycles. Continuous measurements from flux towers facilitate exploration of the exchange of carbon dioxide, water and energy between the land surface and the atmosphere at fine temporal and spatial scales, while satellite observations can fill in the large spatial gaps of in-situ measurements and provide long-term temporal continuity. The Southwest (Southwest United States and Northwest Mexico) and other semi-arid regions represent a key uncertainty in interannual variability in carbon uptake. Comparisons of existing global upscaled gross primary production (GPP) products with flux tower data at sites across the Southwest show widespread mischaracterization of seasonality in vegetation carbon uptake, resulting in large (up to 200%) errors in annual carbon uptake estimates. Here, remotely sensed and distributed meteorological inputs are used to upscale GPP estimates from 25 Ameriflux towers across the Southwest to the regional scale using a machine learning approach. Our random forest model incorporates two novel features that improve the spatial and temporal variability in GPP. First, we incorporate a multi-scalar drought index at multiple timescales to account for differential seasonality between ecosystem types. Second, our machine learning algorithm was trained on twenty-five ecologically diverse sites to optimize both the monthly variability in and the seasonal cycle of GPP. The product and its components will be used to examine drought impacts on terrestrial carbon cycling across the Southwest, including the effects of drought seasonality on carbon uptake. Our spatially and temporally continuous upscaled GPP product, drawing from both ground and satellite data over the Southwest region, helps us understand linkages between the carbon and water cycles in semi-arid ecosystems and informs predictions of vegetation response to future climate conditions.
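The upscaling step can be sketched as training a random forest on tower GPP with remotely sensed and meteorological predictors, including a drought index at more than one timescale. The predictor names and synthetic data below are assumptions for illustration, not the Ameriflux training set.

```python
# Sketch of random-forest GPP upscaling on synthetic predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(0.1, 0.9, n),    # vegetation index (e.g., NDVI-like)
    rng.uniform(5, 40, n),       # air temperature (deg C)
    rng.uniform(-2, 2, n),       # drought index, short timescale
    rng.uniform(-2, 2, n),       # drought index, long timescale
])
gpp = 4.0 * X[:, 0] + 0.05 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], gpp[:400])
print("held-out R^2:", round(model.score(X[400:], gpp[400:]), 3))
```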
A Distance-based Energy Aware Routing algorithm for wireless sensor networks.
Wang, Jin; Kim, Jeong-Uk; Shu, Lei; Niu, Yu; Lee, Sungyoung
2010-01-01
Energy efficiency and balancing is one of the primary challenges for wireless sensor networks (WSNs) since the tiny sensor nodes cannot be easily recharged once they are deployed. Up to now, many energy-efficient routing algorithms or protocols have been proposed with techniques such as clustering, data aggregation, and location tracking. However, many of them aim to minimize parameters like total energy consumption or latency, which causes hotspot nodes and network partitioning due to the overuse of certain nodes. In this paper, a Distance-based Energy Aware Routing (DEAR) algorithm is proposed to ensure energy efficiency and energy balancing based on theoretical analysis of different energy and traffic models. During the routing process, we consider individual distance as the primary parameter in order to adjust and equalize the energy consumption among involved sensors. The residual energy is also considered as a secondary factor. In this way, all the intermediate nodes will consume their energy at a similar rate, which maximizes network lifetime. Simulation results show that the DEAR algorithm can reduce and balance the energy consumption for all sensor nodes, so network lifetime is greatly prolonged compared to other routing algorithms.
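A toy next-hop selection in the spirit of DEAR, weighting distance to the sink as the primary term and residual energy as a secondary term, might look like the following; the specific cost weighting is an assumption, not the published formulation.

```python
# Sketch of a distance- and energy-aware next-hop choice.
import math

def next_hop(current, neighbours, sink, alpha=0.7):
    """neighbours: list of dicts with 'pos' (x, y) and 'energy' in [0, 1]."""
    def cost(n):
        d = math.dist(n["pos"], sink)                 # distance term (primary)
        return alpha * d - (1 - alpha) * n["energy"] * d  # energy term (secondary)
    return min(neighbours, key=cost)

neighbours = [{"pos": (3, 4), "energy": 0.9}, {"pos": (2, 2), "energy": 0.2}]
print(next_hop((5, 5), neighbours, sink=(0, 0)))
```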
The Development of Point Doppler Velocimeter Data Acquisition and Processing Software
NASA Technical Reports Server (NTRS)
Cavone, Angelo A.
2008-01-01
In order to develop efficient and quiet aircraft and validate Computational Fluid Dynamics predictions, aerodynamic researchers require flow parameter measurements to characterize flow fields about wind tunnel models and jet flows. A one-component Point Doppler Velocimeter (pDv), a non-intrusive, laser-based instrument, was constructed using a design/develop/test/validate/deploy approach. A primary component of the instrument is the software required for system control/management and data collection/reduction. This software, along with evaluation algorithms, advanced pDv from a laboratory curiosity to a production-level instrument. Simultaneous pDv and pitot probe velocity measurements obtained at the centerline of a flow exiting a two-inch jet matched within 0.4%. Flow turbulence spectra obtained with pDv and a hot-wire detected, with equal dynamic range, the primary and secondary harmonics produced by the fan driving the flow. Novel hardware and software methods were developed, tested, and incorporated into the system to eliminate and/or minimize error sources and improve system reliability.
Terrestrial remote sensing science and algorithms planned for EOS/MODIS
Running, S. W.; Justice, C.O.; Salomonson, V.V.; Hall, D.; Barker, J.; Kaufmann, Y. J.; Strahler, Alan H.; Huete, A.R.; Muller, Jan-Peter; Vanderbilt, V.; Wan, Z.; Teillet, P.; Carneggie, David M. Geological Survey (U.S.) Ohlen
1994-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) will be the primary daily global monitoring sensor on the NASA Earth Observing System (EOS) satellites, scheduled for launch on the EOS-AM platform in June 1998 and the EOS-PM platform in December 2000. MODIS is a 36-channel radiometer covering 0.415-14.235 μm wavelengths, with spatial resolution from 250 m to 1 km at nadir. MODIS will be the primary EOS sensor for providing data on terrestrial biospheric dynamics and process activity. This paper presents the suite of global land products currently planned for EOSDIS implementation, to be developed by the authors of this paper, the MODIS land team (MODLAND). These include spectral albedo, land cover, spectral vegetation indices, snow and ice cover, surface temperature and fire, and a number of biophysical variables that will allow computation of global carbon cycles, hydrologic balances and biogeochemistry of critical greenhouse gases. Additionally, the regular global coverage of these variables will allow accurate surface change detection, a fundamental determinant of global change.
Data-driven classification of patients with primary progressive aphasia.
Hoffman, Paul; Sajjadi, Seyed Ahmad; Patterson, Karalyn; Nestor, Peter J
2017-11-01
Current diagnostic criteria classify primary progressive aphasia into three variants-semantic (sv), nonfluent (nfv) and logopenic (lv) PPA-though the adequacy of this scheme is debated. This study took a data-driven approach, applying k-means clustering to data from 43 PPA patients. The algorithm grouped patients based on similarities in language, semantic and non-linguistic cognitive scores. The optimum solution consisted of three groups. One group, almost exclusively those diagnosed as svPPA, displayed a selective semantic impairment. A second cluster, with impairments to speech production, repetition and syntactic processing, contained a majority of patients with nfvPPA but also some lvPPA patients. The final group exhibited more severe deficits to speech, repetition and syntax as well as semantic and other cognitive deficits. These results suggest that, amongst cases of non-semantic PPA, differentiation mainly reflects overall degree of language/cognitive impairment. The observed patterns were scarcely affected by inclusion/exclusion of non-linguistic cognitive scores. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
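The clustering step can be sketched with standard tooling: standardize the language and cognitive scores, run k-means for several candidate numbers of clusters, and inspect the inertia. The synthetic scores below merely stand in for the 43-patient dataset.

```python
# Sketch of k-means clustering on standardized cognitive/language scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
scores = np.vstack([
    rng.normal([30, 80, 80], 5, (15, 3)),   # selective semantic impairment
    rng.normal([80, 40, 70], 5, (15, 3)),   # speech production/syntax impairment
    rng.normal([45, 35, 40], 5, (13, 3)),   # globally more severe profile
])
X = StandardScaler().fit_transform(scores)

for k in (2, 3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))          # "elbow" check over candidate k
```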
Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel
2018-04-20
In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or, "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 were well in agreement with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%) followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%) at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF<0.3) with an RMSD (bias) of 9.7%-14.8% (3.6%-11.3%) than under partially clear to cloudy skies (0.3
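For reference, the relative RMSD and bias statistics used to rank the PAR products can be computed as in the sketch below; the percentage formulations and the toy values are assumptions, not the BOUSSOLE data.

```python
# Sketch of relative (percentage) RMSD and bias between satellite and in situ PAR.
import numpy as np

def relative_rmsd_and_bias(satellite, in_situ):
    diff = np.asarray(satellite) - np.asarray(in_situ)
    rmsd = 100 * np.sqrt(np.mean(diff ** 2)) / np.mean(in_situ)
    bias = 100 * np.mean(diff) / np.mean(in_situ)
    return rmsd, bias

sat = [42.1, 38.5, 51.0, 47.2]     # daily PAR, toy values
buoy = [40.0, 36.8, 49.5, 43.0]
print(relative_rmsd_and_bias(sat, buoy))
```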
NASA Astrophysics Data System (ADS)
Liou, L.
2012-12-01
A changing climate in the Lake Erie region appears to be having direct impacts on the quality of Lake Erie's drinking water. A dramatic increase in the size and duration of harmful algal blooms (HABs), changes in chlorophyll (Chl) levels and related primary production (PP), prominent sediment plumes, and nearshore production of submerged aquatic vegetation (SAV) are likely being impacted by warmer winters, more intense storms, and reduced ice extent, amongst other meteorological factors. Hypoxia, another major drinking water issue in the lake, is exacerbated by HABs and nearshore SAV. A Michigan Tech research team (Shuchman, Sayers, Brooks) has recently been developing algorithms to derive HAB extents, Chl levels, PP, sediment plume extents, and nearshore SAV maps for the Great Lakes. Inputs have primarily been derived from MODIS Aqua imagery from the NASA Oceancolor website; investigations in the capability of VIIRS imagery to provide the same critical data are being pursued. Remote sensing-derived ice extent and thickness spatial data are also being analyzed. Working with Liou and Lekki of the NASA Glenn Research Center, the study team is deriving algorithms specifically for Lake Erie and integrating them into an analysis of the lake's changing trends over the last 10 years (2002-2012) to improve understanding of how they are impacting the area's water quality, especially for customers dependent on Lake Erie drinking water. This analysis is tying these remote sensing-derived products to climate-driven meteorological factors to enable an initial assessment of how future changes could continue to impact the region's drinking water quality.
SeaWiFS calibration and validation plan, volume 3
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Esaias, Wayne E.; Barnes, William; Guenther, Bruce; Endres, Daniel; Mitchell, B. Greg; Barnes, Robert
1992-01-01
The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) will be the first ocean-color satellite since the Nimbus-7 Coastal Zone Color Scanner (CZCS), which ceased operation in 1986. Unlike the CZCS, which was designed as a proof-of-concept experiment, SeaWiFS will provide routine global coverage every 2 days and is designed to provide estimates of photosynthetic concentrations of sufficient accuracy for use in quantitative studies of the ocean's primary productivity and biogeochemistry. A review of the CZCS mission is included that describes that data set's limitations and provides justification for a comprehensive SeaWiFS calibration and validation program. To accomplish the SeaWiFS scientific objectives, the sensor's calibration must be constantly monitored, and robust atmospheric corrections and bio-optical algorithms must be developed. The plan incorporates a multi-faceted approach to sensor calibration using a combination of vicarious (based on in situ observations) and onboard calibration techniques. Because of budget constraints and the limited availability of ship resources, the development of the operational algorithms (atmospheric and bio-optical) will rely heavily on collaborations with the Earth Observing System (EOS), the Moderate Resolution Imaging Spectrometer (MODIS) oceans team, and projects sponsored by other agencies, e.g., the U.S. Navy and the National Science Foundation (NSF). Other elements of the plan include the routine quality control of input ancillary data (e.g., surface wind, surface pressure, ozone concentration, etc.) used in the processing and verification of the level-0 (raw) data to level-1 (calibrated radiances), level-2 (derived products), and level-3 (gridded and averaged derived data) products.
NASA Astrophysics Data System (ADS)
Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar
2018-07-01
This study aims to discuss the solution methodology for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as distribution of the new products. This supply chain is presented as representative of the problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC that involves three objective functions: maximizing the profit and minimizing the total risk and shortages of products. Since three objective functions are considered, a multi-objective solution methodology can be advantageous. Therefore, several approaches have been studied; an NSGA-II algorithm is first utilized, and then the results are validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, the number of Pareto solutions, and CPU time. In order to enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.
NASA Technical Reports Server (NTRS)
Yang, Wenze; Huang, Dong; Tan, Bin; Stroeve, Julienne C.; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2006-01-01
The analysis of two years of Collection 3 and five years of Collection 4 Terra Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf Area Index (LAI) and Fraction of Photosynthetically Active Radiation (FPAR) data sets is presented in this article with the goal of understanding product quality with respect to version (Collection 3 versus 4), algorithm (main versus backup), snow (snow-free versus snow on the ground), and cloud (cloud-free versus cloudy) conditions. Retrievals from the main radiative transfer algorithm increased from 55% in Collection 3 to 67% in Collection 4 due to algorithm refinements and improved inputs. Anomalously high LAI/FPAR values observed in Collection 3 product in some vegetation types were corrected in Collection 4. The problem of reflectance saturation and too few main algorithm retrievals in broadleaf forests persisted in Collection 4. The spurious seasonality in needleleaf LAI/FPAR fields was traced to fewer reliable input data and retrievals during the boreal winter period. About 97% of the snow covered pixels were processed by the backup Normalized Difference Vegetation Index-based algorithm. Similarly, a majority of retrievals under cloudy conditions were obtained from the backup algorithm. For these reasons, the users are advised to consult the quality flags accompanying the LAI and FPAR product.
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
NASA is scheduled to launch the Orion spacecraft atop the Space Launch System on Exploration Mission 1 in late 2018. When Orion returns from its lunar sortie, it will encounter Earth's atmosphere with speeds in excess of 11 kilometers per second, and Orion will attempt its first precision-guided skip entry. A suite of flight software algorithms collectively called the Entry Monitor has been developed in order to enhance crew situational awareness and enable high levels of onboard autonomy. The Entry Monitor determines the vehicle capability footprint in real-time, provides manual piloting cues, evaluates landing target feasibility, predicts the ballistic instantaneous impact point, and provides intelligent recommendations for alternative landing sites if the primary landing site is not achievable. The primary engineering challenge of the Entry Monitor lies in implementing a highly reliable, efficient set of algorithms suitable for onboard applications.
NASA Astrophysics Data System (ADS)
Harbeck, J.; Kurtz, N. T.; Studinger, M.; Onana, V.; Yi, D.
2015-12-01
The NASA Operation IceBridge Project Science Office has recently released an updated version of the sea ice freeboard, snow depth and thickness product (IDCSI4). This product is generated through the combination of multiple IceBridge instrument data, primarily the ATM laser altimeter, DMS georeferenced imagery and the CReSIS snow radar, and is available on a campaign-specific basis as all upstream data sets become available. Version 1 data (IDCSI2) was the initial data production; we have subsequently received community feedback that has now been incorporated, allowing us to provide an improved data product. All data now available to the public at the National Snow and Ice Data Center (NSIDC) have been homogeneously reprocessed using the new IDCSI4 algorithm. This algorithm contains significant upgrades that improve the quality and consistency of the dataset, including updated atmospheric and oceanic tidal models and replacement of the geoid with a more representative mean sea surface height product. Fixes for known errors in the IDCSI2 algorithm, identified by the Project Science Office and through feedback from the scientific community, have also been incorporated into the new algorithm. We will describe in detail the various steps of the IDCSI4 algorithm, show the improvements made over the IDCSI2 dataset and their beneficial impact, and discuss future upgrades planned for the next version.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
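The ordered-subsets acceleration idea mentioned above can be sketched with a minimal ordered-subsets EM (OSEM) update in Python; the system matrix and Poisson data below are synthetic placeholders, and the paper's actual AM algorithm and GPU acceleration are not reproduced here.

```python
import numpy as np

def osem(A, y, n_iters=10, n_subsets=4, eps=1e-12):
    """Ordered-subsets EM (OSEM) for y ~ Poisson(A @ x).

    A : (m, n) nonnegative system matrix (rows = measurements)
    y : (m,) nonnegative measured counts
    Returns a nonnegative image estimate x of length n.
    """
    m, n = A.shape
    x = np.ones(n)                       # flat initial image
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iters):
        for rows in subsets:             # each sub-iteration uses only a subset of rays
            As = A[rows]
            sens = As.sum(axis=0) + eps  # subset sensitivity (back projection of ones)
            ratio = y[rows] / (As @ x + eps)
            x *= (As.T @ ratio) / sens   # multiplicative EM update
    return x

# Toy demo: random system, known image, Poisson data
rng = np.random.default_rng(0)
A = rng.random((64, 16))
x_true = rng.random(16) * 10
y = rng.poisson(A @ x_true).astype(float)
x_hat = osem(A, y, n_iters=20, n_subsets=8)
print(np.round(x_hat, 2))
```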
Yale, Jean-François; Berard, Lori; Groleau, Mélanie; Javadi, Pasha; Stewart, John; Harris, Stewart B
2017-10-01
It was uncertain whether an algorithm that involves increasing insulin dosages by 1 unit/day may cause more hypoglycemia with the longer-acting insulin glargine 300 units/mL (GLA-300). The objective of this study was to compare safety and efficacy of 2 titration algorithms, INSIGHT and EDITION, for GLA-300 in people with uncontrolled type 2 diabetes mellitus, mainly in a primary care setting. This was a 12-week, open-label, randomized, multicentre pilot study. Participants were randomly assigned to 1 of 2 algorithms: they either increased their dosage by 1 unit/day (INSIGHT, n=108) or the dose was adjusted by the investigator at least once weekly, but no more often than every 3 days (EDITION, n=104). The target fasting self-monitored blood glucose was in the range of 4.4 to 5.6 mmol/L. The percentages of participants reaching the primary endpoint of fasting self-monitored blood glucose ≤5.6 mmol/L without nocturnal hypoglycemia were 19.4% (INSIGHT) and 18.3% (EDITION). At week 12, 26.9% (INSIGHT) and 28.8% (EDITION) of participants achieved a glycated hemoglobin value of ≤7%. No differences in the incidence of hypoglycemia of any category were noted between algorithms. Participants in both arms of the study were much more satisfied with their new treatment as assessed by the Diabetes Treatment Satisfaction Questionnaire. Most health-care professionals (86%) preferred the INSIGHT over the EDITION algorithm. The frequency of adverse events was similar between algorithms. A patient-driven titration algorithm of 1 unit/day with GLA-300 is effective and comparable to the previously tested EDITION algorithm and is preferred by health-care professionals. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
Nigatu, Yeshambel T; Liu, Yan; Wang, JianLi
2016-07-22
Multivariable risk prediction algorithms are useful for making clinical decisions and for health planning. While prediction algorithms for new onset of major depression in primary care attendees in Europe and elsewhere have been developed, the performance of these algorithms in different populations is not known. The objective of this study was to validate the PredictD algorithm for new onset of major depressive episode (MDE) in the US general population. A longitudinal study design was used, with approximately 3 years of follow-up data from a nationally representative sample of the US general population. A total of 29,621 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) and who did not have an MDE in the past year at Wave 1 were included. The PredictD algorithm was directly applied to the selected participants. MDE was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria. Among the participants, 8% developed an MDE over three years. The PredictD algorithm had acceptable discriminative power (C-statistic = 0.708, 95% CI: 0.696, 0.720), but poor calibration (p < 0.001) with the NESARC data. In the European primary care attendees, the algorithm had a C-statistic of 0.790 (95% CI: 0.767, 0.813) with perfect calibration. The PredictD algorithm has acceptable discrimination, but its calibration was poor in the US general population despite re-calibration. Therefore, based on these results, at the current stage, the use of PredictD in the US general population for predicting individual risk of MDE is not encouraged. More independent validation research is needed.
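The distinction between discrimination (C-statistic) and calibration that drives this conclusion can be illustrated with a small sketch on synthetic risk predictions; the data below are hypothetical and are not the NESARC sample.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical predicted 3-year risks and observed outcomes (0/1)
p = rng.uniform(0.01, 0.40, size=5000)          # algorithm's predicted risk
y = rng.binomial(1, np.clip(p * 1.6, 0, 1))     # outcomes deliberately mis-calibrated

# Discrimination: C-statistic (area under the ROC curve)
c_stat = roc_auc_score(y, p)

# Calibration: compare mean predicted vs. observed risk within risk deciles
deciles = np.quantile(p, np.linspace(0, 1, 11))
bins = np.clip(np.digitize(p, deciles[1:-1]), 0, 9)
for b in range(10):
    mask = bins == b
    print(f"decile {b}: predicted {p[mask].mean():.3f}  observed {y[mask].mean():.3f}")
print(f"C-statistic = {c_stat:.3f}")   # good discrimination can coexist with poor calibration
```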
Neoplastic causes of abnormal puberty.
Wendt, Susanne; Shelso, John; Wright, Karen; Furman, Wayne
2014-04-01
Neoplasm-related precocious puberty (PP) is a rare presenting feature of childhood cancer. Moreover, evaluation of suspected PP in a child is complex, and cancer is often not considered. We characterized the clinicopathologic features of patients presenting with PP at a large pediatric cancer center, reviewed the relevant literature, and developed an algorithm for the diagnostic work-up of these patients. We examined the records of all patients with a neoplasm and concomitant PP treated at St. Jude Children's Research Hospital from January 1975 through October 2011, reviewed the available literature, and analyzed the demographic, clinical, endocrine, and neoplasm-related features. Twenty-four of 13,615 children and adolescents (0.18%) were diagnosed with PP within 60 days of presentation. Primary diagnoses included brain tumor (12), adrenocortical carcinoma (5), hepatoblastoma (4), and others (3). PP was observed 0-48 months before diagnosis of neoplasm; 17 patients had peripheral PP and 7 had central PP. Neoplasm-related PP is rare and takes the form of a paraneoplastic syndrome caused by tumor production of hormones or by alteration of physiologic gonadotropin production. PP can precede diagnosis of malignancy by months or years, and neoplastic causes should be considered early to avoid delayed cancer diagnosis. Treatment of the primary malignancy resolved or diminished PP in surviving patients with an intact hypothalamic-pituitary-gonadal axis. © 2013 Wiley Periodicals, Inc.
The JPSS Ground Project Algorithm Verification, Test and Evaluation System
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.
2016-12-01
The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) Mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of the Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast and direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDR's) generated by sensors on the S-NPP into calibrated, geo-located Sensor Data Records (SDR's) and generates Mission Unique Products (MUPs). It also facilitates algorithm investigation, integration, checkouts and tuning, instrument and product calibration and data quality support, monitoring, and data/products distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCT's) and Look-Up-Tables (LUT's) and hosts a number of DQA offline tools that take advantage of the proximity to the near-real-time data flows. It also contains a set of automated and ad-hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS called the GRAVITE Algorithm Development Area (G-ADA), which has the latest installation of the IDPS algorithms running on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGE's) to support Cal/Val and data quality assurance offline. This data-rich and data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and science and sensor quality analysis tools. In this presentation we will describe the GRAVITE systems and subsystems, architecture, technical specifications, capabilities and resources, distributed data and products, and the latest advances to support the JPSS science algorithm implementation, validation and testing.
Sunni, Muna; Mays, Rita; Kaup, Tara; Nathan, Brandon
2011-08-01
During the last two decades, type 2 diabetes mellitus increasingly has been seen in children. Although still not as common as type 1 diabetes among children, it has become the leading form of diabetes among adolescents of certain ethnicities. It is imperative that primary care providers recognize the risk factors, perform appropriate screening tests, and initiate therapy for children who have type 2 diabetes or prediabetes. This article discusses the epidemiology and pathogenesis of the disease, complications, and treatments, and includes a concise, easy-to-follow algorithm to assist providers in diagnosing and treating young patients.
NASA Astrophysics Data System (ADS)
Li, H.; Yang, Y.; Yongming, D.; Cao, B.; Qinhuo, L.
2017-12-01
Land surface temperature (LST) is a key parameter for hydrological, meteorological, climatological and environmental studies. During the past decades, many efforts have been devoted to establishing methodologies for retrieving LST from remote sensing data, and significant progress has been achieved. Many operational LST products have been generated using different remote sensing data. The MODIS LST product (MOD11) is one of the most commonly used LST products and is produced using a generalized split-window algorithm. Many validation studies have shown that the MOD11 LST product agrees well with ground measurements over vegetated and inland water surfaces; however, large negative biases of up to 5 K are present over arid regions. In addition, the land surface emissivities in MOD11 are estimated by assigning fixed emissivities according to a land cover classification dataset, which may introduce large errors into the LST product due to misclassification of the land cover. Therefore, a new MODIS LST&E product (MOD21) has been developed based on the temperature emissivity separation (TES) algorithm, and the water vapor scaling (WVS) method has also been incorporated into the MODIS TES algorithm to improve the accuracy of the atmospheric correction. The MOD21 product will be released with the MODIS Collection 6 Tier-2 land products in 2017. Because the MOD21 products are not yet available, the MODTES algorithm, including the TES and WVS methods, was implemented as detailed in the MOD21 Algorithm Theoretical Basis Document. The MOD21 and MOD11 C6 LST products are validated using ground measurements and ASTER LST products collected in an arid area of Northwest China during the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) experiment. In addition, lab emissivity spectra of four sand dunes in Northwest China are also used to validate the MOD21 and MOD11 emissivity products.
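For orientation, a heavily simplified sketch of the TES idea (NEM, ratio, and MMD modules) applied to atmospherically corrected surface radiances is shown below; the band set, the input emissivities, and the min-emissivity versus spectral-contrast coefficients (taken from the ASTER heritage relation) are placeholders, and the operational MOD21 algorithm with WVS atmospheric correction and reflected downwelling terms is considerably more involved.

```python
import numpy as np

# Planck constants for spectral radiance in W m^-2 sr^-1 um^-1, wavelength in um
C1 = 1.19104e8   # 2*h*c^2
C2 = 1.43877e4   # h*c/k

def planck(wl_um, T):
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

def inv_planck(wl_um, L):
    return C2 / (wl_um * np.log(C1 / (wl_um**5 * L) + 1.0))

def tes(surface_radiance, wavelengths, a=0.994, b=0.687, c=0.737, eps_max=0.99):
    """Minimal Temperature Emissivity Separation on surface-leaving radiances."""
    L = np.asarray(surface_radiance, dtype=float)
    wl = np.asarray(wavelengths, dtype=float)

    # NEM module: assume a maximum emissivity and take the hottest band temperature
    T_nem = np.max(inv_planck(wl, L / eps_max))
    emis = L / planck(wl, T_nem)

    # Ratio module: relative (beta) emissivity spectrum
    beta = emis / emis.mean()

    # MMD module: empirical minimum-emissivity vs. spectral-contrast relation
    mmd = beta.max() - beta.min()
    eps_min = a - b * mmd**c
    emis = beta * (eps_min / beta.min())

    # Final temperature from the band with the highest emissivity
    k = int(np.argmax(emis))
    return inv_planck(wl[k], L[k] / emis[k]), emis

# Hypothetical example: three MODIS TIR bands (29, 31, 32) near 8.55, 11.0, 12.0 um
wl = [8.55, 11.0, 12.0]
L_surface = planck(np.array(wl), 300.0) * np.array([0.96, 0.97, 0.98])
print(tes(L_surface, wl))
```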
NASA Astrophysics Data System (ADS)
Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan
2018-05-01
Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in residential and tertiary sectors by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) as well as lower emissions compared to conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced in two parts. The first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to prove the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique shows compatibility with the thermal demand for district heating.
TRMM Version 7 Near-Realtime Data Products
NASA Technical Reports Server (NTRS)
Tocker, Erich Franz; Kelley, Owen
2012-01-01
The TRMM data system has been providing near-realtime data products to the community since late 1999. While the TRMM project never had near-realtime production requirements, the science and applications communities had a great interest in receiving TRMM data as quickly as possible. As a result, these NRT data are provided under a best-effort scenario but with the objective of having the swath data products available within three hours of data collection 90% of the time. In July of 2011 the Joint Precipitation Measurement Missions Science Team (JPST) authorized the reprocessing of TRMM mission data using the new version 7 algorithms. The reprocessing of the 14+ years of the mission was concluded within 30 days. Version 7 algorithms had substantial changes in the data product file formats both for data and metadata. In addition, the algorithms themselves had major modifications and improvements. The general approach to versioning up the NRT is to wait for the regular production algorithms to have run for a while and shake out any issues that might arise from the new version before updating the NRT products. Because of the substantial changes in data/metadata formats as well as the algorithm improvements themselves, the update of NRT to V7 followed an even more conservative path than usual. This was done to ensure that applications agencies and other users of the TRMM NRT would not be faced with short timeframes for conversion to the new format. This paper will describe the process by which the TRMM NRT was updated to V7 and the V7 data products themselves.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Liu, Qing; Colliander, Andreas; Conaty, Austin; Jackson, Thomas; Kimball, John
2015-01-01
During the post-launch SMAP calibration and validation (Cal/Val) phase there are two objectives for each science data product team: 1) calibrate, verify, and improve the performance of the science algorithm, and 2) validate the accuracy of the science data product as specified in the science requirements and according to the Cal/Val schedule. This report provides an assessment of the SMAP Level 4 Surface and Root Zone Soil Moisture Passive (L4_SM) product specifically for the product's public beta release scheduled for 30 October 2015. The primary objective of the beta release is to allow users to familiarize themselves with the data product before the validated product becomes available. The beta release also allows users to conduct their own assessment of the data and to provide feedback to the L4_SM science data product team. The assessment of the L4_SM data product includes comparisons of SMAP L4_SM soil moisture estimates with in situ soil moisture observations from core validation sites and sparse networks. The assessment further includes a global evaluation of the internal diagnostics from the ensemble-based data assimilation system that is used to generate the L4_SM product. This evaluation focuses on the statistics of the observation-minus-forecast (O-F) residuals and the analysis increments. Together, the core validation site comparisons and the statistics of the assimilation diagnostics are considered primary validation methodologies for the L4_SM product. Comparisons against in situ measurements from regional-scale sparse networks are considered a secondary validation methodology because such in situ measurements are subject to upscaling errors from the point-scale to the grid cell scale of the data product. Based on the limited set of core validation sites, the assessment presented here meets the criteria established by the Committee on Earth Observing Satellites for Stage 1 validation and supports the beta release of the data. The validation against sparse network measurements and the evaluation of the assimilation diagnostics address Stage 2 validation criteria by expanding the assessment to regional and global scales.
Using qualitative research to inform development of a diagnostic algorithm for UTI in children.
de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D
2013-06-01
Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment-funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in children aged <5 years presenting acutely unwell to primary care. Qualitative methods were applied using semi-structured interviews of 30 UK doctors and nurses working with young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these for presence or absence and terminology with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.
Memetic algorithms for de novo motif-finding in biomedical sequences.
Bi, Chengpeng
2012-09-01
The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++ and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient algorithm compared with the others; that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with or is superior to other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as rightly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
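A toy memetic motif finder in the same spirit (a genetic algorithm over alignment start positions with per-individual hill-climbing as the local search) is sketched below; the fitness function, operators, and planted "TATAAT" example are illustrative and do not reproduce MaMotif's chromosome operators.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = "ACGT"

def info_content(seqs, starts, w):
    """Information content (bits) of the alignment defined by one start per sequence."""
    counts = np.full((w, 4), 0.25)                   # pseudocounts
    for s, st in zip(seqs, starts):
        for j, ch in enumerate(s[st:st + w]):
            counts[j, ALPHA.index(ch)] += 1
    p = counts / counts.sum(axis=1, keepdims=True)
    return float((p * np.log2(p / 0.25)).sum())

def local_search(seqs, starts, w):
    """Hill-climb each start position to its best offset (the 'memetic' learning step)."""
    starts = starts.copy()
    for i, s in enumerate(seqs):
        best_j, best_f = starts[i], -np.inf
        for j in range(len(s) - w + 1):
            starts[i] = j
            f = info_content(seqs, starts, w)
            if f > best_f:
                best_j, best_f = j, f
        starts[i] = best_j
    return starts

def ma_motif(seqs, w=6, pop=20, gens=30):
    """Toy memetic motif finder: GA over start positions plus individual local search."""
    lens = [len(s) - w + 1 for s in seqs]
    popu = [np.array([rng.integers(L) for L in lens]) for _ in range(pop)]
    for _ in range(gens):
        popu = [local_search(seqs, ind, w) for ind in popu]       # individual learning
        popu.sort(key=lambda ind: -info_content(seqs, ind, w))
        parents = popu[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):                       # crossover + mutation
            a, b = rng.choice(len(parents), 2, replace=False)
            child = np.where(rng.random(len(seqs)) < 0.5, parents[a], parents[b])
            i = rng.integers(len(seqs))
            child[i] = rng.integers(lens[i])
            children.append(child)
        popu = parents + children
    best = max(popu, key=lambda ind: info_content(seqs, ind, w))
    return [s[st:st + w] for s, st in zip(seqs, best)]

# Plant a "TATAAT" motif in random background sequences and try to recover it
seqs = []
for _ in range(8):
    s = "".join(rng.choice(list(ALPHA), 40))
    k = rng.integers(0, 40 - 6)
    seqs.append(s[:k] + "TATAAT" + s[k + 6:])
print(ma_motif(seqs))
```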
Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes
NASA Technical Reports Server (NTRS)
Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.
2013-01-01
Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One such process to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS). DFS technology can be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method of coarse phasing segmented mirrors. DFS performance accuracy is dependent upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. In essence, the ADFS algorithm applies an angular dithering procedure to the extraction line and combines this dithering with an error function while minimizing the phase term of the fitted signal.
Clinical Guideline for Female Lower Urinary Tract Symptoms.
Takahashi, Satoru; Takei, Mineo; Nishizawa, Osamu; Yamaguchi, Osamu; Kato, Kumiko; Gotoh, Momokazu; Yoshimura, Yasukuni; Takeyama, Masami; Ozawa, Hideo; Shimada, Makoto; Yamanishi, Tomonori; Yoshida, Masaki; Tomoe, Hikaru; Yokoyama, Osamu; Koyama, Masayasu
2016-01-01
The "Japanese Clinical Guideline for Female Lower Urinary Tract Symptoms," published in Japan in November 2013, contains two algorithms (a primary and a specialized treatment algorithm) that are novel worldwide as they cover female lower urinary tract symptoms other than urinary incontinence. For primary treatment, necessary types of evaluation include querying the patient regarding symptoms and medical history, examining physical findings, and performing urinalysis. The types of evaluations that should be performed for select cases include evaluation with symptom/quality of life (QOL) questionnaires, urination records, residual urine measurement, urine cytology, urine culture, serum creatinine measurement, and ultrasonography. If the main symptoms are voiding/post-voiding, specialized treatment should be considered because multiple conditions may be involved. When storage difficulties are the main symptoms, the patient should be assessed using the primary algorithm. When conditions such as overactive bladder or stress incontinence are diagnosed and treatment is administered, but sufficient improvement is not achieved, the specialized algorithm should be considered. In case of specialized treatment, physiological re-evaluation, urinary tract/pelvic imaging evaluation, and urodynamic testing are conducted for conditions such as refractory overactive bladder and stress incontinence. There are two causes of voiding/post-voiding symptoms: lower urinary tract obstruction and detrusor underactivity. Lower urinary tract obstruction caused by pelvic organ prolapse may be improved by surgery. © 2015 Wiley Publishing Asia Pty Ltd.
Gonçalves-Araujo, Rafael; Rabe, Benjamin; Peeken, Ilka; Bracher, Astrid
2018-01-01
As consequences of global warming, sea-ice shrinking, permafrost thawing and changes in fresh water and terrestrial material export have already been reported in the Arctic environment. These processes impact light penetration and primary production. To reach a better understanding of the current status and to provide accurate forecasts, Arctic biogeochemical and physical parameters need to be extensively monitored. In this sense, bio-optical properties are useful to measure because optical instrumentation can be deployed on autonomous platforms, including satellites. This study characterizes the non-water absorbers and their coupling to hydrographic conditions in the poorly sampled surface waters of the central and eastern Arctic Ocean. Over the entire sampled area, colored dissolved organic matter (CDOM) dominates the light absorption in surface waters. The distribution of CDOM, phytoplankton and non-algal particle absorption reproduces the hydrographic variability in this region of the Arctic Ocean, which suggests a subdivision into five major bio-optical provinces: Laptev Sea Shelf, Laptev Sea, Central Arctic/Transpolar Drift, Beaufort Gyre and Eurasian/Nansen Basin. Evaluating ocean color algorithms commonly applied in the Arctic Ocean shows that global and regionally tuned empirical algorithms provide poor chlorophyll-a (Chl-a) estimates. The semi-analytical algorithms Generalized Inherent Optical Property model (GIOP) and Garver-Siegel-Maritorena (GSM), on the other hand, provide robust estimates of Chl-a and absorption of colored matter. Applying GSM with modifications proposed for the western Arctic Ocean produced reliable information on the absorption by colored matter, and specifically by CDOM. These findings highlight that only semi-analytical ocean color algorithms are able to identify with low uncertainty the distribution of the different optical water constituents in these high CDOM-absorbing waters. In addition, a clustering of the Arctic Ocean into bio-optical provinces will help to develop and then select province-specific ocean color algorithms.
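For context, the empirical class of algorithm that the authors find performs poorly in these CDOM-rich waters is the maximum-band-ratio type sketched below; the coefficients shown are the heritage SeaWiFS OC4(v4) values and are given only for illustration (operational products use updated coefficient sets).

```python
import numpy as np

# Heritage SeaWiFS OC4(v4) maximum-band-ratio coefficients (illustrative values)
A = [0.366, -3.067, 1.930, 0.649, -1.532]

def oc4_chl(rrs443, rrs490, rrs510, rrs555):
    """Empirical maximum-band-ratio chlorophyll-a (mg m^-3) from remote-sensing
    reflectances Rrs (sr^-1)."""
    mbr = np.maximum.reduce([rrs443, rrs490, rrs510]) / rrs555
    r = np.log10(mbr)
    log_chl = A[0] + A[1]*r + A[2]*r**2 + A[3]*r**3 + A[4]*r**4
    return 10.0**log_chl

# Example reflectances typical of clear open-ocean water
print(oc4_chl(0.010, 0.008, 0.005, 0.0025))
```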
A Low-Stress Algorithm for Fractions
ERIC Educational Resources Information Center
Ruais, Ronald W.
1978-01-01
An algorithm is given for the addition and subtraction of fractions based on dividing the sum of diagonal numerator and denominator products by the product of the denominators. As an explanation of the teaching method, activities used in teaching are demonstrated. (MN)
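A minimal sketch of the described rule for adding two fractions (subtraction is the same with a minus sign), with the result reduced to lowest terms:

```python
from math import gcd

def add_fractions(a, b, c, d):
    """a/b + c/d via the 'diagonal products over denominator product' rule."""
    num = a * d + c * b        # sum of the diagonal products
    den = b * d                # product of the denominators
    g = gcd(num, den)
    return num // g, den // g  # reduce to lowest terms

print(add_fractions(1, 2, 1, 3))   # -> (5, 6)
```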
A Constraint-Based Planner for Data Production
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Golden, Keith
2005-01-01
This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. This algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and actions to achieve them, and the inner backtracking works inside the subproblem associated with a selected action to find action parameter values. We show this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
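A toy sketch of the two-level idea, assuming a hypothetical planning domain: the outer level picks an action per subgoal and backtracks when the inner parameter search cannot satisfy the action's constraint.

```python
from itertools import product

# Hypothetical planning domain: actions with parameter domains and a constraint
ACTIONS = {
    "fetch":  {"params": {"resolution": [250, 500, 1000]},
               "ok": lambda p: p["resolution"] <= 500},
    "regrid": {"params": {"resolution": [250, 500, 1000], "method": ["nn", "bilinear"]},
               "ok": lambda p: p["method"] == "bilinear" or p["resolution"] == 250},
}
SUBGOALS = {"have_data": ["fetch"], "gridded": ["regrid"]}

def inner_backtrack(action):
    """Inner level: enumerate parameter assignments until the constraint holds."""
    spec = ACTIONS[action]
    names = list(spec["params"])
    for values in product(*(spec["params"][n] for n in names)):
        p = dict(zip(names, values))
        if spec["ok"](p):
            return p
    return None

def outer_backtrack(goals, plan=None):
    """Outer level: choose an action per subgoal; backtrack if no parameters work."""
    plan = plan or []
    if not goals:
        return plan
    goal, rest = goals[0], goals[1:]
    for action in SUBGOALS[goal]:
        params = inner_backtrack(action)           # inner-level search
        if params is not None:
            result = outer_backtrack(rest, plan + [(goal, action, params)])
            if result is not None:
                return result
    return None                                    # triggers outer-level backtracking

print(outer_backtrack(list(SUBGOALS)))
```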
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; Govind, Niranjan; Yang, Chao
2017-12-01
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
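A small numerical illustration of the structure being exploited: for symmetric M and symmetric positive definite K, the product MK is self-adjoint in the K-inner product, so its eigenvalues are real and coincide with those of a symmetric generalized problem. The matrices below are toy stand-ins, not the paper's TDDFT blocks or its Davidson/LOBPCG solvers.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6

def spd(n):
    """Random symmetric positive definite matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

K, M = spd(n), spd(n)
MK = M @ K

# MK is self-adjoint in the K-inner product <x, y>_K = x^T K y
x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = (MK @ x) @ K @ y          # <MK x, y>_K
rhs = x @ K @ (MK @ y)          # <x, MK y>_K
print(np.isclose(lhs, rhs))     # True

# Hence the eigenvalues of MK are real and match the symmetric generalized
# problem (K M K) z = lambda K z
evals_product = np.sort(np.linalg.eigvals(MK).real)
evals_sym = np.sort(eigh(K @ M @ K, K, eigvals_only=True))
print(np.allclose(evals_product, evals_sym))
```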
Plumes and Blooms: Observations, Analysis and Modeling for SIMBIOS
NASA Technical Reports Server (NTRS)
Siegel, D. A.; Maritorena, S.; Nelson, N. B.
2003-01-01
The goal of the Plumes and Blooms (PnB) project is to develop, validate and apply to imagery state-of-the-art ocean color algorithms for quantifying sediment plumes and phytoplankton blooms for the Case II environment of the Santa Barbara Channel. We conduct monthly to twice-monthly transect observations across the Santa Barbara Channel to develop an algorithm development and product validation data set. The PnB field program started in the summer of 1996. At each of the 7 PnB stations, a complete verification bio-geo-optical data set is collected. Included are redundant measures of apparent optical properties (remote sensing reflectance and diffuse attenuation spectra), as well as in situ profiles of spectral absorption, beam attenuation and backscattering coefficients. Water samples are analyzed for component in vivo absorption spectra, fluorometric chlorophyll, phytoplankton pigment (by the SDSU CHORS laboratory), and inorganic nutrient concentrations (Table 1). A primary goal is to use the PnB field data set to objectively tune semi-analytical models of ocean color for this site and apply them using available satellite imagery (SeaWiFS and MODIS). In support of this goal, we have also been addressing SeaWiFS ocean color and AVHRR SST imagery (Otero and Siegel, 2003). We also are using the PnB data set to address time/space variability of water masses in the Santa Barbara Channel and its relationship to the 1997/1998 El Niño. However, the comparison between PnB field observations and satellite estimates of primary products has been disappointing. We find that field estimates of water-leaving radiance, LwN(λ), correspond poorly to satellite estimates for both SeaWiFS and MODIS local area coverage imagery. We believe this is due to poor atmospheric correction caused by complex mixtures of aerosol types found in these near-coastal regions. Last, we remain active in outreach activities.
Plumes and Blooms: Modeling the Case II Waters of the Santa Barbara Channel. Chapter 15
NASA Technical Reports Server (NTRS)
Siegel, D. A.; Maritorena, S.; Nelson, N. B.
2003-01-01
The goal of the Plumes and Blooms (PnB) project is to develop, validate and apply to imagery state-of-the-art ocean color algorithms for quantifying sediment plumes and phytoplankton blooms for the Case II environment of the Santa Barbara Channel. We conduct monthly to twice-monthly transect observations across the Santa Barbara Channel to develop an algorithm development and product validation data set. The PnB field program started in the summer of 1996. At each of the 7 PnB stations, a complete verification bio-geo-optical data set is collected. Included are redundant measures of apparent optical properties (remote sensing reflectance and diffuse attenuation spectra), as well as in situ profiles of spectral absorption, beam attenuation and backscattering coefficients. Water samples are analyzed for component in vivo absorption spectra, fluorometric chlorophyll, phytoplankton pigment (by the SDSU CHORS laboratory), and inorganic nutrient concentrations. A primary goal is to use the PnB field data set to objectively tune semi-analytical models of ocean color for this site and apply them using available satellite imagery (SeaWiFS and MODIS). In support of this goal, we have also been addressing SeaWiFS ocean color and AVHRR SST imagery. We also are using the PnB data set to address time/space variability of water masses in the Santa Barbara Channel and its relationship to the 1997/1998 El Nino. However, the comparison between PnB field observations and satellite estimates of primary products has been disappointing. We find that field estimates of water-leaving radiance, LwN(λ), correspond poorly to satellite estimates for both SeaWiFS and MODIS local area coverage imagery. We believe this is due to poor atmospheric correction caused by complex mixtures of aerosol types found in these near-coastal regions. Last, we remain active in outreach activities.
Alemohammad, Seyed Hamed; Fang, Bin; Konings, Alexandra G; Aires, Filipe; Green, Julia K; Kolassa, Jana; Miralles, Diego; Prigent, Catherine; Gentine, Pierre
2017-01-01
A new global estimate of surface turbulent fluxes, latent heat flux (LE) and sensible heat flux (H), and gross primary production (GPP) is developed using a machine learning approach informed by novel remotely sensed Solar-Induced Fluorescence (SIF) and other radiative and meteorological variables. This is the first study to jointly retrieve LE, H and GPP using SIF observations. The approach uses an artificial neural network (ANN) with a target dataset generated from three independent data sources, weighted based on a triple collocation (TC) algorithm. The new retrieval, named Water, Energy, and Carbon with Artificial Neural Networks (WECANN), provides estimates of LE, H and GPP from 2007 to 2015 at 1° × 1° spatial resolution and at monthly time resolution. The quality of ANN training is assessed using the target data, and the WECANN retrievals are evaluated using eddy covariance tower estimates from the FLUXNET network across various climates and conditions. When compared to eddy covariance estimates, WECANN typically outperforms other products, particularly for sensible and latent heat fluxes. Analysing WECANN retrievals across three extreme drought and heatwave events demonstrates the capability of the retrievals in capturing the extent of these events. Uncertainty estimates of the retrievals are analysed and the inter-annual variability in average global and regional fluxes shows the impact of distinct climatic events - such as the 2015 El Niño - on surface turbulent fluxes and GPP.
NASA Astrophysics Data System (ADS)
Hamed Alemohammad, Seyed; Fang, Bin; Konings, Alexandra G.; Aires, Filipe; Green, Julia K.; Kolassa, Jana; Miralles, Diego; Prigent, Catherine; Gentine, Pierre
2017-09-01
A new global estimate of surface turbulent fluxes, latent heat flux (LE) and sensible heat flux (H), and gross primary production (GPP) is developed using a machine learning approach informed by novel remotely sensed solar-induced fluorescence (SIF) and other radiative and meteorological variables. This is the first study to jointly retrieve LE, H, and GPP using SIF observations. The approach uses an artificial neural network (ANN) with a target dataset generated from three independent data sources, weighted based on a triple collocation (TC) algorithm. The new retrieval, named Water, Energy, and Carbon with Artificial Neural Networks (WECANN), provides estimates of LE, H, and GPP from 2007 to 2015 at 1° × 1° spatial resolution and at monthly time resolution. The quality of ANN training is assessed using the target data, and the WECANN retrievals are evaluated using eddy covariance tower estimates from the FLUXNET network across various climates and conditions. When compared to eddy covariance estimates, WECANN typically outperforms other products, particularly for sensible and latent heat fluxes. Analyzing WECANN retrievals across three extreme drought and heat wave events demonstrates the capability of the retrievals to capture the extent of these events. Uncertainty estimates of the retrievals are analyzed and the interannual variability in average global and regional fluxes shows the impact of distinct climatic events - such as the 2015 El Niño - on surface turbulent fluxes and GPP.
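The covariance-based triple collocation weighting mentioned above can be sketched as follows; the three collocated series are synthetic stand-ins for independent flux estimates, and the classic assumptions (independent, zero-mean errors, common calibration) are taken for granted.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Classic covariance-based triple collocation: estimate the random error
    variance of three collocated, independent estimates of the same quantity."""
    cxy = np.cov(x, y)[0, 1]
    cxz = np.cov(x, z)[0, 1]
    cyz = np.cov(y, z)[0, 1]
    var_ex = np.var(x, ddof=1) - cxy * cxz / cyz
    var_ey = np.var(y, ddof=1) - cxy * cyz / cxz
    var_ez = np.var(z, ddof=1) - cxz * cyz / cxy
    return var_ex, var_ey, var_ez

# Synthetic "truth" plus independent noise of different magnitudes
rng = np.random.default_rng(0)
t = rng.standard_normal(10000) * 30 + 100            # latent heat flux, W m^-2
x = t + rng.standard_normal(t.size) * 5
y = t + rng.standard_normal(t.size) * 10
z = t + rng.standard_normal(t.size) * 20
print([round(v, 1) for v in triple_collocation_errors(x, y, z)])   # ~ [25, 100, 400]
```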
Production and Distribution of NASA MODIS Remote Sensing Products
NASA Technical Reports Server (NTRS)
Wolfe, Robert
2007-01-01
The two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on board NASA's Earth Observing System (EOS) Terra and Aqua satellites make key measurements for understanding the Earth's terrestrial ecosystems. Global time-series of terrestrial geophysical parameters have been produced from MODIS/Terra for over 7 years and for MODIS/Aqua for more than 4 1/2 years. These well calibrated instruments, a team of scientists, and large data production, archive and distribution systems have allowed for the development of a new suite of high quality product variables at spatial resolutions as fine as 250 m in support of global change research and natural resource applications. This talk describes the MODIS Science Team's products, with a focus on the terrestrial (land) products, the data processing approach and the process for monitoring and improving the product quality. The original MODIS science team was formed in 1989. The team's primary role is the development and implementation of the geophysical algorithms. In addition, the team provided feedback on the design and pre-launch testing of the instrument and helped guide the development of the data processing system. The key challenges the science team dealt with before launch were the development of algorithms for a new instrument and providing guidance for the large and complex multi-discipline processing system. The Land, Ocean and Atmosphere discipline teams drove the processing system requirements, particularly in the area of the processing loads and volumes needed to produce daily geophysical maps of the Earth at resolutions as fine as 250 m. The processing system had to handle a large number of data products, large data volumes and processing loads, and complex processing requirements. Prior to MODIS, daily global maps from heritage instruments, such as the Advanced Very High Resolution Radiometer (AVHRR), were not produced at resolutions finer than 5 km. The processing solution evolved into a combination of processing the lower level (Level 1) products and the higher level discipline-specific Land and Atmosphere products in the MODIS Science Investigator-led Processing System (SIPS), the MODIS Adaptive Processing System (MODAPS), and archive and distribution of the Land products to the user community by two of NASA's EOS Distributed Active Archive Centers (DAACs). Recently, a part of MODAPS, the Level 1 and Atmosphere Archive and Distribution System (LAADS), took over the role of archiving and distributing the Level 1 and Atmosphere products to the user community.
Colorimetric consideration of transparencies for a typical LACIE scene
NASA Technical Reports Server (NTRS)
Juday, R. D. (Principal Investigator)
1979-01-01
The production film converter used to produce LACIE imagery is described as well as schemes designed to provide the analyst with operational film products. Two of these products are discussed from the standpoint of color theory. Colorimetric terminology is defined and the mathematical calculations are given. Topics covered include (1) history of product 1 and 3 algorithm development; (2) colorimetric assumptions for product 1 and 3 algorithms; (3) qualitative results from a colorimetric analysis of a typical LACIE scene; and (4) image-to-image color stability.
Validation and Comparison of AATSR AOD L2 Products over China
NASA Astrophysics Data System (ADS)
Che, Yahui; Xue, Yong; Guang, Jie; Guo, Jianping; Li, Ying
2016-04-01
The Advanced Along-Track Scanning Radiometer (AATSR) aboard ENVISAT has been used to observe the Earth for more than 10 years, since 2002. One of the main applications of the AATSR instrument is observing atmospheric aerosol, in particular retrieving aerosol optical depth (AOD), taking advantage of its dual view, which helps to separate the aerosol contribution from the top-of-atmosphere reflectance (A. A. Kokhanovsky and de Leeuw, 2009). The Aerosol_CCI project, part of the European Space Agency's Climate Change Initiative (CCI), released new AATSR AOD products by the end of 2015, including the SU v4.21 product from the Swansea algorithm, the ADV v2.3 product from the ATSR-2/AATSR dual view aerosol retrieval algorithm (ADV), and the ORAC v03.04 product from the Oxford-RAL Retrieval of Aerosol and Cloud algorithm. The previous versions of these three AOD level 2 (L2) products had been validated over mainland China for 2008 (Che and Xue, 2015). In this paper, we validate the latest versions of these AATSR AOD products over mainland China for 2007, 2008 and 2010 by comparison with the AErosol RObotic NETwork (AERONET) and the China Aerosol Remote Sensing Network (CARSNET). The combination of AERONET and CARSNET helps to make up for the small number and uneven distribution of AERONET sites. The validation results show different performance of these AOD products over China. The SU and ADV products perform similarly, with correlation coefficients (CC) of about 0.8-0.9 and root mean square errors (RMS) within 0.15 in all three years, and both are sensitive to high AOD values (AOD > 1), which tend to be underestimated. However, the two products do differ: the SU algorithm retrieves more high AODs, leading to more space-time validation matches with ground-based data. The ORAC algorithm differs from the others in that it can retrieve both low and high AODs over different land-cover types; however, it also shows the largest uncertainty across the AOD range.
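The validation statistics quoted above (CC and RMS against ground-based AOD) amount to the following simple computation on collocated matchups; the sample values are hypothetical.

```python
import numpy as np

def validation_stats(aod_sat, aod_ground):
    """Correlation coefficient, RMS error, and bias for collocated satellite/ground AOD pairs."""
    sat, gnd = np.asarray(aod_sat), np.asarray(aod_ground)
    cc = np.corrcoef(sat, gnd)[0, 1]
    rms = np.sqrt(np.mean((sat - gnd) ** 2))
    bias = np.mean(sat - gnd)
    return cc, rms, bias

# Hypothetical collocated matchups
sat = [0.12, 0.35, 0.80, 1.40, 0.25, 0.60]
gnd = [0.10, 0.40, 0.85, 1.60, 0.22, 0.65]
print(validation_stats(sat, gnd))
```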
Algorithms for Coastal-Zone Color-Scanner Data
NASA Technical Reports Server (NTRS)
1986-01-01
Software for Nimbus-7 Coastal-Zone Color-Scanner (CZCS) derived products consists of a set of scientific algorithms for extracting information from CZCS-gathered data. The software uses the CZCS-generated Calibrated Radiance and Temperature (CRT) tape as input and outputs computer-compatible tape and film products.
Analysis of labor employment assessment on production machine to minimize time production
NASA Astrophysics Data System (ADS)
Hernawati, Tri; Suliawati; Sari Gumay, Vita
2018-03-01
Every company, whether in the service or manufacturing sector, is always trying to improve the efficiency of its resource use. One resource that has an important role is labor. Labor has different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results show that the existing labor assignment is not optimal because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yields a time saving of 16%.
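A minimal sketch of solving such an assignment problem with the Hungarian algorithm, here via SciPy's linear_sum_assignment; the worker/machine time matrix is hypothetical and is not the PT PDM data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical completion times (hours): rows = workers, columns = machines
times = np.array([
    [ 9, 11, 14, 11],
    [ 6, 15, 13, 13],
    [12, 13,  6,  8],
    [11,  9, 10,  9],
])

# The Hungarian algorithm finds the assignment minimizing total completion time
rows, cols = linear_sum_assignment(times)
for w, m in zip(rows, cols):
    print(f"worker {w} -> machine {m} ({times[w, m]} h)")
print("total time:", times[rows, cols].sum(), "h")
```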
Optical Detection of Degraded Therapeutic Proteins.
Herrington, William F; Singh, Gajendra P; Wu, Di; Barone, Paul W; Hancock, William; Ram, Rajeev J
2018-03-23
The quality of therapeutic proteins such as hormones, subunit and conjugate vaccines, and antibodies is critical to the safety and efficacy of modern medicine. Identifying malformed proteins at the point-of-care can prevent adverse immune reactions in patients; this is of special concern when there is an insecure supply chain resulting in the delivery of degraded, or even counterfeit, drug product. Identification of degraded protein, for example human growth hormone, is demonstrated by applying automated anomaly detection algorithms. Detection of the degraded protein differs from previous applications of machine-learning and classification to spectral analysis: only example spectra of genuine, high-quality drug products are used to construct the classifier. The algorithm is tested on Raman spectra acquired on protein dilutions typical of formulated drug product and at sample volumes of 25 µL, below the typical overfill (waste) volumes present in vials of injectable drug product. The algorithm is demonstrated to correctly classify anomalous recombinant human growth hormone (rhGH) with 92% sensitivity and 98% specificity even when the algorithm has only previously encountered high-quality drug product.
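A minimal sketch of the one-class setting described above (training only on spectra of genuine product, then flagging departures), using synthetic Raman-like spectra and a one-class SVM rather than the authors' specific anomaly-detection algorithm.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 200)

def spectrum(shift=0.0, n=1):
    """Synthetic 'Raman-like' spectra: two bands plus noise; `shift` mimics degradation."""
    peaks = (np.exp(-0.5 * ((wavenumbers - (1005 + shift)) / 8) ** 2)
             + 0.6 * np.exp(-0.5 * ((wavenumbers - 1450) / 15) ** 2))
    return peaks + rng.normal(0, 0.02, size=(n, wavenumbers.size))

# Train ONLY on spectra of genuine, high-quality product (one-class setting)
good = spectrum(0.0, n=60)
clf = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
clf.fit(good)

# Score unseen good product and a degraded product with a shifted band
print(clf.predict(spectrum(0.0, n=5)))    # mostly +1 (normal)
print(clf.predict(spectrum(12.0, n=5)))   # mostly -1 (anomalous)
```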
NASA Technical Reports Server (NTRS)
Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.
2016-01-01
This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands 29, 31, and 32, and will be generated for data from the NASA-EOS AM and PM platforms. This is version 1.0 of the ATBD and the goal is to maintain a 'living' version of this document, with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32, and a physics-based day/night algorithm (Wan and Li 1997), which produces a 5-km (C4) and 6-km (C5) LST product and emissivity for seven MODIS bands: 20, 22, 23, 29, 31-33.
The application of dynamic programming in production planning
NASA Astrophysics Data System (ADS)
Wu, Run
2017-05-01
Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates a huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits while generating the smallest overhead. As one of the common algorithm design techniques, dynamic programming is used to solve problems that exhibit some form of optimal structure. When solving problems with a large number of sub-problems that require repeated calculation, the ordinary recursive method consumes exponential time, whereas a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared with other algorithms, reducing the computational complexity and enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solution steps and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
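A minimal dynamic programming sketch for a toy production planning (lot-sizing) problem is given below; the demands and cost parameters are hypothetical, and the state is the end-of-period inventory so that repeated sub-problems are memoized rather than recomputed.

```python
from functools import lru_cache

# Hypothetical single-product planning data: demand per period, per-setup cost,
# unit production cost, and per-unit holding cost carried to the next period.
demand = [3, 2, 4, 1, 5]
setup_cost = 8.0
unit_cost = 1.0
hold_cost = 0.5
MAX_INV = sum(demand)          # inventory never needs to exceed remaining demand

@lru_cache(maxsize=None)
def best(period, inventory):
    """Minimum cost to satisfy demand from `period` onward, starting with `inventory`."""
    if period == len(demand):
        return 0.0
    need = demand[period]
    options = []
    for produce in range(0, MAX_INV + 1):
        available = inventory + produce
        if available < need:
            continue                       # demand must be met (no backorders)
        end_inv = available - need
        cost = (setup_cost if produce > 0 else 0.0) \
               + unit_cost * produce + hold_cost * end_inv
        options.append(cost + best(period + 1, end_inv))
    return min(options)

print(best(0, 0))   # minimum total production + setup + holding cost
```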
Ren, Shaogang; Zeng, Bo; Qian, Xiaoning
2013-01-01
Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely in use in modern metabolic engineering. Flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies recently has been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wide-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.
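A toy illustration of the MOMA inner problem described above (not the MOMAKnock bi-level formulation or its piecewise linearization): the knockout flux distribution is taken as the feasible steady-state flux closest, in the least-squares sense, to the wild-type flux. The tiny network, bounds and wild-type fluxes are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def moma_flux(S, v_wt, bounds, knockout_idx):
        """Minimize ||v - v_wt||^2 subject to S v = 0, flux bounds, and v[knockout] = 0."""
        bounds = [(0.0, 0.0) if i in knockout_idx else b for i, b in enumerate(bounds)]
        objective = lambda v: np.sum((v - v_wt) ** 2)
        constraints = {"type": "eq", "fun": lambda v: S @ v}   # steady-state mass balance
        res = minimize(objective, x0=np.zeros(len(v_wt)), method="SLSQP",
                       bounds=bounds, constraints=constraints)
        return res.x

    # toy network: two reactions feeding a single internal metabolite and one drain reaction
    S = np.array([[1.0, 1.0, -1.0]])
    v_wt = np.array([4.0, 6.0, 10.0])        # hypothetical wild-type flux distribution
    bounds = [(0, 10), (0, 10), (0, 20)]
    v_ko = moma_flux(S, v_wt, bounds, knockout_idx={0})   # knock out reaction 0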
Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria
NASA Astrophysics Data System (ADS)
Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.
2009-04-01
Sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates derived from ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the off-line algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adapted for Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the necessary information for performing the comparison at plot level. To ensure independence of satellite-driven and ground-based predictions, only latitude and longitude for each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems like the differing dates of field campaigns (<1999) and acquisition of satellite images (2000-2005) or incompatible productivity definitions within the methods, and come up with a framework for combining terrestrial and satellite data based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from model results; however, the correlation between satellite-derived and terrestrial estimates is relatively poor. The difference in scale (9 km² for MODIS versus 1000 m² for the sample plots), together with the heterogeneous landscape, may explain the low correlation, particularly as the correlation increases when strongly fragmented sites are left out.
Quint, Jennifer K; Müllerova, Hana; DiSantostefano, Rachael L; Forbes, Harriet; Eaton, Susan; Hurst, John R; Davis, Kourtney; Smeeth, Liam
2014-01-01
Objectives The optimal method of identifying people with chronic obstructive pulmonary disease (COPD) from electronic primary care records is not known. We assessed the accuracy of different approaches using the Clinical Practice Research Datalink (CPRD), a UK electronic health record database. Setting 951 participants registered with a CPRD practice in the UK between 1 January 2004 and 31 December 2012. Individuals were selected by ≥1 of 8 algorithms to identify people with COPD. General practitioners were sent a brief questionnaire, and additional evidence to support a COPD diagnosis was requested. All information received was reviewed independently by two respiratory physicians whose opinion was taken as the gold standard. Primary outcome measure The primary measure of accuracy was the positive predictive value (PPV), the proportion of people identified by each algorithm for whom COPD was confirmed. Results 951 questionnaires were sent and 738 (78%) returned. After quality control, 696 (73.2%) patients were included in the final analysis. All four algorithms including a specific COPD diagnostic code performed well. Using a diagnostic code alone, the PPV was 86.5% (77.5–92.3%); requiring a diagnosis plus spirometry plus specific medication gave a slightly higher PPV of 89.4% (80.7–94.5%) but reduced case numbers by 10%. Algorithms without specific diagnostic codes had low PPVs (range 12.2–44.4%). Conclusions Patients with COPD can be accurately identified from UK primary care records using specific diagnostic codes. Requiring spirometry or COPD medications only marginally improved accuracy. The high accuracy applies since the introduction of an incentivised disease register for COPD as part of the Quality and Outcomes Framework in 2004. PMID:25056980
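For reference, the accuracy measures reported in such validation studies follow directly from a two-by-two confusion matrix; the counts below are invented for illustration and are not the study's data.

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Positive predictive value, negative predictive value, sensitivity and specificity."""
        return {
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        }

    # hypothetical counts for one case-finding algorithm versus the gold standard
    print(diagnostic_accuracy(tp=450, fp=70, fn=60, tn=116))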
Harmonization of forest disturbance datasets of the conterminous USA from 1986 to 2011
Soulard, Christopher E.; Acevedo, William; Cohen, Warren B.; Yang, Zhiqiang; Stehman, Stephen V.; Taylor, Janis L.
2017-01-01
Several spatial forest disturbance datasets exist for the conterminous USA. The major problem with forest disturbance mapping is that variability between map products leads to uncertainty regarding the actual rate of disturbance. In this article, harmonized maps were produced from multiple data sources (i.e., Global Forest Change, LANDFIRE Vegetation Disturbance, National Land Cover Database, Vegetation Change Tracker, and Web-Enabled Landsat Data). The harmonization process involved fitting common class ontologies and determining spatial congruency to produce forest disturbance maps for four time intervals (1986–1992, 1992–2001, 2001–2006, and 2006–2011). Pixels mapped as disturbed for two or more datasets were labeled as disturbed in the harmonized maps. The primary advantage gained by harmonization was improvement in commission error rates relative to the individual disturbance products. Disturbance omission errors were high for both harmonized and individual forest disturbance maps due to underlying limitations in mapping subtle disturbances with Landsat classification algorithms. To enhance the value of the harmonized disturbance products, we used fire perimeter maps to add information on the cause of disturbance.
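A minimal sketch of the agreement rule described above, in which a pixel is labeled disturbed when two or more input products map it as disturbed; it assumes the products have already been co-registered into boolean arrays of identical shape.

    import numpy as np

    def harmonize_disturbance(layers, min_agreement=2):
        """layers: list of boolean 2-D arrays, one per disturbance product."""
        votes = np.sum(np.stack(layers).astype(np.uint8), axis=0)
        return votes >= min_agreement

    # hypothetical 3x3 example with three input products
    a = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=bool)
    b = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
    c = np.array([[0, 0, 0], [0, 1, 1], [0, 0, 1]], dtype=bool)
    harmonized = harmonize_disturbance([a, b, c])   # True where >= 2 products agree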
Comparison of OPC job prioritization schemes to generate data for mask manufacturing
NASA Astrophysics Data System (ADS)
Lewis, Travis; Veeraraghavan, Vijay; Jantzen, Kenneth; Kim, Stephen; Park, Minyoung; Russell, Gordon; Simmons, Mark
2015-03-01
Delivering mask ready OPC corrected data to the mask shop on-time is critical for a foundry to meet the cycle time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. In order to maximize computer cluster efficiency, minimize the cost of the data processing and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each of the scheduling algorithms will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms to the OPC data processing environment to meet the goals of on-time delivery of mask ready OPC data while maximizing efficiency and reducing cost.
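A toy sketch of one of the policy differences discussed above, comparing submission-order and priority-based allocation of a fixed pool of compute slots; the job names, runtimes and priorities are hypothetical, and the sketch ignores most details of a real OPC tape-out environment.

    import heapq

    def schedule(jobs, slots, by_priority):
        """jobs: list of (name, runtime, priority); lower priority value runs first."""
        order = sorted(jobs, key=lambda j: j[2]) if by_priority else list(jobs)
        pool = [0.0] * slots                 # next free time of each compute slot
        heapq.heapify(pool)
        finish_times = {}
        for name, runtime, priority in order:
            start = heapq.heappop(pool)      # earliest available slot
            heapq.heappush(pool, start + runtime)
            finish_times[name] = start + runtime
        return finish_times

    jobs = [("maskA", 8, 3), ("maskB", 3, 1), ("maskC", 5, 2)]   # hypothetical OPC jobs
    print(schedule(jobs, slots=2, by_priority=False))   # submission order
    print(schedule(jobs, slots=2, by_priority=True))    # hot lots first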
ERIC Educational Resources Information Center
Vrachnos, Euripides; Jimoyiannis, Athanassios
2017-01-01
Developing students' algorithmic and computational thinking is currently a major objective for primary and secondary education in many countries around the globe. The literature suggests that students face various difficulties in programming processes because of their mental models of basic programming constructs. Arrays constitute the first…
Development of the Science Data System for the International Space Station Cold Atom Lab
NASA Technical Reports Server (NTRS)
van Harmelen, Chris; Soriano, Melissa A.
2015-01-01
Cold Atom Laboratory (CAL) is a facility that will enable scientists to study ultra-cold quantum gases in a microgravity environment on the International Space Station (ISS) beginning in 2016. The primary science data for each experiment consists of two images taken in quick succession. The first image is of the trapped cold atoms and the second image is of the background. The two images are subtracted to obtain optical density. These raw Level 0 atom and background images are processed into the Level 1 optical density data product, and then into the Level 2 data products: atom number, Magneto-Optical Trap (MOT) lifetime, magnetic chip-trap atom lifetime, and condensate fraction. These products can also be used as diagnostics of the instrument health. With experiments being conducted for 8 hours every day, the amount of data being generated poses many technical challenges, such as downlinking and managing the required data volume. A parallel processing design is described, implemented, and benchmarked. In addition to optimizing the data pipeline, accuracy and speed in producing the Level 1 and 2 data products is key. Algorithms for feature recognition are explored, facilitating image cropping and accurate atom number calculations.
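A minimal sketch of the Level 0 to Level 1 step described above, forming an optical density image from the atom and background frames; the logarithmic absorption-imaging form, the clipping floor and the atom-number scaling are assumptions, not the CAL pipeline's exact formulas.

    import numpy as np

    def optical_density(atom_img, background_img, floor=1.0):
        """Level 1 optical density from a pair of Level 0 frames (arrays of equal shape)."""
        atom = np.clip(atom_img.astype(float), floor, None)
        background = np.clip(background_img.astype(float), floor, None)
        return -np.log(atom / background)     # larger OD where atoms absorb more light

    def atom_number(od, pixel_area, cross_section):
        # summing OD over the cloud region gives a quantity proportional to atom number
        return od.sum() * pixel_area / cross_section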
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
NASA Astrophysics Data System (ADS)
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage) and it requires only the knowledge of the frequency of the component to be eliminated. A signal processing technique using modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method stays effective as it converges to a minimum error and brings out a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after its estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is proved with a dc/ac inverter as a simulation example. Then, the results are compared with existing ADALINE algorithm for illustrating its effectiveness.
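A minimal sketch in the spirit of ADALINE-based harmonic estimation (a plain LMS update, not the authors' modified algorithm): the signal is modeled as a sum of sine and cosine terms at the selected harmonic frequencies, and the adaptive weights track their amplitudes.

    import numpy as np

    def adaline_harmonics(signal, fs, f0, harmonics, mu=0.01):
        """Track amplitudes of selected harmonics of fundamental f0 in a signal sampled at fs."""
        w = np.zeros(2 * len(harmonics))
        t = np.arange(len(signal)) / fs
        for n, y in enumerate(signal):
            x = np.concatenate([[np.sin(2 * np.pi * h * f0 * t[n]),
                                 np.cos(2 * np.pi * h * f0 * t[n])] for h in harmonics])
            e = y - w @ x                      # instantaneous estimation error
            w = w + 2 * mu * e * x             # LMS weight update
        amps = [np.hypot(w[2 * i], w[2 * i + 1]) for i in range(len(harmonics))]
        return dict(zip(harmonics, amps))

    # hypothetical test: 50 Hz fundamental with a smaller 5th-harmonic component
    fs, f0 = 5000, 50
    t = np.arange(0, 0.5, 1 / fs)
    y = 10 * np.sin(2 * np.pi * f0 * t) + 2 * np.sin(2 * np.pi * 5 * f0 * t)
    print(adaline_harmonics(y, fs, f0, harmonics=[1, 5]))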
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzatu, Adrian; /McGill U.
2006-08-01
Improving our ability to identify the top quark pair (t{bar t}) primary vertex (PV) on an event-by-event basis is essential for many analyses in the lepton-plus-jets channel performed by the Collider Detector at Fermilab (CDF) Collaboration. We compare the algorithm currently used by CDF (A1) with another algorithm (A2) using Monte Carlo simulation at high instantaneous luminosities. We confirm that A1 is more efficient than A2 at selecting the t{bar t} PV at all PV multiplicities, both with efficiencies larger than 99%. Event selection rejects events with a distance larger than 5 cm along the proton beam between the t{bar t} PV and the charged lepton. We find flat distributions for the signal over background significance of this cut for all cut values larger than 1 cm, for all PV multiplicities and for both algorithms. We conclude that any cut value larger than 1 cm is acceptable for both algorithms under the Tevatron's expected instantaneous luminosity improvements.
Production facility layout by comparing moment displacement using BLOCPLAN and ALDEP Algorithms
NASA Astrophysics Data System (ADS)
Tambunan, M.; Ginting, E.; Sari, R. M.
2018-02-01
Production floor layout covers the arrangement of machinery, materials, and all equipment used in the production process within the available area. PT. XYZ is a company that manufactures rubber and rubber compounds for retreading tires with hot and cold curing systems. Production at PT. XYZ is divided into three interrelated parts, namely the Masterbatch Department, the Compound Department, and the Procured Thread Line Department. PT. XYZ has a production process with an irregular material flow and a complicated machine arrangement, and the layout needs to be redesigned. The purpose of this study is to compare moment displacement using the BLOCPLAN and ALDEP algorithms in order to redesign the existing layout. The layout of the production floor is redesigned by applying the BLOCPLAN and ALDEP algorithms, which are used to find the best layout design by comparing moment displacement and the flow pattern. The moment displacement of the company's current production floor layout amounts to 2,090,578.5 meters per year, and the material flow pattern is irregular. Based on the calculation, the moment displacement for BLOCPLAN is 1,551,344.82 meters per year and for ALDEP is 1,600,179 meters per year. The resulting material flow follows a straight-line pattern.
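A small sketch of the moment-displacement measure on which the comparison above is based: total moment is the sum, over all department pairs, of the material flow multiplied by the travel distance between department centroids. The flow matrix and coordinates are hypothetical, and rectilinear distance is assumed.

    def moment_displacement(flow, centroids):
        """flow[i][j]: material moved from department i to j (e.g. kg/year);
        centroids[i]: (x, y) of department i. Returns total flow x distance."""
        total = 0.0
        for i, row in enumerate(flow):
            for j, f in enumerate(row):
                if f:
                    dx = abs(centroids[i][0] - centroids[j][0])
                    dy = abs(centroids[i][1] - centroids[j][1])
                    total += f * (dx + dy)     # rectilinear travel distance
        return total

    # hypothetical three-department layouts compared under the same flow matrix
    flow = [[0, 120, 0], [0, 0, 80], [0, 0, 0]]
    layout_a = [(0, 0), (30, 0), (60, 0)]
    layout_b = [(0, 0), (30, 0), (30, 20)]
    print(moment_displacement(flow, layout_a), moment_displacement(flow, layout_b))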
NASA Astrophysics Data System (ADS)
Chen, M.; Butler, E. E.; Wythers, K. R.; Kattge, J.; Ricciuto, D. M.; Thornton, P. E.; Atkin, O. K.; Flores-Moreno, H.; Reich, P. B.
2017-12-01
In order to better estimate the carbon budget of the globe, accurately simulating gross primary productivity (GPP) in earth system models is critical. When upscaling leaf level photosynthesis to the canopy, climate models use different big-leaf schemes. About half of the state-of-the-art earth system models use a "two-big-leaf" scheme that partitions canopies into direct and diffusively illuminated fractions to reduce the high bias of GPP simulated by one-big-leaf models. Some two-big-leaf models, such as ACME (identical in this respect to CLM 4.5), add leaf area index (LAI) and stem area index (SAI) together when calculating canopy radiation transfer. This treatment, however, results in a higher fraction of sunlit leaves. It also leads to an artificial overestimation of canopy nitrogen content. Here we introduce a new algorithm for simulating SAI in a two-big-leaf model. The new algorithm reduced the sunlit leaf fraction of the canopy and conserved the nitrogen content from leaf to canopy level. The lower fraction of sunlit leaves reduced global GPP, especially in tropical areas. Compared to the default model, for the past 100 years (1909-2009), the averaged global annual GPP is lowered by 4.11 PgC year-1 using this new algorithm.
Algorithms for in-season nutrient management in cereals
USDA-ARS?s Scientific Manuscript database
The demand for improved decision making products for cereal production systems has placed added emphasis on using plant sensors in-season, and that incorporate real-time, site specific, growing environments. The objective of this work was to describe validated in-season sensor based algorithms prese...
Car painting process scheduling with harmony search algorithm
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.
2018-02-01
Automotive painting programs paint the car body using robots, improving the efficiency of the production system. The production system becomes more efficient when attention is paid to the scheduling of car orders, which is done by considering the body shape of the cars to be painted. Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same direction along the same path. Scheduling problems arise when there are n jobs to be processed on the machines: it must be specified which job is done first and how jobs are allocated to the machines to obtain a scheduled production process. The Harmony Search Algorithm is a metaheuristic optimization algorithm based on music, inspired by the way musicians search for a perfect harmony; this search for musical harmony is analogous to the search for an optimum in the optimization process. Based on the tests that have been performed, the optimal car sequence with the minimum makespan value was obtained.
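A minimal sketch of the flow-shop makespan objective that the harmony search minimizes above; the metaheuristic itself is replaced here by exhaustive enumeration over a small hypothetical instance, so only the objective function reflects the scheduling model.

    from itertools import permutations

    def makespan(order, proc):
        """proc[job][machine]: processing time; jobs flow through the machines in the same order."""
        m = len(proc[0])
        finish = [0.0] * m                      # completion time on each machine
        for job in order:
            for k in range(m):
                start = max(finish[k], finish[k - 1] if k > 0 else 0.0)
                finish[k] = start + proc[job][k]
        return finish[-1]

    # hypothetical painting times of 4 car bodies on 3 sequential stations
    proc = [[5, 3, 4], [2, 6, 3], [4, 4, 2], [3, 2, 5]]
    best = min(permutations(range(len(proc))), key=lambda order: makespan(order, proc))
    print(best, makespan(best, proc))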
Li, Dan; Hu, Xiaoguang
2017-03-01
Because of the high availability requirements from weapon equipment, an in-depth study has been conducted on the real-time fault-tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion sum differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault-tolerance, and the newly developed method can effectively improve the system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
The algorithm of central axis in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang
2017-09-01
Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is a current focus of research. Among the various surface reconstruction algorithms, reconstruction based on the central (medial) axis is an important method. This paper summarizes the medial axis algorithms used for reconstruction, points out the problems that exist in the various methods and the aspects that need improvement, and also discusses subsequent surface reconstruction and the development of the axis-based approach.
Downing, Harriet; Thomas-Jones, Emma; Gal, Micaela; Waldron, Cherry-Ann; Sterne, Jonathan; Hollingworth, William; Hood, Kerenza; Delaney, Brendan; Little, Paul; Howe, Robin; Wootton, Mandy; Macgowan, Alastair; Butler, Christopher C; Hay, Alastair D
2012-07-19
Urinary tract infection (UTI) is common in children, and may cause serious illness and recurrent symptoms. However, obtaining a urine sample from young children in primary care is challenging and not feasible for large numbers. Evidence regarding the predictive value of symptoms, signs and urinalysis for UTI in young children is urgently needed to help primary care clinicians better identify children who should be investigated for UTI. This paper describes the protocol for the Diagnosis of Urinary Tract infection in Young children (DUTY) study. The overall study aim is to derive and validate a cost-effective clinical algorithm for the diagnosis of UTI in children presenting to primary care acutely unwell. DUTY is a multicentre, diagnostic and prospective observational study aiming to recruit at least 7,000 children aged before their fifth birthday, being assessed in primary care for any acute, non-traumatic, illness of ≤ 28 days duration. Urine samples will be obtained from eligible consented children, and data collected on medical history and presenting symptoms and signs. Urine samples will be dipstick tested in general practice and sent for microbiological analysis. All children with culture positive urines and a random sample of children with urine culture results in other, non-positive categories will be followed up to record symptom duration and healthcare resource use. A diagnostic algorithm will be constructed and validated and an economic evaluation conducted. The primary outcome will be a validated diagnostic algorithm using a reference standard of a pure/predominant growth of at least >10³, but usually >10⁵ CFU/mL of one, but no more than two uropathogens. We will use logistic regression to identify the clinical predictors (i.e. demographic, medical history, presenting signs and symptoms and urine dipstick analysis results) most strongly associated with a positive urine culture result. We will then use economic evaluation to compare the cost effectiveness of the candidate prediction rules. This study will provide novel, clinically important information on the diagnostic features of childhood UTI and the cost effectiveness of a validated prediction rule, to help primary care clinicians improve the efficiency of their diagnostic strategy for UTI in young children.
NASA Astrophysics Data System (ADS)
Olsen, Kevin S.; Toon, Geoffrey C.; Boone, Chris D.; Strong, Kimberly
2016-03-01
Motivated by the initial selection of a high-resolution solar occultation Fourier transform spectrometer (FTS) to fly to Mars on the ExoMars Trace Gas Orbiter, we have been developing algorithms for retrieving volume mixing ratio vertical profiles of trace gases, the primary component of which is a new algorithm and software for retrieving vertical profiles of temperature and pressure from the spectra. In contrast to Earth-observing instruments, which can rely on accurate meteorological models, a priori information, and spacecraft position, Mars retrievals require a method with minimal reliance on such data. The temperature and pressure retrieval algorithms developed for this work were evaluated using Earth-observing spectra from the Atmospheric Chemistry Experiment (ACE) FTS, a solar occultation instrument in orbit since 2003, and the basis for the instrument selected for a Mars mission. ACE-FTS makes multiple measurements during an occultation, separated in altitude by 1.5-5 km, and we analyse 10 CO2 vibration-rotation bands at each altitude, each with a different usable altitude range. We describe the algorithms and present results of their application and their comparison to the ACE-FTS data products. The Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) provides vertical profiles of temperature up to 40 km with high vertical resolution. Using six satellites and GPS radio occultation, COSMIC's data product has excellent temporal and spatial coverage, allowing us to find coincident measurements with ACE with very tight criteria: less than 1.5 h and 150 km. We present an intercomparison of temperature profiles retrieved from ACE-FTS using our algorithm, that of the ACE Science Team (v3.5), and from COSMIC. When our retrievals are compared to ACE-FTS v3.5, we find mean differences between -5 and +2 K and that our retrieved profiles have no seasonal or zonal biases but do have a warm bias in the stratosphere and a cold bias in the mesosphere. When compared to COSMIC, we do not observe a warm/cool bias and mean differences are between -4 and +1 K. COSMIC comparisons are restricted to below 40 km, where our retrievals have the best agreement with ACE-FTS v3.5. When comparing ACE-FTS v3.5 to COSMIC we observe a cold bias in COSMIC of 0.5 K, and mean differences are between -0.9 and +0.6 K.
Chatzistamatiou, Kimon; Moysiadis, Theodoros; Moschaki, Viktoria; Panteleris, Nikolaos; Agorastos, Theodoros
2016-07-01
The objective of the present study was to identify the most effective cervical cancer screening algorithm incorporating different combinations of cytology, HPV testing and genotyping. Women 25-55 years old recruited for the "HERMES" (HEllenic Real life Multicentric cErvical Screening) study were screened in terms of cytology and high-risk (hr) HPV testing with HPV 16/18 genotyping. Women positive for cytology and/or hrHPV were referred for colposcopy, biopsy and treatment. Ten screening algorithms based on different combinations of cytology, HPV testing and HPV 16/18 genotyping were investigated in terms of diagnostic accuracy. Three clusters of algorithms were formed according to the balance between effectiveness and harm caused by screening. The cluster showing the best balance included two algorithms based on co-testing and two based on HPV primary screening with HPV 16/18 genotyping. Among these, hrHPV testing with HPV 16/18 genotyping and reflex cytology (atypical squamous cells of undetermined significance - ASCUS threshold) presented the optimal combination of sensitivity (82.9%) and specificity relative to cytology alone (0.99) with 1.26 false positive rate relative to cytology alone. HPV testing with HPV 16/18 genotyping, referring HPV 16/18 positive women directly to colposcopy, and hrHPV (non 16/18) positive women to reflex cytology (ASCUS threshold), as a triage method to colposcopy, reflects the best equilibrium between screening effectiveness and harm. Algorithms based on cytology as the initial screening method, on co-testing or HPV primary screening without genotyping, and on HPV primary screening with genotyping but without cytology triage are not supported according to the present analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Aref-Eshghi, Erfan; Oake, Justin; Godwin, Marshall; Aubrey-Bassler, Kris; Duke, Pauline; Mahdavian, Masoud; Asghari, Shabnam
2017-03-01
The objective of this study was to define the optimal algorithm to identify patients with dyslipidemia using electronic medical records (EMRs). EMRs of patients attending primary care clinics in St. John's, Newfoundland and Labrador (NL), Canada during 2009-2010, were studied to determine the best algorithm for identification of dyslipidemia. Six algorithms containing three components, dyslipidemia ICD coding, lipid lowering medication use, and abnormal laboratory lipid levels, were tested against a gold standard, defined as the existence of any of the three criteria. Linear discriminant analysis and bootstrapping were performed following sensitivity/specificity testing and receiver operating characteristic (ROC) curve analysis. Two validating datasets, NL records of 2011-2014, and Canada-wide records of 2010-2012, were used to replicate the results. Relative to the gold standard, combining laboratory data together with lipid lowering medication consumption yielded the highest sensitivity (99.6%), NPV (98.1%), Kappa agreement (0.98), and area under the curve (AUC, 0.998). The linear discriminant analysis for this combination resulted in an error rate of 0.15 and an eigenvalue of 1.99, and the bootstrapping led to AUC: 0.998, 95% confidence interval: 0.997-0.999, Kappa: 0.99. This algorithm in the first validating dataset yielded a sensitivity of 97%, Negative Predictive Value (NPV) = 83%, Kappa = 0.88, and AUC = 0.98. These figures for the second validating data set were 98%, 93%, 0.95, and 0.99, respectively. Combining laboratory data with lipid lowering medication consumption within the EMR is the best algorithm for detecting dyslipidemia. These results can generate standardized information systems for dyslipidemia and other chronic disease investigations using EMRs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Gaudiot, J.-L.
1991-12-31
Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple root node-based production system interpreter is presented to investigate its relative algorithmic behavior over the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential machine Sun 4/490 by using both interpreters and various experimental results are presented. Their investigation indicates that the multiple root node-based production system interpreter would give up to a 6-fold improvement over the Lisp implementation of the Rete-based Ops5 for the match step.
A novel iris localization algorithm using correlation filtering
NASA Astrophysics Data System (ADS)
Pohit, Mausumi; Sharma, Jitu
2015-06-01
Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we have presented a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. Pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less intensive than existing algorithms such as the Hough transform.
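A minimal sketch of the correlation-filtering idea for locating the pupil: the eye image is correlated with a dark-disk template and the correlation peak is taken as the pupil centre. The disk template and plain matched filtering are assumptions; the paper's actual filter design may differ.

    import numpy as np
    from scipy.signal import fftconvolve

    def locate_pupil(eye_image, radius):
        """Return (row, col) of the best match of a dark disk of the given radius."""
        img = eye_image.astype(float)
        img = img - img.mean()
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        template = -(xx ** 2 + yy ** 2 <= radius ** 2).astype(float)  # dark disk is negative
        template = template - template.mean()
        # correlation implemented as convolution with the flipped template
        corr = fftconvolve(img, template[::-1, ::-1], mode="same")
        return np.unravel_index(np.argmax(corr), corr.shape)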
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower, disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
Whyte, Joanna L; Engel-Nitz, Nicole M; Teitelbaum, April; Gomez Rey, Gabriel; Kallich, Joel D
2015-07-01
Administrative health care claims data are used for epidemiologic, health services, and outcomes cancer research and thus play a significant role in policy. Cancer stage, which is often a major driver of cost and clinical outcomes, is not typically included in claims data. Evaluate algorithms used in a dataset of cancer patients to identify patients with metastatic breast (BC), lung (LC), or colorectal (CRC) cancer using claims data. Clinical data on BC, LC, or CRC patients (between January 1, 2007 and March 31, 2010) were linked to a health care claims database. Inclusion required health plan enrollment ≥3 months before initial cancer diagnosis date. Algorithms were used in the claims database to identify patients' disease status, which was compared with physician-reported metastases. Generic and tumor-specific algorithms were evaluated using ICD-9 codes, varying diagnosis time frames, and including/excluding other tumors. Positive and negative predictive values, sensitivity, and specificity were assessed. The linked databases included 14,480 patients; of whom, 32%, 17%, and 14.2% had metastatic BC, LC, and CRC, respectively, at diagnosis and met inclusion criteria. Nontumor-specific algorithms had lower specificity than tumor-specific algorithms. Tumor-specific algorithms' sensitivity and specificity were 53% and 99% for BC, 55% and 85% for LC, and 59% and 98% for CRC, respectively. Algorithms to distinguish metastatic BC, LC, and CRC from locally advanced disease should use tumor-specific primary cancer codes with 2 claims for the specific primary cancer >30-42 days apart to reduce misclassification. These performed best overall in specificity, positive predictive values, and overall accuracy to identify metastatic cancer in a health care claims database.
Ajrouche, Aya; Estellat, Candice; De Rycke, Yann; Tubach, Florence
2017-08-01
Administrative databases are increasingly being used in cancer observational studies. Identifying incident cancer in these databases is crucial. This study aimed to develop algorithms to estimate cancer incidence by using health administrative databases and to examine the accuracy of the algorithms in terms of national cancer incidence rates estimated from registries. We identified a cohort of 463 033 participants on 1 January 2012 in the Echantillon Généraliste des Bénéficiaires (EGB; a representative sample of the French healthcare insurance system). The EGB contains data on long-term chronic disease (LTD) status, reimbursed outpatient treatments and procedures, and hospitalizations (including discharge diagnoses, and costly medical procedures and drugs). After excluding cases of prevalent cancer, we applied 15 algorithms to estimate the cancer incidence rates separately for men and women in 2012 and compared them to the national cancer incidence rates estimated from French registries by indirect age and sex standardization. The most accurate algorithm for men combined information from LTD status, outpatient anticancer drugs, radiotherapy sessions and primary or related discharge diagnosis of cancer, although it underestimated the cancer incidence (standardized incidence ratio (SIR) 0.85 [0.80-0.90]). For women, the best algorithm used the same definition of the algorithm for men but restricted hospital discharge to only primary or related diagnosis with an additional inpatient procedure or drug reimbursement related to cancer and gave comparable estimates to those from registries (SIR 1.00 [0.94-1.06]). The algorithms proposed could be used for cancer incidence monitoring and for future etiological cancer studies involving French healthcare databases. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation; based on these, a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, the primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertaining ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, with its performance analyzed. This simplified method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration labs, owing to its simplified algorithm and low hardware requirements.
The long-term Global LAnd Surface Satellite (GLASS) product suite and applications
NASA Astrophysics Data System (ADS)
Liang, S.
2015-12-01
Our Earth's environment is experiencing rapid changes due to natural variability and human activities. To monitor, understand and predict environmental changes to meet economic, social and environmental needs, use of long-term high-quality satellite data products is critical. The Global LAnd Surface Satellite (GLASS) product suite, generated at Beijing Normal University, currently includes 12 products, including leaf area index (LAI), broadband shortwave albedo, broadband longwave emissivity, downwelling shortwave radiation and photosynthetically active radiation, land surface skin temperature, longwave net radiation, daytime all-wave net radiation, fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR), fraction of green vegetation coverage, gross primary productivity (GPP), and evapotranspiration (ET). Most products span 1981-2014. The algorithms for producing these products have been published in the top remote sensing related journals and books. More and more applications have been reported in the scientific literature. The GLASS products are freely available at the Center for Global Change Data Processing and Analysis of Beijing Normal University (http://www.bnu-datacenter.com/), and the University of Maryland Global Land Cover Facility (http://glcf.umd.edu). After briefly introducing the basic characteristics of GLASS products, we will present some applications on the long-term environmental changes detected from GLASS products at both global and local scales. Detailed analysis of regional hotspots, such as Greenland, the Tibetan plateau, and northern China, will be emphasized, where environmental changes have been mainly associated with climate warming, drought, land-atmosphere interactions, and human activities.
Near-Real-Time Detection and Monitoring of Intense Pyroconvection from Geostationary Satellites
NASA Astrophysics Data System (ADS)
Peterson, D. A.; Fromm, M. D.; Hyer, E. J.; Surratt, M. L.; Solbrig, J. E.; Campbell, J. R.
2016-12-01
Intense fire-triggered thunderstorms, known as pyrocumulonimbus (or pyroCb), can alter fire behavior, influence smoke plume trajectories, and hinder fire suppression efforts. PyroCb are also known for injecting a significant quantity of aerosol mass into the upper troposphere and lower stratosphere (UTLS). Near-real-time (NRT) detection and monitoring of pyroCb is highly desirable for a variety of forecasting and research applications. The Naval Research Laboratory (NRL) recently developed the first automated NRT pyroCb detection algorithm for geostationary satellite sensors. The algorithm uses multispectral infrared observations to isolate deep convective clouds with the distinct microphysical signal of pyroCb. Application of this algorithm to 88 intense wildfires observed during the 2013 fire season in western North America resulted in detection of individual intense events, pyroCb embedded within traditional convection, and multiple, short-lived pulses of activity. Comparisons with a community inventory indicate that this algorithm captures the majority of pyroCb. The primary limitation of the current system is that pyroCb anvils can be small relative to satellite pixel size, especially in regions with large viewing angles. The algorithm is also sensitive to some false positives from traditional convection that either ingests smoke or exhibits extreme updraft velocities. This algorithm has been automated using the GeoIPS processing system developed at NRL, which produces a variety of imagery products and statistical output for rapid analysis of potential pyroCb events. NRT application of this algorithm has been extended to the majority of regions worldwide known to have a high frequency of pyroCb occurrence. This involves a constellation comprised of GOES-East, GOES-West, and Himawari-8. Imagery is posted immediately to an NRL-maintained web page. Alerts are generated by the system and disseminated via email. This detection system also has potential to serve as a data source for other NRT environmental monitoring systems. While the current geostationary constellation has several important limitations, the next generation of geostationary sensors will offer significant advantages for achieving the goal of global NRT pyroCb detection.
NASA Astrophysics Data System (ADS)
Tilstone, Gavin H.; Lange, Priscila K.; Misra, Ankita; Brewin, Robert J. W.; Cain, Terry
2017-11-01
Micro-phytoplankton is the >20 μm component of the phytoplankton community and plays a major role in the global ocean carbon pump, through the sequestering of anthropogenic CO2 and export of organic carbon to the deep ocean. To evaluate the global impact of the marine carbon cycle, quantification of micro-phytoplankton primary production is paramount. In this paper we use both in situ data and a satellite model to estimate the contribution of micro-phytoplankton to total primary production (PP) in the Atlantic Ocean. From 1995 to 2013, 940 measurements of primary production were made at 258 sites on 23 Atlantic Meridional Transect Cruises from the United Kingdom to the South African or Patagonian Shelf. Micro-phytoplankton primary production was highest in the South Subtropical Convergence (SSTC ∼ 409 ± 720 mg C m-2 d-1), where it contributed between 38 % of the total PP, and was lowest in the North Atlantic Gyre province (NATL ∼ 37 ± 27 mg C m-2 d-1), where it represented 18 % of the total PP. Size-fractionated photosynthesis-irradiance (PE) parameters measured on AMT22 and 23 showed that micro-phytoplankton had the highest maximum photosynthetic rate (PmB) (∼5 mg C (mg Chl a)-1 h-1) followed by nano- (∼4 mg C (mg Chl a)-1 h-1) and pico- (∼2 mg C (mg Chl a)-1 h-1). The highest PmB was recorded in the NATL and lowest in the North Atlantic Drift Region (NADR) and South Atlantic Gyre (SATL). The PE parameters were used to parameterise a remote sensing model of size-fractionated PP, which explained 84 % of the micro-phytoplankton in situ PP variability with a regression slope close to 1. The model was applied to the SeaWiFS time series from 1998-2010, which illustrated that micro-phytoplankton PP remained constant in the NADR, NATL, Canary Current Coastal upwelling (CNRY), Eastern Tropical Atlantic (ETRA), Western Tropical Atlantic (WTRA) and SATL, but showed a gradual increase in the Benguela Upwelling zone (BENG) and South Subtropical Convergence (SSTC). The mean annual carbon fixation of micro-phytoplankton was highest in the CNRY (∼140 g C m-2 yr-1), and lowest in the SATL (27 g C m-2 yr-1). A Thorium-234 based export production (ThExP) algorithm was applied to estimates of total PP in each province. There was a strong coupling between micro-phytoplankton PP and ThExP in the NADR and SSTC where between 23 and 39 % of micro-phytoplankton PP contributed to ThExP. The lowest contribution by micro-phytoplankton to ThExP was in the ETRA and WTRA which were 15 and 21 % respectively. The results suggest that micro-phytoplankton PP in the SSTC is the most efficient export system and the ETRA is the least efficient in the Atlantic Ocean.
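As a small illustration of how PE parameters such as those reported above convert chlorophyll and irradiance into a production rate, the sketch below uses the common exponential photosynthesis-irradiance form without photoinhibition; the function and the sample numbers are illustrative, not the model parameterised in the paper.

    import numpy as np

    def primary_production(chl, irradiance, pmb, alpha):
        """Chlorophyll-specific production scaled to biomass.
        chl: mg Chl a m-3; irradiance: E in the PE units used for alpha;
        pmb: mg C (mg Chl a)-1 h-1; alpha: initial slope of the PE curve."""
        pb = pmb * (1.0 - np.exp(-alpha * irradiance / pmb))   # mg C (mg Chl a)-1 h-1
        return chl * pb                                        # mg C m-3 h-1

    # hypothetical micro-phytoplankton values loosely in the ranges quoted above
    print(primary_production(chl=0.8, irradiance=400.0, pmb=5.0, alpha=0.03))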
Lee, Theresa M; Tu, Karen; Wing, Laura L; Gershon, Andrea S
2017-05-15
Little is known about using electronic medical records to identify patients with chronic obstructive pulmonary disease to improve quality of care. Our objective was to develop electronic medical record algorithms that can accurately identify patients with obstructive pulmonary disease. A retrospective chart abstraction study was conducted on data from the Electronic Medical Record Administrative data Linked Database (EMRALD ® ) housed at the Institute for Clinical Evaluative Sciences. Abstracted charts provided the reference standard based on available physician-diagnoses, chronic obstructive pulmonary disease-specific medications, smoking history and pulmonary function testing. Chronic obstructive pulmonary disease electronic medical record algorithms using combinations of terminology in the cumulative patient profile (CPP; problem list/past medical history), physician billing codes (chronic bronchitis/emphysema/other chronic obstructive pulmonary disease), and prescriptions, were tested against the reference standard. Sensitivity, specificity, and positive/negative predictive values (PPV/NPV) were calculated. There were 364 patients with chronic obstructive pulmonary disease identified in a 5889 randomly sampled cohort aged ≥ 35 years (prevalence = 6.2%). The electronic medical record algorithm consisting of ≥ 3 physician billing codes for chronic obstructive pulmonary disease per year; documentation in the CPP; tiotropium prescription; or ipratropium (or its formulations) prescription and a chronic obstructive pulmonary disease billing code had sensitivity of 76.9% (95% CI:72.2-81.2), specificity of 99.7% (99.5-99.8), PPV of 93.6% (90.3-96.1), and NPV of 98.5% (98.1-98.8). Electronic medical record algorithms can accurately identify patients with chronic obstructive pulmonary disease in primary care records. They can be used to enable further studies in practice patterns and chronic obstructive pulmonary disease management in primary care. NOVEL ALGORITHM SEARCH TECHNIQUE: Researchers develop an algorithm that can accurately search through electronic health records to find patients with chronic lung disease. Mining population-wide data for information on patients diagnosed and treated with chronic obstructive pulmonary disease (COPD) in primary care could help inform future healthcare and spending practices. Theresa Lee at the University of Toronto, Canada, and colleagues used an algorithm to search electronic medical records and identify patients with COPD from doctors' notes, prescriptions and symptom histories. They carefully adjusted the algorithm to improve sensitivity and predictive value by adding details such as specific medications, physician codes related to COPD, and different combinations of terminology in doctors' notes. The team accurately identified 364 patients with COPD in a randomly-selected cohort of 5889 people. Their results suggest opportunities for broader, informative studies of COPD in wider populations.
The algorithms for rational spline interpolation of surfaces
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1986-01-01
Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.
Adaptive jammer nulling in EHF communications satellites
NASA Astrophysics Data System (ADS)
Bhagwan, Jai; Kavanagh, Stephen; Yen, J. L.
A preliminary investigation is reviewed concerning adaptive null steering multibeam uplink receiving system concepts for future extremely high frequency communications satellites. Primary alternatives in the design of the uplink antenna, the multibeam adaptive nulling receiver, and the processing algorithm and optimization criterion are discussed. The alternatives are phased array, lens or reflector antennas, nulling at radio frequency or an intermediate frequency, wideband versus narrowband nulling, and various adaptive nulling algorithms. A primary determinant of the hardware complexity is the receiving system architecture, which is described for the alternative antenna and nulling concepts. The final concept chosen will be influenced by the nulling performance requirements, cost, and technological readiness.
Study on the Algorithm of Judgment Matrix in Analytic Hierarchy Process
NASA Astrophysics Data System (ADS)
Lu, Zhiyong; Qin, Futong; Jin, Yican
2017-10-01
A new algorithm is proposed for the non-consistent judgment matrix in AHP. A primary judgment matrix is first generated by pre-ordering the targeted factor set, and a compared matrix is built through the top integral function. A relative error matrix is then created by comparing the compared matrix with the primary judgment matrix, which is regulated step by step under the control of the relative error matrix and the dissimilarity degree of the matrix. Lastly, the targeted judgment matrix is generated so as to satisfy the requirement of consistency with the least dissimilarity degree. The feasibility and validity of the proposed method are verified by simulation results.
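For context, the consistency requirement that such judgment-matrix algorithms aim to satisfy is conventionally checked with Saaty's consistency ratio, sketched below; this is the standard test, not the new algorithm proposed in the paper.

    import numpy as np

    # Saaty's random consistency index by matrix order
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

    def consistency_ratio(A):
        """A: positive reciprocal pairwise comparison matrix (n x n)."""
        n = A.shape[0]
        lam_max = np.max(np.linalg.eigvals(A).real)
        ci = (lam_max - n) / (n - 1)
        return ci / RI[n]                    # CR < 0.1 is conventionally acceptable

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(consistency_ratio(A))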
John, Ann; McGregor, Joanne; Fone, David; Dunstan, Frank; Cornish, Rosie; Lyons, Ronan A; Lloyd, Keith R
2016-03-15
The robustness of epidemiological research using routinely collected primary care electronic data to support policy and practice for the common mental disorders (CMD), anxiety and depression, would be greatly enhanced by appropriate validation of diagnostic codes and algorithms for data extraction. We aimed to create a robust research platform for CMD using population-based, routinely collected primary care electronic data. We developed a set of Read code lists (diagnosis, symptoms, treatments) for the identification of anxiety and depression in the General Practice Database (GPD) within the Secure Anonymised Information Linkage Databank at Swansea University, and assessed 12 algorithms for Read codes to define cases according to various criteria. Annual incidence rates were calculated per 1000 person years at risk (PYAR) to assess recording practice for these CMD between January 1st 2000 and December 31st 2009. We anonymously linked the 2799 MHI-5 Caerphilly Health and Social Needs Survey (CHSNS) respondents aged 18 to 74 years to their routinely collected GP data in SAIL. We estimated the sensitivity, specificity and positive predictive value of the various algorithms using the MHI-5 as the gold standard. The incidence of combined depression/anxiety diagnoses remained stable over the ten-year period in a population of over 500,000 but symptoms increased from 6.5 to 20.7 per 1000 PYAR. A 'historical' GP diagnosis for depression/anxiety currently treated plus a current diagnosis (treated or untreated) resulted in a specificity of 0.96, sensitivity 0.29 and PPV 0.76. Adding current symptom codes improved sensitivity (0.32) with a marginal effect on specificity (0.95) and PPV (0.74). We have developed an algorithm with a high specificity and PPV of detecting cases of anxiety and depression from routine GP data that incorporates symptom codes to reflect GP coding behaviour. We have demonstrated that using diagnosis and current treatment alone to identify cases for depression and anxiety using routinely collected primary care data will miss a number of true cases given changes in GP recording behaviour. The Read code lists plus the developed algorithms will be applicable to other routinely collected primary care datasets, creating a platform for future e-cohort research into these conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
Within this paper, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. Additionally, the solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
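The key structural observation, that MK is self-adjoint in the K-inner product, can be checked numerically. A minimal sketch with small random symmetric positive definite stand-ins for M and K (not the TDDFT matrices or the authors' solvers); the Cholesky-based diagonalization is only one convenient way to expose the real spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Toy symmetric positive definite matrices standing in for the M and K blocks
# of the linear response eigenvalue problem (illustrative only).
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); K = B @ B.T + n * np.eye(n)

def k_inner(x, y):
    return x @ K @ y

# MK is self-adjoint in the K-inner product: <MKx, y>_K == <x, MKy>_K.
x, y = rng.standard_normal(n), rng.standard_normal(n)
print(np.isclose(k_inner(M @ K @ x, y), k_inner(x, M @ K @ y)))

# The product eigenvalue problem (MK) u = lambda u can therefore be handled with
# symmetric machinery, e.g. via the congruent form below (same eigenvalues as MK).
L = np.linalg.cholesky(K)                  # K = L L^T
print(np.sort(np.linalg.eigvalsh(L.T @ M @ L)))
print(np.sort(np.linalg.eigvals(M @ K).real))
```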
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(x). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
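As a rough illustration of the idea, the sketch below adapts an independent (product-form) Gaussian proposal to the walk history by moment matching. The paper's estimator minimizes the KL distance to pi, for which this moment-matched proposal is only a stand-in, and the toy correlated-Gaussian target, adaptation schedule, and step counts are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    # Toy target: a correlated 2-D Gaussian (stand-in for pi).
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * x @ cov_inv @ x

def adaptive_product_mh(n_steps=20000, adapt_every=500):
    d = 2
    x = np.zeros(d)
    mu, sig = np.zeros(d), np.ones(d)   # parameters of the product (independent Gaussian) proposal
    samples = []
    for t in range(n_steps):
        x_prop = mu + sig * rng.standard_normal(d)
        # Independence-sampler acceptance ratio: pi(x') q(x) / (pi(x) q(x')).
        log_q = lambda z: -0.5 * np.sum(((z - mu) / sig) ** 2)
        log_a = log_pi(x_prop) + log_q(x) - log_pi(x) - log_q(x_prop)
        if np.log(rng.random()) < log_a:
            x = x_prop
        samples.append(x.copy())
        # Periodically refit the product proposal to the walk history
        # (a moment-matching stand-in for minimising the KL distance to pi).
        if (t + 1) % adapt_every == 0:
            hist = np.array(samples)
            mu, sig = hist.mean(axis=0), hist.std(axis=0) + 1e-3
    return np.array(samples)

chain = adaptive_product_mh()
print(chain.mean(axis=0))
print(np.cov(chain.T))   # should approach the target covariance
```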
Ciaccio, Edward J; Micheli-Tzanakou, Evangelia
2007-07-01
Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. The update of the weight w was based upon the gradient term of the steepest descent equation [see text], where the error ε is the difference between the primary and weighted reference signals. The gradient ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε²·Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε²·Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimum. Using a sharply pulsatile cardiac electrogram signal with added noise such that the SNR was 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
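A minimal sketch of the finite-difference flavour of such an ANC update, in the spirit of the Parallel Comparison algorithm: the squared error is evaluated at w + Δw and w − Δw at each sample and the weight is stepped downhill. The signal model, step sizes, and convergence behaviour are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: primary channel = clean signal + scaled reference noise.
n = 2000
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 200)        # stand-in for the cardiac signal
ref = rng.standard_normal(n)               # reference (noise-only) channel
primary = clean + 0.7 * ref                # the optimal weight is 0.7

def pc_anc(primary, ref, w0=0.0, dw=0.05, mu=0.02):
    """Parallel-comparison style ANC: the squared error is evaluated at
    w + dw and w - dw at each sample, giving a finite-difference gradient
    estimate that drives a steepest-descent update of the weight."""
    w = w0
    out = np.empty_like(primary)
    w_hist = np.empty_like(primary)
    for k in range(len(primary)):
        e_plus = (primary[k] - (w + dw) * ref[k]) ** 2
        e_minus = (primary[k] - (w - dw) * ref[k]) ** 2
        grad = (e_plus - e_minus) / (2 * dw)   # finite-difference gradient of the squared error
        w -= mu * grad                         # steepest-descent step
        out[k] = primary[k] - w * ref[k]       # noise-cancelled output sample
        w_hist[k] = w
    return out, w_hist

restored, w_hist = pc_anc(primary, ref)
print("mean weight over last 500 samples:", w_hist[-500:].mean())  # settles near 0.7
```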
Consistency of two global MODIS aerosol products over ocean on Terra and Aqua CERES SSF datasets
NASA Astrophysics Data System (ADS)
Ignatov, Alexander; Minnis, Patrick; Wielicki, Bruce; Loeb, Norman G.; Remer, Lorraine A.; Kaufman, Yoram J.; Miller, Walter F.; Sun-Mack, Sunny; Laszlo, Istvan; Geier, Erika B.
2004-12-01
MODIS aerosol retrievals over ocean from the Terra and Aqua platforms are available from the Clouds and the Earth's Radiant Energy System (CERES) Single Scanner Footprint (SSF) datasets generated at NASA Langley Research Center (LaRC). Two aerosol products are reported side by side. The primary M product is generated by subsetting and remapping the multi-spectral (0.44 - 2.1 μm) MOD04 aerosols onto CERES footprints. MOD04 processing uses cloud screening and aerosol algorithms developed by the MODIS science team. The secondary (AVHRR-like) A product is generated in only two MODIS bands: 1 and 6 on Terra, and 1 and 7 on Aqua. The A processing uses the NASA/LaRC cloud-screening and NOAA/NESDIS single-channel aerosol algorithm. The M and A products have been documented elsewhere and preliminarily compared using two weeks of global Terra CERES SSF (Edition 1A) data in December 2000 and June 2001. In this study, the M and A aerosol optical depths (AOD) in MODIS band 1 (0.64 μm), τ1M and τ1A, are further checked for cross-platform consistency using 9 days of global Terra CERES SSF (Edition 2A) and Aqua CERES SSF (Edition 1A) data from 13 - 21 October 2002.
Global Patterns in Human Consumption of Net Primary Production
NASA Technical Reports Server (NTRS)
Imhoff, Marc L.; Bounoua, Lahouari; Ricketts, Taylor; Loucks, Colby; Harriss, Robert; Lawrence, William T.
2004-01-01
The human population and its consumption profoundly affect the Earth's ecosystems. A particularly compelling measure of humanity's cumulative impact is the fraction of the planet's net primary production that we appropriate for our own use. Net primary production, the net amount of solar energy converted to plant organic matter through photosynthesis, can be measured in units of elemental carbon and represents the primary food energy source for the world's ecosystems. Human appropriation of net primary production, apart from leaving less for other species to use, alters the composition of the atmosphere, levels of biodiversity, flows within food webs and the provision of important ecosystem services. Here we compare the amount of net primary production required by humans to the total amount generated on the landscape. We then derive a spatial balance sheet of net primary production supply and demand for the world. We show that human appropriation of net primary production varies spatially from almost zero to many times the local primary production. These analyses reveal the uneven footprint of human consumption and related environmental impacts, indicate the degree to which human populations depend on net primary production "imports" and suggest policy options for slowing future growth of human appropriation of net primary production.
Information-based management mode based on value network analysis for livestock enterprises
NASA Astrophysics Data System (ADS)
Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng
2018-01-01
With the development of computer and IT technologies, enterprise management has gradually become information-based. Moreover, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management. In addition, low levels of efficiency result in increasing production costs. This paper adopts Struts2 to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification system that addresses multiple-tag anti-collision through a dynamic grouping ALOHA algorithm. This algorithm builds on the existing ALOHA algorithm with an improved dynamic grouping scheme and is characterised by a high throughput rate. The new algorithm can reach a throughput 42% higher than that of the general ALOHA algorithm. As the number of tags changes, the system throughput remains relatively stable.
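The throughput gain from dynamic grouping comes from keeping the frame (or group) size matched to the tag population. A minimal sketch of plain framed slotted ALOHA, which the paper's dynamic grouping variant builds on; the tag counts, frame sizes, and trial counts are illustrative:

```python
import random

def framed_aloha_round(n_tags, frame_size, rng=random.Random(0)):
    """One read cycle of framed slotted ALOHA: each tag answers in a random
    slot; only slots with exactly one reply are successful reads."""
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[rng.randrange(frame_size)] += 1
    successes = sum(1 for s in slots if s == 1)
    return successes / frame_size          # per-slot throughput

# Throughput peaks (around 1/e ~ 0.368) when the frame size matches the tag
# count, which is what dynamic grouping / frame adjustment tries to achieve.
for frame in (32, 64, 128, 256):
    thr = sum(framed_aloha_round(128, frame, random.Random(i)) for i in range(200)) / 200
    print(frame, round(thr, 3))
```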
Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument
NASA Technical Reports Server (NTRS)
Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2005-01-01
The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of photosynthetically active radiation absorbed by vegetation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative transfer-based main algorithm and the vegetation index based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between reflectances modeled by the algorithm and MODIS measurements. Therefore, the Look-Up-Tables accompanying the algorithm were revised and implemented in Collection 4 processing. The main algorithm with the revised Look-Up-Tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution as they are generated from surface reflectances with high uncertainties.
NASA Astrophysics Data System (ADS)
Guo, Zhan; Yan, Xuefeng
2018-04-01
Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
NASA Technical Reports Server (NTRS)
Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)
2000-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. Both the radiance and the reflectance factor software products for the reflective solar bands are described. The product file format is summarized, and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.
Technical note: Intercomparison of three AATSR Level 2 (L2) AOD products over China
NASA Astrophysics Data System (ADS)
Che, Yahui; Xue, Yong; Mei, Linlu; Guang, Jie; She, Lu; Guo, Jianping; Hu, Yincui; Xu, Hui; He, Xingwei; Di, Aojie; Fan, Cheng
2016-08-01
One of the four main focus areas of the PEEX initiative is to establish and sustain long-term, continuous, and comprehensive ground-based, airborne, and seaborne observation infrastructure together with satellite data. The Advanced Along-Track Scanning Radiometer (AATSR) aboard ENVISAT is used to observe the Earth in dual view. The AATSR data can be used to retrieve aerosol optical depth (AOD) over both land and ocean, which is an important parameter in the characterization of aerosol properties. In recent years, aerosol retrieval algorithms have been developed both over land and ocean, taking advantage of the features of the dual view, which can help eliminate the contribution of Earth's surface to top-of-atmosphere (TOA) reflectance. The Aerosol_cci project, as a part of the Climate Change Initiative (CCI), provides users with three AOD retrieval algorithms for AATSR data, including the Swansea algorithm (SU), the ATSR-2/AATSR dual-view aerosol retrieval algorithm (ADV), and the Oxford-RAL Retrieval of Aerosol and Cloud algorithm (ORAC). The validation team of the Aerosol_cci project has validated AOD (both Level 2 and Level 3 products) and AE (Ångström Exponent) (Level 2 product only) against the AERONET data in a round-robin evaluation using the validation tool of the AeroCOM (Aerosol Comparison between Observations and Models) project. For the purpose of evaluating the different performances of these three algorithms in calculating AODs over mainland China, we introduce ground-based data from CARSNET (China Aerosol Remote Sensing Network), which was designed for aerosol observations in China. Because China is vast in territory and has great differences in terms of land surfaces, the combination of the AERONET and CARSNET data can validate the L2 AOD products more comprehensively. The validation results show different performances of these products in 2007, 2008, and 2010. The SU algorithm performs very well over sites with different surface conditions in mainland China from March to October, but it slightly underestimates AOD over barren or sparsely vegetated surfaces in western China, with mean bias error (MBE) ranging from 0.05 to 0.10. The ADV product has similar precision, with a low root mean square error (RMSE) smaller than 0.2 over most sites, and the same error distribution as the SU product. The main limits of the ADV algorithm are underestimation and applicability; underestimation is particularly obvious over the sites of Datong, Lanzhou, and Urumchi, where the dominant land cover is grassland, with an MBE larger than 0.2, and the main aerosol sources are coal combustion and dust. The ORAC algorithm has the ability to retrieve AOD over different ranges, including high AOD (larger than 1.0); however, the stability decreases significantly with increasing AOD, especially when AOD > 1.0. In addition, the ORAC product is consistent with the CARSNET product in winter (December, January, and February), whereas other validation results lack matches during winter.
First-Principle Construction of U(1) Symmetric Matrix Product States
NASA Astrophysics Data System (ADS)
Rakov, Mykhailo V.
2018-07-01
The algorithm to calculate the sets of symmetry sectors for virtual indices of U(1) symmetric matrix product states (MPS) is described. The principal differences between open (OBC) and periodic (PBC) boundary conditions are stressed, and the extension of PBC MPS algorithm to projected entangled pair states is outlined.
Ecological Assimilation of Land and Climate Observations - the EALCO model
NASA Astrophysics Data System (ADS)
Wang, S.; Zhang, Y.; Trishchenko, A.
2004-05-01
Ecosystems are intrinsically dynamic and interact with climate at a highly integrated level. Climate variables are the main driving factors controlling the ecosystem physical, physiological, and biogeochemical processes, including energy balance, water balance, photosynthesis, respiration, and nutrient cycling. On the other hand, ecosystems function as an integrated whole and feed back on the climate system through their control on surface radiation balance, energy partitioning, and greenhouse gas exchange. To improve our capability in climate change impact assessment, a comprehensive ecosystem model is required to address the many interactions between climate change and ecosystems. In addition, different ecosystems can have very different responses to climate change and its variation. To provide more scientific support for ecosystem impact assessment at the national scale, it is imperative that ecosystem models have the capability of assimilating large-scale geospatial information, including satellite observations, GIS datasets, and climate model outputs or reanalyses. The EALCO model (Ecological Assimilation of Land and Climate Observations) is developed for such purposes. EALCO includes the comprehensive interactions among ecosystem processes and climate, and assimilates a variety of remote sensing products and GIS databases. It provides both national and local scale model outputs for ecosystem responses to climate change, including radiation and energy balances, water conditions and hydrological cycles, carbon sequestration and greenhouse gas exchange, and nutrient (N) cycling. These results form the foundation for the assessment of climate change impacts on ecosystems, their services, and adaptation options. In this poster, the main algorithms for the radiation, energy, water, carbon, and nitrogen simulations are diagrammed. Sample input data layers at the Canada national scale are illustrated. Model outputs, including the Canada-wide spatial distributions of net radiation, evapotranspiration, gross primary production, net primary production, and net ecosystem production, are discussed.
An Optimization Study of Hot Stamping Operation
NASA Astrophysics Data System (ADS)
Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron
2010-06-01
In the present study, 3-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to optimization of the simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM technology and design optimization using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of the benchmark example. The SHERPA algorithm is shown to be far superior to the GA (Genetic Algorithm) in terms of efficiency, requiring about one-seventh of the calculation time of the GA. The SHERPA algorithm showed high performance on a large-scale problem with a complicated design space and long calculation times.
A generalized algorithm to design finite field normal basis multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1986-01-01
Finite field arithmetic logic is central in the implementation of some error-correcting coders and some cryptographic devices. There is a need for good multiplication algorithms which can be easily realized. Massey and Omura recently developed a new multiplication algorithm for finite fields based on a normal basis representation. Using the normal basis representation, the design of the finite field multiplier is simple and regular. The fundamental design of the Massey-Omura multiplier is based on a design of a product function. In this article, a generalized algorithm to locate a normal basis in a field is first presented. Using this normal basis, an algorithm to construct the product function is then developed. This design does not depend on particular characteristics of the generator polynomial of the field.
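As a small illustration of what "locating a normal basis" means, the sketch below works in GF(2^4) with the irreducible polynomial x^4 + x + 1 and tests which elements β have linearly independent conjugates β, β², β⁴, β⁸. It is a brute-force toy for one small field, not the generalized algorithm of the article:

```python
MOD = 0b10011          # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply two GF(2^4) elements represented as 4-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:    # reduce modulo x^4 + x + 1
            a ^= MOD
    return r

def is_normal_element(beta):
    """True if beta, beta^2, beta^4, beta^8 form a basis of GF(2^4) over GF(2)."""
    conj = []
    x = beta
    for _ in range(4):
        conj.append(x)
        x = gf_mul(x, x)           # Frobenius map: squaring
    # Linear independence over GF(2): Gaussian elimination on the 4 bit-vectors.
    rows, rank = conj[:], 0
    for bit in (8, 4, 2, 1):
        pivot = next((i for i in range(rank, 4) if rows[i] & bit), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(4):
            if i != rank and rows[i] & bit:
                rows[i] ^= rows[rank]
        rank += 1
    return rank == 4

print([e for e in range(1, 16) if is_normal_element(e)])   # normal basis generators
```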
Minimizing inner product data dependencies in conjugate gradient iteration
NASA Technical Reports Server (NTRS)
Vanrosendale, J.
1983-01-01
The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N), if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start up, the new algorithm can perform a conjugate gradient iteration in time c*log(log(N)).
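For reference, the two inner products that create these data dependencies appear once per iteration in textbook conjugate gradient. A minimal sketch of standard CG (not the restructured variant), with the inner products marked:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Textbook CG. The two inner products per iteration (for alpha and beta)
    are the global reductions whose latency the restructured algorithm targets."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r                       # inner product
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # inner product
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r                   # inner product for the next iteration
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system.
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
```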
Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development
NASA Astrophysics Data System (ADS)
Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.
2012-04-01
The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, will be developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, the HDF5 interface, and minimum error handling. The pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "baseline code", was completed in January 2012. The baseline code includes the main module and eight basic sub-modules (Preparation module, Vertical Profile module, Classification module, SRT module, DSD module, Solver module, Input module, and Output module). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. The pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm inherits the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007 and of the near-real-time version operating at JAXA since 2007. The baseline code used the current operational GSMaP code (V5.222), and its development was completed in January 2012. The pre-launch code will be developed by autumn 2012, including an update of the databases for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
Technical report on parallel algorithms for Groebner-basis reduction, produced under the Productivity Engineering in the UNIX Environment project.
NASA Astrophysics Data System (ADS)
Liggio, John; Moussa, Samar G.; Wentzell, Jeremy; Darlington, Andrea; Liu, Peter; Leithead, Amy; Hayden, Katherine; O'Brien, Jason; Mittermeier, Richard L.; Staebler, Ralf; Wolde, Mengistu; Li, Shao-Meng
2017-07-01
Organic acids are known to be emitted from combustion processes and are key photochemical products of biogenic and anthropogenic precursors. Despite their multiple environmental impacts, such as on acid deposition and human-ecosystem health, little is known regarding their emission magnitudes or detailed chemical formation mechanisms. In the current work, airborne measurements of 18 gas-phase low-molecular-weight organic acids were made in the summer of 2013 over the oil sands region of Alberta, Canada, an area of intense unconventional oil extraction. The data from these measurements were used in conjunction with emission retrieval algorithms to derive the total and speciated primary organic acid emission rates, as well as secondary formation rates downwind of oil sands operations. The results of the analysis indicate that approximately 12 t day-1 of low-molecular-weight organic acids, dominated by C1-C5 acids, were emitted directly from off-road diesel vehicles within open pit mines. Although there are no specific reporting requirements for primary organic acids, the measured emissions were similar in magnitude to primary oxygenated hydrocarbon emissions, for which there are reporting thresholds, measured previously ( ≈ 20 t day-1). Conversely, photochemical production of gaseous organic acids significantly exceeded the primary sources, with formation rates of up to ≈ 184 t day-1 downwind of the oil sands facilities. The formation and evolution of organic acids from a Lagrangian flight were modelled with a box model, incorporating a detailed hydrocarbon reaction mechanism extracted from the Master Chemical Mechanism (v3.3). Despite evidence of significant secondary organic acid formation, the explicit chemical box model largely underestimated their formation in the oil sands plumes, accounting for 39, 46, 26, and 23 % of the measured formic, acetic, acrylic, and propionic acids respectively and with little contributions from biogenic VOC precursors. The model results, together with an examination of the carbon mass balance between the organic acids formed and the primary VOCs emitted from oil sands operations, suggest the existence of significant missing secondary sources and precursor emissions related to oil sands and/or an incomplete mechanistic and quantitative understanding of how they are processed in the atmosphere.
Fast Inference with Min-Sum Matrix Product.
Felzenszwalb, Pedro F; McAuley, Julian J
2011-12-01
The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
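A minimal sketch of the min-sum (tropical) matrix product itself, in its naive O(n^3) form; the paper's contribution is a faster expected-time algorithm for random inputs, which is not reproduced here:

```python
import numpy as np

def min_sum_product(A, B):
    """Min-sum (tropical) product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    n, m = A.shape[0], B.shape[1]
    C = np.empty((n, m))
    for i in range(n):
        # Broadcast row i of A against all of B and take column-wise minima.
        C[i] = np.min(A[i][:, None] + B, axis=0)
    return C

rng = np.random.default_rng(4)
A, B = rng.random((5, 5)), rng.random((5, 5))
print(min_sum_product(A, B))
```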
Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm
NASA Technical Reports Server (NTRS)
Riggs, George; Hall, Dorothy K.
2012-01-01
The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).
40 CFR 63.11166 - What General Provisions apply to primary beryllium production facilities?
Code of Federal Regulations, 2010 CFR
2010-07-01
Part 63, Primary Nonferrous Metals Area Sources - Zinc, Cadmium, and Beryllium; Primary Beryllium Production Facilities. Section 63.11166 (Protection of Environment) specifies which General Provisions apply to owners and operators of primary beryllium production facilities.
Tracking Trends in Fractional Forest Cover Change using Long Term Data from AVHRR and MODIS
NASA Astrophysics Data System (ADS)
Kim, D. H.; DiMiceli, C.; Sohlberg, R. A.; Hansen, M.; Carroll, M.; Kelly, M.; Townshend, J. R.
2014-12-01
Tree cover affects terrestrial energy and water exchanges, photosynthesis and transpiration, net primary production, and carbon and nutrient fluxes. Accurate and long-term continuous observation of tree cover change is critical for the study of gradual ecosystem change. Tree cover is most commonly inferred from categorical maps, which may inadequately represent within-class heterogeneity for many analyses. Alternatively, Vegetation Continuous Fields data measure fractions or proportions of pixel area. Recent developments in remote sensing data processing and cross-sensor calibration techniques have enabled continuous, long-term observations such as the Land Long-Term Data Records. Such data products and their surface reflectance data have enhanced the possibilities for long-term Vegetation Continuous Fields data, thus enabling the estimation of long-term trends in fractional forest cover change. In this presentation, we will summarize the progress in algorithm development, including automation of training selection for deciduous and evergreen forest, the preliminary results, and future applications relating trends in fractional forest cover change to environmental change.
An Overview of the Characterization of the Space Launch Vehicle Aerodynamic Environments
NASA Technical Reports Server (NTRS)
Blevins, John A.; Campbell, John R., Jr.; Bennett, David W.; Rausch, Russ D.; Gomez, Reynaldo J.; Kiris, Cetin C.
2014-01-01
Aerodynamic environments are some of the first engineering data products that are needed to design a space launch vehicle. These products are used in performance predictions, vehicle control algorithm design, as well as determining loads on primary and secondary structures in multiple discipline areas. When the National Aeronautics and Space Administration (NASA) Space Launch System (SLS) Program was established with the goal of designing a new, heavy-lift launch vehicle first capable of lifting the Orion Program Multi-Purpose Crew Vehicle (MPCV) to low-earth orbit and preserving the potential to evolve the design to a 200 metric ton cargo launcher, the data needs were no different. Upon commencement of the new program, a characterization of aerodynamic environments was immediately initiated. In the time since, the SLS Aerodynamics Team has produced data describing the majority of the aerodynamic environment definitions needed for structural design and vehicle control under nominal flight conditions. This paper provides an overview of select SLS aerodynamic environments completed to date.
Video segmentation for post-production
NASA Astrophysics Data System (ADS)
Wills, Ciaran
2001-12-01
Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
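A minimal sketch of the block-level cut detection idea: per-block mean intensities (playing the role of the DCT DC coefficients used in the compressed domain) are differenced between consecutive frames and a cut is flagged on a spike. The threshold and synthetic clip are illustrative assumptions, not the paper's Motion JPEG implementation:

```python
import numpy as np

def detect_cuts(frames, threshold=30.0):
    """Flag a cut between consecutive frames when the mean absolute change in
    per-block average intensity exceeds a threshold. 'frames' has shape
    (n_frames, H, W) with H and W multiples of 8 (grayscale); the 8x8 block
    mean stands in for the DCT DC coefficient of each macroblock."""
    n, H, W = frames.shape
    blocks = frames.reshape(n, H // 8, 8, W // 8, 8).mean(axis=(2, 4))
    diffs = np.abs(np.diff(blocks, axis=0)).mean(axis=(1, 2))
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Synthetic clip: uniform dark frames, then a hard cut to bright frames.
clip = np.concatenate([np.full((10, 64, 64), 40.0), np.full((10, 64, 64), 200.0)])
print(detect_cuts(clip))      # expected: [10]
```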
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.; Meister, G.
2015-01-01
The Deep Blue (DB) algorithm's primary data product is midvisible aerosol optical depth (AOD). DB applied to Moderate Resolution Imaging Spectroradiometer (MODIS) measurements provides a data record since early 2000 for MODIS Terra and mid-2002 for MODIS Aqua. In the previous data version (Collection 5, C5), DB production from Terra was halted in 2007 due to sensor degradation; the new Collection 6 (C6) has both improved science algorithms and sensor radiometric calibration. This includes additional calibration corrections developed by the Ocean Biology Processing Group to address MODIS Terra's gain, polarization sensitivity, and detector response versus scan angle, meaning DB can now be applied to the whole Terra record. Through validation with Aerosol Robotic Network (AERONET) data, it is shown that the C6 DB Terra AOD quality is stable throughout the mission to date. Compared to the C5 calibration, in recent years the RMS error compared to AERONET is smaller by approximately 0.04 over bright (e.g., desert) and approximately 0.01-0.02 over darker (e.g., vegetated) land surfaces, and the fraction of points in agreement with AERONET within expected retrieval uncertainty higher by approximately 10% and approximately 5%, respectively. Comparisons to the Aqua C6 time series reveal a high level of correspondence between the two MODIS DB data records, with a small positive (Terra-Aqua) average AOD offset <0.01. The analysis demonstrates both the efficacy of the new radiometric calibration efforts and that the C6 MODIS Terra DB AOD data remain stable (to better than 0.01 AOD) throughout the mission to date, suitable for quantitative scientific analyses.
Optimizing Constrained Single Period Problem under Random Fuzzy Demand
NASA Astrophysics Data System (ADS)
Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin
2008-09-01
In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand for the products is often stochastic in the real world, but the estimation of the parameters of the distribution function may be done in a fuzzy manner. Thus, an appropriate option for modeling product demand is the random fuzzy variable. The objective of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space, restrictions on order quantities for the products, and a budget restriction. We also consider batch sizes for product orders. Finally, we introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP), which is converted into a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto and TOPSIS is presented for the developed model. An illustrative example is presented to show the performance of the developed model and algorithm.
Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu
2017-01-01
In cognitive radio (CR) systems, reasonable power allocation can increase the transmission rate of CR users, or secondary users (SUs), as much as possible while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm that combines the advantages of the conventional artificial fish swarm (AFS) algorithm and particle swarm optimisation (PSO). Simulations comparing the IAFS algorithm with other intelligent algorithms illustrate its superiority; as a result, the proposed scheme performs better than the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, the proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other mentioned works.
40 CFR 63.11164 - What General Provisions apply to primary zinc production facilities?
Code of Federal Regulations, 2011 CFR
2011-07-01
Part 63, Primary Nonferrous Metals Area Sources - Zinc, Cadmium, and Beryllium; Primary Zinc Production Facilities. Section 63.11164 (Protection of Environment) specifies which General Provisions apply to owners and operators of primary zinc production facilities.
40 CFR 63.11164 - What General Provisions apply to primary zinc production facilities?
Code of Federal Regulations, 2010 CFR
2010-07-01
Part 63, Primary Nonferrous Metals Area Sources - Zinc, Cadmium, and Beryllium; Primary Zinc Production Facilities. Section 63.11164 (Protection of Environment) specifies which General Provisions apply to owners and operators of primary zinc production facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, E. M.; Shin, Y. W.
1999-02-24
The Experimental Breeder Reactor II (EBR-II) at Argonne National Laboratory (ANL) West in Idaho is currently undergoing a plant closing operation, and a number of technical issues need to be addressed. This paper is related to the heat transfer analysis support effort performed for the upcoming draining operation of the primary sodium from the primary system tank. The issue addressed was how much heat input would be required to maintain the sodium in the liquid state during the prolonged period of the draining operation. The fluid dynamics analysis package FIDAP, a code of Fluent Incorporated, was used to model the primary tank system. It was possible to obtain solutions to the model in most of the cases considered, which provided the needed information for the project. However, appropriate choices of the solution algorithms were necessary in some cases, and certain special measures had to be followed in order to utilize the solutions successfully. In other instances an entirely different algorithm was the only successful choice, while in a limited number of instances none of the algorithms or special measures that were satisfactory for the earlier cases proved successful. Several configurations of the model with varying sodium levels, representing the quasi-steady-state draining operation, are considered. The reference configuration of the model was calculated first and the results are compared with measurement data. The model thus benchmarked to the reference case was then calculated for the other model configurations. This paper discusses details of the experiences we gained, including successes, the difficulties we had to overcome, and, in some instances, the eventual failures. The results of the successful calculations are presented first. For each of the model configurations calculated, various computational aspects are then discussed in view of the numerical stability, convergence, and robustness of the solution algorithms in use. Finally, effects of certain model simplifications on the solutions and on the performance of the solution algorithms are discussed.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first, and then it is examined how such parallel kernels can be combined to form parallel tensor product algorithms.
Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry
2016-01-01
Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383
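The modified EOF procedure is not specified in the abstract; as a rough illustration of EOF-style merging, the sketch below averages two hypothetical LE products, keeps the leading SVD modes of the anomalies, and reconstructs a smoothed merged estimate. This is plain EOF on synthetic data, only a stand-in for the paper's method:

```python
import numpy as np

def eof_merge(le_a, le_b, n_modes=3):
    """Illustrative EOF-style merge of two LE matrices of shape (time, sites):
    average the products, remove the temporal mean, keep the leading SVD
    modes of the anomalies, and add the mean back."""
    merged = 0.5 * (le_a + le_b)
    mean = merged.mean(axis=0, keepdims=True)
    anomalies = merged - mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
    return recon + mean

rng = np.random.default_rng(5)
truth = np.sin(np.linspace(0, 6, 120))[:, None] * rng.random((1, 22)) * 100
prod_a = truth + rng.normal(0, 10, truth.shape)    # e.g. a MOD16-like estimate
prod_b = truth + rng.normal(0, 10, truth.shape)    # e.g. a PT-JPL-like estimate
fused = eof_merge(prod_a, prod_b)
print(np.sqrt(((fused - truth) ** 2).mean()),      # fused RMSE
      np.sqrt(((prod_a - truth) ** 2).mean()))     # single-product RMSE
```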
Alerts of forest disturbance from MODIS imagery
NASA Astrophysics Data System (ADS)
Hammer, Dan; Kraft, Robin; Wheeler, David
2014-12-01
This paper reports the methodology and computational strategy for a forest cover disturbance alerting system. Analytical techniques from time series econometrics are applied to imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to detect temporal instability in vegetation indices. The characteristics from each MODIS pixel's spectral history are extracted and compared against historical data on forest cover loss to develop a geographically localized classification rule that can be applied across the humid tropical biome. The final output is a probability of forest disturbance for each 500 m pixel that is updated every 16 days. The primary objective is to provide high-confidence alerts of forest disturbance, while minimizing false positives. We find that the alerts serve this purpose exceedingly well in Pará, Brazil, with high probability alerts garnering a user accuracy of 98 percent over the training period and 93 percent after the training period (2000-2005) when compared against the PRODES deforestation data set, which is used to assess spatial accuracy. Implemented in Clojure and Java on the Hadoop distributed data processing platform, the algorithm is a fast, automated, and open source system for detecting forest disturbance. It is intended to be used in conjunction with higher-resolution imagery and data products that cannot be updated as quickly as MODIS-based data products. By highlighting hotspots of change, the algorithm and associated output can focus high-resolution data acquisition and aid in efforts to enforce local forest conservation efforts.
Early Performance Results from the GOES-R Product Generation System
NASA Astrophysics Data System (ADS)
Marley, S.; Weiner, A.; Kalluri, S. N.; Hansen, D.; Dittberner, G.
2013-12-01
Enhancements to remote sensing capabilities for the next generation of Geostationary Operational Environmental Satellites (GOES R-series), scheduled to be launched in 2015, require high performance computing capabilities to output meteorological observations and products at lower latency than the legacy processing systems. The GOES R-series (GOES-R, -S, -T, and -U) represents a generational change in both spacecraft and instrument capability, and the GOES Re-Broadcast (GRB) data, which contain calibrated and navigated radiances from all the instruments, will be at a data rate of 31 Mb/sec compared to the current 2.11 Mb/sec from existing GOES satellites. To keep up with the data processing rates, the Product Generation (PG) system in the ground segment is designed on a Service Based Architecture (SBA). Each algorithm is executed as a service and subscribes to the data it needs to create higher level products via an enterprise service bus. Various levels of product data are published to and retrieved from a data fabric. Together, the SBA and the data fabric provide a flexible, scalable, high performance architecture that meets the needs of product processing now and can grow to accommodate new algorithms in the future. The algorithms are linked together in a precedence chain starting from Level 0 to Level 1b and higher order Level 2 products that are distributed to data distribution nodes for external users. Qualification testing for more than half of the product algorithms has so far been completed on the PG system.
Development of a South African integrated syndromic respiratory disease guideline for primary care.
English, René G; Bateman, Eric D; Zwarenstein, Merrick F; Fairall, Lara R; Bheekie, Angeni; Bachmann, Max O; Majara, Bosielo; Ottmani, Salah-Eddine; Scherpbier, Robert W
2008-09-01
The Practical Approach to Lung Health in South Africa (PALSA) initiative aimed to develop an integrated symptom- and sign-based (syndromic) respiratory disease guideline for nurse care practitioners working in primary care in a developing country. A multidisciplinary team developed the guideline after reviewing local barriers to respiratory health care provision, relevant health care policies, existing respiratory guidelines, and literature. Guideline drafts were evaluated by means of focus group discussions. Existing evidence-based guideline development methodologies were tailored for development of the guideline. A locally-applicable guideline based on syndromic diagnostic algorithms was developed for the management of patients 15 years and older who presented to primary care facilities with cough or difficulty breathing. PALSA has developed a guideline that integrates and presents diagnostic and management recommendations for priority respiratory diseases in adults using a symptom- and sign-based algorithmic guideline for nurses in developing countries.
Emery, Jon D; Hunter, Judith; Hall, Per N; Watson, Anthony J; Moncrieff, Marc; Walter, Fiona M
2010-09-25
Diagnosing pigmented skin lesions in general practice is challenging. SIAscopy has been shown to increase diagnostic accuracy for melanoma in referred populations. We aimed to develop and validate a scoring system for SIAscopic diagnosis of pigmented lesions in primary care. This study was conducted in two consecutive settings in the UK and Australia, and occurred in three stages: 1) Development of the primary care scoring algorithm (PCSA) on a sub-set of lesions from the UK sample; 2) Validation of the PCSA on a different sub-set of lesions from the same UK sample; 3) Validation of the PCSA on a new set of lesions from an Australian primary care population. Patients presenting with a pigmented lesion were recruited from 6 general practices in the UK and 2 primary care skin cancer clinics in Australia. The following data were obtained for each lesion: clinical history; SIAscan; digital photograph; and digital dermoscopy. SIAscans were interpreted by an expert and validated against histopathology where possible, or expert clinical review of all available data for each lesion. A total of 858 patients with 1,211 lesions were recruited. Most lesions were benign naevi (64.8%) or seborrhoeic keratoses (22.1%); 1.2% were melanoma. The original SIAscopic diagnostic algorithm did not perform well because of the higher prevalence of seborrhoeic keratoses and haemangiomas seen in primary care. A primary care scoring algorithm (PCSA) was developed to account for this. In the UK sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.50 (0.18-0.81); specificity 0.84 (0.78-0.88); PPV 0.09 (0.03-0.22); NPV 0.98 (0.95-0.99). In the Australian sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.44 (0.32-0.58); specificity 0.95 (0.93-0.97); PPV 0.52 (0.38-0.66); NPV 0.95 (0.92-0.96). In an analysis of lesions for which histological diagnosis was available (n = 111), the PCSA had a significantly greater Area Under the Curve than the 7-point checklist for the diagnosis of melanoma (0.83; 95% CI 0.71-0.95 versus 0.61; 95% CI 0.44-0.78; p = 0.02 for difference). The PCSA could have a useful role in improving primary care management of pigmented skin lesions. Further work is needed to develop and validate the PCSA in other primary care populations and to evaluate the cost-effectiveness of GP management of pigmented lesions using SIAscopy.
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1991-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
Optimal pattern distributions in Rete-based production systems
NASA Technical Reports Server (NTRS)
Scott, Stephen L.
1994-01-01
Since its introduction into the AI community in the early 1980's, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect the overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.
Automated segmentation and feature extraction of product inspection items
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1997-03-01
X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
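The general recipe for separating touching objects, a distance transform followed by watershed on marker peaks, is illustrated below; this is a generic scikit-image sketch, not the authors' specialized filter or watershed variant.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_items(image):
    """Separate touching/overlapping bright items in a grayscale inspection image."""
    mask = image > threshold_otsu(image)          # foreground vs tray background
    distance = ndi.distance_transform_edt(mask)   # distance to background
    coords = peak_local_max(distance, min_distance=15, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=mask)  # one label per item
    return labels
```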
NASA Astrophysics Data System (ADS)
Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon
2011-01-01
In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological model parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and annual field data in northern Thailand. The results showed that NPP estimates obtained using the optimized ecophysiological parameters were more accurate than those obtained using the default literature parameterization. This improvement occurred mainly because the optimized parameters reduced the bias by reducing systematic underestimation in the model. These Biome-BGC results can be effectively applied in teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
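A minimal sketch of the GA-based calibration idea, linking a parameter vector to a model-versus-satellite LAI misfit; `run_biome_bgc` is a hypothetical stand-in for the actual Biome-BGC run, and the GA settings are illustrative:

```python
import numpy as np
rng = np.random.default_rng(0)

def fitness(params, sat_lai):
    """Negative RMSE between simulated and satellite LAI (higher is better)."""
    sim_lai = run_biome_bgc(params)           # hypothetical model wrapper
    return -np.sqrt(np.mean((sim_lai - sat_lai) ** 2))

def calibrate(bounds, sat_lai, pop=30, gens=50, mut=0.1):
    """bounds: list of (low, high) pairs, one per ecophysiological parameter."""
    lo, hi = np.array(bounds).T
    popu = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        scores = np.array([fitness(p, sat_lai) for p in popu])
        parents = popu[np.argsort(scores)[-pop // 2:]]           # keep best half
        children = parents[rng.integers(0, len(parents), pop - len(parents))]
        children = children + mut * (hi - lo) * rng.standard_normal(children.shape)
        popu = np.clip(np.vstack([parents, children]), lo, hi)   # recombine + mutate
    return popu[np.argmax([fitness(p, sat_lai) for p in popu])]
```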
Global patterns and predictions of seafloor biomass using random forests.
Wei, Chih-Lin; Rowe, Gilbert T; Escobar-Briones, Elva; Boetius, Antje; Soltwedel, Thomas; Caley, M Julian; Soliman, Yousria; Huettmann, Falk; Qu, Fangyuan; Yu, Zishan; Pitcher, C Roland; Haedrich, Richard L; Wicksten, Mary K; Rex, Michael A; Baguley, Jeffrey G; Sharma, Jyotsna; Danovaro, Roberto; MacDonald, Ian R; Nunnally, Clifton C; Deming, Jody W; Montagna, Paul; Lévesque, Mélanie; Weslawski, Jan Marcin; Wlodarska-Kowalczuk, Maria; Ingole, Baban S; Bett, Brian J; Billett, David S M; Yool, Andrew; Bluhm, Bodil A; Iken, Katrin; Narayanaswamy, Bhavani E
2010-12-30
A comprehensive seafloor biomass and abundance database has been constructed from 24 oceanographic institutions worldwide within the Census of Marine Life (CoML) field projects. The machine-learning algorithm, Random Forests, was employed to model and predict seafloor standing stocks from surface primary production, water-column integrated and export particulate organic matter (POM), seafloor relief, and bottom water properties. The predictive models explain 63% to 88% of stock variance among the major size groups. Individual and composite maps of predicted global seafloor biomass and abundance are generated for bacteria, meiofauna, macrofauna, and megafauna (invertebrates and fishes). Patterns of benthic standing stocks were positive functions of surface primary production and delivery of the particulate organic carbon (POC) flux to the seafloor. At a regional scale, the census maps illustrate that integrated biomass is highest at the poles, on continental margins associated with coastal upwelling, and within broad zones associated with equatorial divergence. Lowest values are consistently encountered on the central abyssal plains of major ocean basins. The shift of biomass dominance groups with depth is shown to be affected by the decrease in average body size rather than abundance, presumably due to a decrease in the quantity and quality of the food supply. This biomass census and associated maps are vital components of mechanistic deep-sea food web models and global carbon cycling, and as such provide fundamental information that can be incorporated into evidence-based management.
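A minimal sketch of the Random Forests workflow described above, using scikit-learn; the file and predictor names are illustrative placeholders for surface primary production, export POC flux, depth, and bottom-water properties rather than the actual CoML fields:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("benthic_stations.csv")        # hypothetical station table
X = df[["surface_npp", "export_poc_flux", "depth", "bottom_temp", "bottom_o2"]]
y = df["macrofauna_biomass_log10"]              # log-transformed standing stock

rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=3, random_state=1)
print("explained variance (CV R^2):",
      cross_val_score(rf, X, y, cv=5, scoring="r2").mean())
rf.fit(X, y)                                     # then predict onto global grids
```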
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
NASA Astrophysics Data System (ADS)
Mugnai, A.; Smith, E. A.; Tripoli, G. J.; Bizzarri, B.; Casella, D.; Dietrich, S.; Di Paola, F.; Panegrossi, G.; Sanò, P.
2013-04-01
Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF) is a EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites) program, designed to deliver satellite products of hydrological interest (precipitation, soil moisture and snow parameters) over the European and Mediterranean region to research and operations users worldwide. Six satellite precipitation algorithms and concomitant precipitation products are the responsibility of various agencies in Italy. Two of these algorithms have been designed for maximum accuracy by restricting their inputs to measurements from conical and cross-track scanning passive microwave (PMW) radiometers mounted on various low Earth orbiting satellites. They have been developed at the Italian National Research Council/Institute of Atmospheric Sciences and Climate in Rome (CNR/ISAC-Rome), and are providing operational retrievals of surface rain rate and its phase properties. Each of these algorithms is physically based, however, the first of these, referred to as the Cloud Dynamics and Radiation Database (CDRD) algorithm, uses a Bayesian-based solution solver, while the second, referred to as the PMW Neural-net Precipitation Retrieval (PNPR) algorithm, uses a neural network-based solution solver. Herein we first provide an overview of the two initial EU research and applications programs that motivated their initial development, EuroTRMM and EURAINSAT (European Satellite Rainfall Analysis and Monitoring at the Geostationary Scale), and the current H-SAF program that provides the framework for their operational use and continued development. We stress the relevance of the CDRD and PNPR algorithms and their precipitation products in helping secure the goals of H-SAF's scientific and operations agenda, the former helpful as a secondary calibration reference to other algorithms in H-SAF's complete mix of algorithms. Descriptions of the algorithms' designs are provided including a few examples of their performance. This aspect of the development of the two algorithms is placed in the context of what we refer to as the TRMM era, which is the era denoting the active and ongoing period of the Tropical Rainfall Measuring Mission (TRMM) that helped inspire their original development. In 2015, the ISAC-Rome precipitation algorithms will undergo a transformation beginning with the upcoming Global Precipitation Measurement (GPM) mission, particularly the GPM Core Satellite technologies. A few years afterward, the first pair of imaging and sounding Meteosat Third Generation (MTG) satellites will be launched, providing additional technological advances. Various of the opportunities presented by the GPM Core and MTG satellites for improving the current CDRD and PNPR precipitation retrieval algorithms, as well as extending their product capability, are discussed.
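As a rough illustration of the Bayesian solution solver used by CDRD-type algorithms, the sketch below weights a database of candidate cloud/precipitation profiles by how well their simulated brightness temperatures match the observed ones; the Gaussian error model and the variable names are simplifications, not the operational CDRD formulation:

```python
import numpy as np

def bayesian_rain_retrieval(tb_obs, tb_db, rain_db, obs_err):
    """Posterior-mean surface rain rate from a cloud/radiation database.

    tb_obs : (n_chan,) observed brightness temperatures
    tb_db  : (n_prof, n_chan) simulated TBs for each database profile
    rain_db: (n_prof,) surface rain rate attached to each profile
    obs_err: (n_chan,) combined observation/model error standard deviation
    """
    resid = (tb_db - tb_obs) / obs_err
    logw = -0.5 * np.sum(resid ** 2, axis=1)      # Gaussian log-likelihood
    w = np.exp(logw - logw.max())                 # avoid underflow
    w /= w.sum()
    return np.sum(w * rain_db)                    # posterior-mean rain rate
```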
NASA Astrophysics Data System (ADS)
Dils, B.; Buchwitz, M.; Reuter, M.; Schneising, O.; Boesch, H.; Parker, R.; Guerlet, S.; Aben, I.; Blumenstock, T.; Burrows, J. P.; Butz, A.; Deutscher, N. M.; Frankenberg, C.; Hase, F.; Hasekamp, O. P.; Heymann, J.; De Mazière, M.; Notholt, J.; Sussmann, R.; Warneke, T.; Griffith, D.; Sherlock, V.; Wunch, D.
2014-06-01
Column-averaged dry-air mole fractions of carbon dioxide and methane have been retrieved from spectra acquired by the TANSO-FTS (Thermal And Near-infrared Sensor for carbon Observations-Fourier Transform Spectrometer) and SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Cartography) instruments on board GOSAT (Greenhouse gases Observing SATellite) and ENVISAT (ENVIronmental SATellite), respectively, using a range of European retrieval algorithms. These retrievals have been compared with data from ground-based high-resolution Fourier transform spectrometers (FTSs) from the Total Carbon Column Observing Network (TCCON). The participating algorithms are the weighting function modified differential optical absorption spectroscopy (DOAS) algorithm (WFMD, University of Bremen), the Bremen optimal estimation DOAS algorithm (BESD, University of Bremen), the iterative maximum a posteriori DOAS algorithm (IMAP, Jet Propulsion Laboratory (JPL) and Netherlands Institute for Space Research (SRON)), the proxy and full-physics versions of SRON's RemoTeC algorithm (SRPR and SRFP, respectively) and the proxy and full-physics versions of the University of Leicester's adaptation of the OCO (Orbiting Carbon Observatory) algorithm (OCPR and OCFP, respectively). The goal of this algorithm inter-comparison was to identify strengths and weaknesses of the various so-called round-robin data sets generated with the various algorithms so as to determine which of the competing algorithms would proceed to the next round of the European Space Agency's (ESA) Greenhouse Gas Climate Change Initiative (GHG-CCI) project, namely the generation of the so-called Climate Research Data Package (CRDP), the first version of the Essential Climate Variable (ECV) "greenhouse gases" (GHGs). For XCO2, all algorithms reach the precision requirements for inverse modelling (< 8 ppm), with only WFMD having a lower precision (4.7 ppm) than the other algorithm products (2.4-2.5 ppm). When looking at the seasonal relative accuracy (SRA, variability of the bias in space and time), none of the algorithms have reached the demanding < 0.5 ppm threshold. For XCH4, the precision for both SCIAMACHY products (50.2 ppb for IMAP and 76.4 ppb for WFMD) fails to meet the < 34 ppb threshold for inverse modelling, but note that this work focusses on the period after the 2005 SCIAMACHY detector degradation. The GOSAT XCH4 precision ranges between 18.1 and 14.0 ppb. Looking at the SRA, all GOSAT algorithm products reach the < 10 ppb threshold (values ranging between 5.4 and 6.2 ppb). For SCIAMACHY, IMAP and WFMD have an SRA of 17.2 and 10.5 ppb, respectively.
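The precision and seasonal-relative-accuracy numbers quoted above come down to simple statistics of satellite-minus-TCCON differences pooled by station and season; a schematic version (not the exact GHG-CCI definitions, and with assumed column names) might look like:

```python
import pandas as pd

def validation_stats(df):
    """df columns (assumed): 'xgas_sat', 'xgas_tccon', 'station', 'season'."""
    diff = df["xgas_sat"] - df["xgas_tccon"]
    bias = diff.mean()                            # overall offset
    precision = diff.std()                        # single-sounding scatter
    # variability of the bias in space and time, akin to the SRA criterion
    sra = diff.groupby([df["station"], df["season"]]).mean().std()
    return bias, precision, sra
```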
Rasin-Waters, Donna; Abel, Valerie; Kearney, Lisa K; Zeiss, Antonette
2018-05-01
Historically, integrated mental and behavioral healthcare in the Department of Veterans Affairs (VA) commenced with initiatives in geriatrics. Innovation and system-wide expansion has occurred over decades and culminated in a unified vision for training and practice in the VA medical home model: Patient Aligned Care Team or PACT approach. In one VA hospital, the integration of neuropsychological services in geriatric primary care is pivotal and increases access for patients, as well as contributing to timely and effective care on an interprofessional team. The development and innovative use of an algorithm to identify problems with cognition, health literacy, and mental and behavioral health has been pragmatic and provides useful information for collaborative treatment planning in GeriPACT, VA geriatric primary care. Use of the algorithm also assists with decision-making regarding brief versus comprehensive neuropsychological assessment in the primary care setting. The model presented here was developed by supervising neuropsychologists as part of a postdoctoral residency program in geropsychology. However, postdoctoral residency programs in neuropsychology, as well as neuropsychological clinics, can also use this model to integrate neuropsychological assessment and interventions in geriatric primary care settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alemohammad, Seyed Hamed; Fang, Bin; Konings, Alexandra G.
A new global estimate of surface turbulent fluxes, latent heat flux (LE) and sensible heat flux (H), and gross primary production (GPP) is developed using a machine learning approach informed by novel remotely sensed solar-induced fluorescence (SIF) and other radiative and meteorological variables. This is the first study to jointly retrieve LE, H, and GPP using SIF observations. The approach uses an artificial neural network (ANN) with a target dataset generated from three independent data sources, weighted based on a triple collocation (TC) algorithm. The new retrieval, named Water, Energy, and Carbon with Artificial Neural Networks (WECANN), provides estimates of LE, H, and GPP from 2007 to 2015 at 1° × 1° spatial resolution and at monthly time resolution. The quality of ANN training is assessed using the target data, and the WECANN retrievals are evaluated using eddy covariance tower estimates from the FLUXNET network across various climates and conditions. When compared to eddy covariance estimates, WECANN typically outperforms other products, particularly for sensible and latent heat fluxes. Analyzing WECANN retrievals across three extreme drought and heat wave events demonstrates the capability of the retrievals to capture the extent of these events. Uncertainty estimates of the retrievals are analyzed, and the interannual variability in average global and regional fluxes shows the impact of distinct climatic events, such as the 2015 El Niño, on surface turbulent fluxes and GPP.
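The triple collocation weighting mentioned above rests on the standard covariance-based error estimator for three collocated datasets with mutually independent errors; a minimal sketch (not the WECANN implementation):

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Error variances of three collocated estimates of the same flux.
    Assumes each is linearly calibrated to a common truth with independent errors."""
    c = np.cov(np.vstack([x, y, z]))
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_z, var_y

# Inverse-variance weights (normalized to sum to 1) can then be used to blend
# the three source datasets into a single training target.
```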
On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction
NASA Astrophysics Data System (ADS)
Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish
2016-04-01
A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulations of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km2 in size located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation was focussed on maximizing the objective function for streamflow or LAI, the other, un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimization cannot take into account the essence of all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than by RR-LAI-I due to better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.
NASA Astrophysics Data System (ADS)
Kalluri, S. N.; Haman, B.; Vititoe, D.
2014-12-01
The ground system under development for the Geostationary Operational Environmental Satellite-R (GOES-R) series of weather satellites has completed a key milestone in implementing the science algorithms that process raw sensor data to higher level products in preparation for launch. Real time observations from GOES-R are expected to make significant contributions to Earth and space weather prediction, and there are stringent requirements to produce weather products at very low latency to meet NOAA's operational needs. Simulated test data from all six GOES-R sensors are being processed by the system to test and verify performance of the fielded system. Early results show that the system development is on track to meet functional and performance requirements to process science data. Comparison of science products generated by the ground system from simulated data with those generated by the algorithm developers shows close agreement among data sets, which demonstrates that the algorithms are implemented correctly. Successful delivery of products to AWIPS and the Product Distribution and Access (PDA) system from the core system demonstrates that the external interfaces are working.
Real-time path planning and autonomous control for helicopter autorotation
NASA Astrophysics Data System (ADS)
Yomchinda, Thanan
Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate that the algorithms can operate in real time and that the integrated system can provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.
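A toy illustration of the dynamic inversion idea used for the primary flight control, for a system xdot = f(x) + g(x) u: the control inverts the known dynamics so the closed loop tracks a commanded trajectory. The gains and placeholder dynamics below are assumptions for illustration, not the thesis' helicopter model:

```python
import numpy as np

def dynamic_inversion_control(x, x_ref, xdot_ref, f, g, K):
    """Return u such that xdot ~= xdot_ref + K (x_ref - x) for xdot = f(x) + g(x) u."""
    v = xdot_ref + K @ (x_ref - x)          # desired pseudo-rate ("outer loop")
    return np.linalg.solve(g(x), v - f(x))  # invert the control effectiveness

# Example with trivial placeholder dynamics (illustration only):
f = lambda x: np.array([x[1], -0.5 * x[1]])
g = lambda x: np.eye(2)
u = dynamic_inversion_control(np.zeros(2), np.array([1.0, 0.0]),
                              np.zeros(2), f, g, K=2.0 * np.eye(2))
```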
NASA Astrophysics Data System (ADS)
Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.
2009-12-01
Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short vs. long lived tissues exerts a large influence on ecosystem structure and function with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) and gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this unique meta-analysis-based partitioning scheme (MPS), we compared NPP estimated by applying MPS to satellite-based (MODIS) GPP against two global process-based vegetation models (Biome-BGC and VISIT) to examine the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude, while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning schemes. [Figure caption: Carbon use efficiency (CUE; NPP/GPP) by forest biome and the globe; values are means for 2001-2006.]
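The contrast between the meta-analysis partitioning scheme and the process models reduces to how NPP is obtained from GPP; a schematic comparison is sketched below, with an assumed constant CUE and an assumed Q10-type temperature sensitivity of maintenance respiration (both values are illustrative, not taken from the paper):

```python
def npp_constant_cue(gpp, cue=0.5):
    """MPS-style: NPP as a fixed fraction of GPP (CUE assumed constant)."""
    return cue * gpp

def npp_process_style(gpp, t_air_c, r0=0.35, q10=2.0, t_ref=15.0):
    """Process-model-style: maintenance respiration grows with temperature,
    so NPP/GPP declines toward warmer (lower-latitude) sites.
    Parameters are illustrative only."""
    resp = r0 * gpp * q10 ** ((t_air_c - t_ref) / 10.0)
    return gpp - resp
```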
[Cost-effectiveness of the deep vein thrombosis diagnosis process in primary care].
Fuentes Camps, Eva; Luis del Val García, José; Bellmunt Montoya, Sergi; Hmimina Hmimina, Sara; Gómez Jabalera, Efren; Muñoz Pérez, Miguel Ángel
2016-04-01
To analyse the cost-effectiveness of the application of diagnostic algorithms in patients with a first episode of suspected deep vein thrombosis (DVT) in Primary Care, compared with systematic referral to specialised centres. Observational, cross-sectional, analytical study. Patients from hospital emergency rooms referred from Primary Care to complete clinical evaluation and diagnosis. A total of 138 patients with symptoms of a first episode of DVT were recruited; 22 were excluded (no Primary Care report, symptoms for more than 30 days, anticoagulant treatment, and previous DVT). Of the 116 patients finally included, 61% were women and the mean age was 71 years. Variables were drawn from the Wells and Oudega clinical probability scales, D-dimer (portable and hospital), Doppler ultrasound, and the direct costs generated by the three algorithms analysed: systematic referral of all patients, referral according to the Wells scale, and referral according to the Oudega scale. DVT was confirmed in 18.9%. The two clinical probability scales showed a sensitivity of 100% (95% CI: 85.1 to 100) and a specificity of about 40%. With the application of the scales, one third of all referrals to hospital emergency rooms could have been avoided (P<.001). The diagnostic cost could have been reduced by € 8,620 according to Oudega and € 9,741 according to Wells, per 100 patients visited. The application of diagnostic algorithms when DVT is suspected could lead to better diagnostic management by physicians, and a more cost-effective process. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows
NASA Astrophysics Data System (ADS)
Jittamai, Phongchai
This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems, and also providing a solution methodology to compute an input schedule that yields minimum total time violation of the due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute the input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the tested problems had solutions more than 30% above the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.
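Independent of the reversed-flow heuristic itself, the objective it minimizes is easy to state: the total violation of delivery time-windows summed over all batches. A minimal sketch of that objective, with hypothetical data structures:

```python
def total_window_violation(delivery_times, windows):
    """Sum of earliness and lateness over all product batches.

    delivery_times: {batch_id: arrival_time}
    windows:        {batch_id: (earliest, latest)}
    """
    violation = 0.0
    for batch, t in delivery_times.items():
        lo, hi = windows[batch]
        violation += max(0.0, lo - t) + max(0.0, t - hi)
    return violation

# e.g. total_window_violation({"A": 12.0, "B": 7.5}, {"A": (8, 10), "B": (6, 9)}) -> 2.0
```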
NASA Technical Reports Server (NTRS)
Riggs, George A.; Hall, Dorothy K.; Roman, Miguel O.
2017-01-01
Knowledge of the distribution, extent, duration and timing of snowmelt is critical for characterizing the Earth's climate system and its changes. As a result, snow cover is one of the Global Climate Observing System (GCOS) essential climate variables (ECVs). Consistent, long-term datasets of snow cover are needed to study interannual variability and snow climatology. The NASA snow-cover datasets generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua spacecraft and the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) are NASA Earth System Data Records (ESDR). The objective of the snow-cover detection algorithms is to optimize the accuracy of mapping snow-cover extent (SCE) and to minimize snow-cover detection errors of omission and commission using automated, globally applied algorithms to produce SCE data products. Advancements in snow-cover mapping have been made with each of the four major reprocessings of the MODIS data record, which extends from 2000 to the present. MODIS Collection 6 (C6) and VIIRS Collection 1 (C1) represent the state-of-the-art global snow cover mapping algorithms and products for NASA Earth science. There were many revisions made in the C6 algorithms which improved snow-cover detection accuracy and information content of the data products. These improvements have also been incorporated into the NASA VIIRS snow cover algorithms for C1. Both information content and usability were improved by including the Normalized Difference Snow Index (NDSI) and a quality assurance (QA) data array of algorithm processing flags in the data product, along with the SCE map. The increased data content allows flexibility in using the datasets for specific regions and end-user applications. Though there are important differences between the MODIS and VIIRS instruments (e.g., the VIIRS 375 m native resolution compared to MODIS 500 m), the snow detection algorithms and data products are designed to be as similar as possible so that the 16-year MODIS ESDR of global SCE can be extended into the future with the S-NPP VIIRS snow products and with products from future Joint Polar Satellite System (JPSS) platforms. These NASA datasets are archived and accessible through the NASA Distributed Active Archive Center at the National Snow and Ice Data Center in Boulder, Colorado.
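The NDSI carried in these products is the usual normalized band ratio of a visible and a shortwave-infrared reflectance (MODIS bands 4 and 6 in the 500 m product); a minimal sketch, with a purely illustrative screening threshold standing in for the operational test suite:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green (~0.55 um) and SWIR (~1.6 um)
    reflectances, e.g. MODIS bands 4 and 6."""
    return (green - swir) / (green + swir)

def snow_mask(green, swir, threshold=0.4):
    """Basic snow screen; operational algorithms add many more tests (cloud,
    thermal, forest NDVI/NDSI adjustments). The threshold here is illustrative."""
    m = ndsi(green, swir) > threshold
    return np.where(np.isfinite(green) & np.isfinite(swir), m, False)
```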
Sharpe, John P; Magnotti, Louis J; Weinberg, Jordan A; Shahan, Charles P; Cullinan, Darren R; Marino, Katy A; Fabian, Timothy C; Croce, Martin A
2014-04-01
For more than a decade, operative decisions (resection plus anastomosis vs diversion) for colon injuries, at our institution, have followed a defined management algorithm based on established risk factors (pre- or intraoperative transfusion requirements of more than 6 units packed RBCs and/or presence of significant comorbid diseases). However, this management algorithm was originally developed for patients managed with a single laparotomy. The purpose of this study was to evaluate the applicability of this algorithm to destructive colon injuries after abbreviated laparotomy (AL) and to determine whether additional risk factors should be considered. Consecutive patients over a 17-year period with colon injuries after AL were identified. Nondestructive injuries were managed with primary repair. Destructive wounds were resected at the initial laparotomy followed by either a staged diversion (SD) or a delayed anastomosis (DA) at the subsequent exploration. Outcomes were evaluated to identify additional risk factors in the setting of AL. We identified 149 patients: 33 (22%) patients underwent primary repair at initial exploration, 42 (28%) underwent DA, and 72 (49%) had SD. Two (1%) patients died before re-exploration. Of those undergoing DA, 23 (55%) patients were managed according to the algorithm and 19 (45%) were not. Adherence to the algorithm resulted in lower rates of suture line failure (4% vs 32%, p = 0.03) and colon-related morbidity (22% vs 58%, p = 0.03) for patients undergoing DA. No additional specific risk factors for suture line failure after DA were identified. Adherence to an established algorithm, originally defined for destructive colon injuries after single laparotomy, is likewise efficacious for the management of these injuries in the setting of AL. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Princic, Nicole; Gregory, Chris; Willson, Tina; Mahue, Maya; Felici, Diana; Werther, Winifred; Lenhart, Gregory; Foley, Kathleen A
2016-01-01
The objective was to expand on prior work by developing and validating a new algorithm to identify multiple myeloma (MM) patients in administrative claims. Two files were constructed to select MM cases from MarketScan Oncology Electronic Medical Records (EMR) and controls from the MarketScan Primary Care EMR during January 1, 2000-March 31, 2014. Patients were linked to MarketScan claims databases, and files were merged. Eligible cases were age ≥18, had a diagnosis and visit for MM in the Oncology EMR, and were continuously enrolled in claims for ≥90 days preceding and ≥30 days after diagnosis. Controls were age ≥18, had ≥12 months of overlap in claims enrollment (observation period) in the Primary Care EMR and ≥1 claim with an ICD-9-CM diagnosis code of MM (203.0×) during that time. Controls were excluded if they had chemotherapy; stem cell transplant; or text documentation of MM in the EMR during the observation period. A split sample was used to develop and validate algorithms. A maximum of 180 days prior to and following each MM diagnosis was used to identify events in the diagnostic process. Of 20 algorithms explored, the baseline algorithm of 2 MM diagnoses and the 3 best performing were validated. Values for sensitivity, specificity, and positive predictive value (PPV) were calculated. Three claims-based algorithms were validated with ~10% improvement in PPV (87-94%) over prior work (81%) and the baseline algorithm (76%) and can be considered for future research. Consistent with prior work, it was found that MM diagnoses before and after tests were needed.
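In spirit, the claims-based case-finding rules are filters over diagnosis records; a schematic pandas version of a "two or more MM diagnoses within a fixed window" rule is sketched below, with hypothetical column names and an illustrative window rather than the validated algorithm definitions:

```python
import pandas as pd

def flag_mm_cases(claims, min_dx=2, window_days=180):
    """claims columns (assumed): 'patient_id', 'service_date', 'dx_code' (ICD-9-CM)."""
    mm = claims[claims["dx_code"].str.startswith("203.0")].copy()
    mm["service_date"] = pd.to_datetime(mm["service_date"])
    flagged = []
    for pid, grp in mm.groupby("patient_id"):
        dates = grp["service_date"].sort_values().reset_index(drop=True)
        # any run of min_dx diagnoses falling within the window qualifies
        if len(dates) >= min_dx and (
            (dates.shift(-(min_dx - 1)) - dates).dt.days <= window_days
        ).any():
            flagged.append(pid)
    return flagged
```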
Virag, Nathalie; Erickson, Mark; Taraborrelli, Patricia; Vetter, Rolf; Lim, Phang Boon; Sutton, Richard
2018-04-28
We developed a vasovagal syncope (VVS) prediction algorithm for use during head-up tilt with simultaneous analysis of heart rate (HR) and systolic blood pressure (SBP). We previously tested this algorithm retrospectively in 1155 subjects, showing sensitivity 95%, specificity 93% and a median prediction time of 59 s. This study was prospective, single center, on 140 subjects to evaluate this VVS prediction algorithm and assess whether the retrospective results were reproduced and clinically relevant. The primary endpoint was VVS prediction: sensitivity and specificity >80%. In subjects referred for 60° head-up tilt (Italian protocol), non-invasive HR and SBP were supplied to the VVS prediction algorithm: simultaneous analysis of RR intervals, SBP trends, and their variability (represented by low-frequency power) generated a cumulative risk, which was compared with a predetermined VVS risk threshold. When the cumulative risk exceeded the threshold, an alert was generated. Prediction time was the duration between the first alert and syncope. Of 140 subjects enrolled, data were usable for 134. Of 83 tilt+ve (61.9%), 81 VVS events were correctly predicted, and of 51 tilt-ve subjects (38.1%), 45 were correctly identified as negative by the algorithm. The resulting algorithm performance was sensitivity 97.6%, specificity 88.2%, meeting the primary endpoint. Mean VVS prediction time was 2 min 26 s ± 3 min 16 s, with a median of 1 min 25 s. Using only HR and HR variability (without SBP), the mean prediction time was reduced to 1 min 34 s ± 1 min 45 s, with a median of 1 min 13 s. The VVS prediction algorithm is a clinically relevant tool and could offer applications including providing a patient alarm, shortening tilt-test time, or triggering pacing intervention in implantable devices. Copyright © 2018. Published by Elsevier Inc.
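A schematic of the alert logic only (not the proprietary risk model): a per-beat risk increment derived from RR intervals and SBP is accumulated and compared against a threshold, and the first crossing timestamps the alert. The scoring function and threshold below are placeholders:

```python
def detect_vvs_alert(beats, risk_of_beat, threshold):
    """beats: iterable of (time_s, rr_interval_s, sbp_mmHg) samples.
    risk_of_beat: callable returning a non-negative per-beat risk increment
    (the actual algorithm also uses low-frequency variability of RR and SBP).
    Returns the alert time, or None if the threshold is never crossed."""
    cumulative_risk = 0.0
    for time_s, rr, sbp in beats:
        cumulative_risk += risk_of_beat(rr, sbp)
        if cumulative_risk > threshold:
            return time_s        # prediction time = syncope time - alert time
    return None
```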
NASA Astrophysics Data System (ADS)
Chiu, C.; Bowling, L. C.; Podest, E.; Bohn, T. J.; Lettenmaier, D. P.; Schroeder, R.; McDonald, K. C.
2009-04-01
In recent years, there has been increasing evidence of significant alteration in the extent of lakes and wetlands in high latitude regions due in part to thawing permafrost, as well as other changes governing surface and subsurface hydrology. Methane is a greenhouse gas 23 times more efficient than carbon dioxide; changes in surface water extent, and the associated subsurface anaerobic conditions, are important controls on methane emissions in high latitude regions. Methane emissions from wetlands vary substantially in both time and space, and are influenced by plant growth, soil organic matter decomposition, methanogenesis, and methane oxidation controlled by soil temperature, water table level and net primary productivity (NPP). The understanding of spatial and temporal heterogeneity of surface saturation, thermal regime and carbon substrate in northern Eurasian wetlands from point measurements is limited. In order to better estimate the magnitude and variability of methane emissions from northern lakes and wetlands, we present an integrated assessment approach based on remote sensing image classification, land surface modeling and process-based ecosystem modeling. Wetlands classifications based on L-band JERS-1 SAR (100 m) and ALOS PALSAR (~30 m) are used together with topographic information to parameterize a lake and wetland algorithm in the Variable Infiltration Capacity (VIC) land surface model at 25 km resolution. The enhanced VIC algorithm allows subsurface moisture exchange between surface water and wetlands and includes a sub-grid parameterization of water table position within the wetland area using a generalized topographic index. Average methane emissions are simulated using the Walter and Heimann methane emission model based on temporally and spatially varying soil temperature, net primary productivity and water table generated from the modified VIC model. Our five preliminary study areas include the Z. Dvina, Upper Volga, Yeloguy, Syum, and Chaya river basins. The temporally-variable inundation extent simulated by the VIC model is compared to 25 km resolution inundation products developed from combined QuikSCAT, AMSR-E and MODIS data sets covering the time period from 2002 onward. The seasonal variation in methane emissions associated with sub-grid variability in water table extent is explored between 1948 and 2006. This work was carried out at Purdue University, at the University of Washington, and at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze Atmospheric InfraRed Sounder/Advanced Microwave Sounding Unit/Humidity Sounder for Brazil (AIRS/AMSU/HSB) data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small, and the RMS accuracy of lower tropospheric temperature retrieved with 80 percent cloud cover is about 0.5 K poorer than for clear cases. HSB failed in February 2003, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC (Distributed Active Archive Center) in April 2003 and is being used to analyze near-real time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.
The production route selection algorithm in virtual manufacturing networks
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Skolud, B.; Olender, M.
2017-08-01
Increasing requirements and competition in the global market challenge companies' profitability in production and supply chain management. This situation became the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. In the paper, an algorithm is proposed for selecting, from the set of admissible routes, a production route that meets the technology and resource requirements at minimum cost.
Development, Evaluation, and Application of a Primary Aerosol Model.
Wang, I T; Chico, T; Huang, Y H; Farber, R J
1999-09-01
The Segmented-Plume Primary Aerosol Model (SPPAM) has been developed over the past several years. The earlier model development goals were simply to generalize the widely used Industrial Source Complex Short-Term (ISCST) model to simulate plume transport and dispersion under light wind conditions and to handle a large number of roadway or line sources. The goals have been expanded to include development of an improved algorithm for effective plume transport velocity, more accurate and efficient line and area source dispersion algorithms, and, recently, a more realistic and computationally efficient algorithm for plume depletion due to particle dry deposition. A performance evaluation of the SPPAM has been carried out using the 1983 PNL dual tracer experimental data. The results show the model predictions to be in good agreement with observations in both plume advection-dispersion and particulate matter (PM) depletion by dry deposition. For PM2.5 impact analysis, the SPPAM has been applied to the Rubidoux area of California. Emission sources included in the modeling analysis are: paved road dust, diesel vehicular exhaust, gasoline vehicular exhaust, and tire wear particles from a large number of roadways in Rubidoux and surrounding areas. For the selected modeling periods, the predicted primary PM2.5 to primary PM10 concentration ratios for the Rubidoux sampling station are in the range of 0.39-0.46. The organic fractions of the primary PM2.5 impacts are estimated to be at least 34-41%. Detailed modeling results indicate that the relatively high organic fractions are primarily due to the proximity of heavily traveled roadways north of the sampling station. The predictions are influenced by a number of factors; principal among them are the receptor locations relative to major roadways, the volume and composition of traffic on these roadways, and the prevailing meteorological conditions.
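The line- and area-source algorithms in models of this family build on the standard Gaussian plume solution; a minimal point-source version with ground reflection is sketched below (dispersion coefficients are passed in rather than computed from stability classes, and no deposition or depletion term is included, so this is not the SPPAM formulation itself):

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m^3) at a receptor.

    q: emission rate (g/s), u: wind speed (m/s), y: crosswind distance (m),
    z: receptor height (m), h: effective release height (m),
    sigma_y, sigma_z: dispersion coefficients (m) at the receptor's downwind distance.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))   # image source at the ground
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
```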
View Angle Effects on MODIS Snow Mapping in Forests
NASA Technical Reports Server (NTRS)
Xin, Qinchuan; Woodcock, Curtis E.; Liu, Jicheng; Tan, Bin; Melloh, Rae A.; Davis, Robert E.
2012-01-01
Binary snow maps and fractional snow cover data are provided routinely from MODIS (Moderate Resolution Imaging Spectroradiometer). This paper investigates how the wide observation angles of MODIS influence the current snow mapping algorithm in forested areas. Theoretical modeling results indicate that large view zenith angles (VZA) can lead to underestimation of fractional snow cover (FSC) by reducing the amount of the ground surface that is viewable through forest canopies, and by increasing uncertainties during the gridding of MODIS data. At the end of the MODIS scan line, the total modeled error can be as much as 50% for FSC. Empirical analysis of MODIS/Terra snow products in four forest sites shows high fluctuation in FSC estimates on consecutive days. In addition, the normalized difference snow index (NDSI) values, which are the primary input to the MODIS snow mapping algorithms, decrease as VZA increases at the site level. At the pixel level, NDSI values have higher variances, and are correlated with the normalized difference vegetation index (NDVI) in snow covered forests. These findings are consistent with our modeled results, and imply that consideration of view angle effects could improve MODIS snow monitoring in forested areas.
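The modeled underestimation is driven largely by how the viewable gap fraction through a forest canopy shrinks with view zenith angle; a Beer's-law style sketch of that geometric effect is given below (spherical leaf-angle assumption, illustrative parameters, not the paper's canopy model):

```python
import numpy as np

def viewable_gap_fraction(lai, vza_deg, g=0.5):
    """Probability of seeing the ground through a canopy along the view direction.
    g = 0.5 corresponds to a spherical leaf angle distribution."""
    theta = np.radians(vza_deg)
    return np.exp(-g * lai / np.cos(theta))

# Gap fraction (and hence the apparent snow fraction under the canopy) drops
# sharply toward the MODIS scan edge (~65 degrees VZA):
print(viewable_gap_fraction(lai=2.0, vza_deg=np.array([0, 30, 60, 65])))
```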
Multi-objective Optimization of Pulsed Gas Metal Arc Welding Process Using Neuro NSGA-II
NASA Astrophysics Data System (ADS)
Pal, Kamal; Pal, Surjya K.
2018-05-01
Weld quality is a critical issue in fabrication industries where products are custom-designed. Multi-objective optimization yields a set of solutions along the Pareto-optimal front. Mathematical regression-model-based optimization methods are often found to be inadequate for highly non-linear arc welding processes. Thus, various global evolutionary approaches such as artificial neural networks and genetic algorithms (GA) have been developed. The present work applies the elitist non-dominated sorting GA (NSGA-II) to optimization of the pulsed gas metal arc welding process using back propagation neural network (BPNN) based weld quality feature models. The primary objective, to maintain butt-joint weld quality, is maximization of tensile strength with minimum plate distortion. BPNN has been used to compute the fitness of each solution after adequate training, whereas the NSGA-II algorithm generates the optimum solutions for the two conflicting objectives. Welding experiments have been conducted on low carbon steel using response surface methodology. The Pareto-optimal front with three ranked solutions after 20 generations was considered the best, with no further improvement. Both joint strength and transverse shrinkage were found to be markedly improved over the design-of-experiments results, as confirmed by the validated Pareto-optimal solutions.
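The core of NSGA-II that produces the ranked Pareto front is fast non-dominated sorting; a compact generic version is sketched below (minimization form, so an objective such as tensile strength would be negated), independent of the BPNN fitness models used in the paper:

```python
def fast_nondominated_sort(objectives):
    """objectives: list of tuples, all to be minimized. Returns a list of fronts
    (front 0 = Pareto-optimal set), each a list of solution indices."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    n = len(objectives)
    dominated_by = [set() for _ in range(n)]   # solutions each one dominates
    dom_count = [0] * n                        # how many dominate this solution
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].add(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# e.g. fast_nondominated_sort([(-620, 1.8), (-600, 1.2), (-580, 2.5)])
# -> [[0, 1], [2]]  (negated strength, distortion)
```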
The transcultural diabetes nutrition algorithm: a canadian perspective.
Gougeon, Réjeanne; Sievenpiper, John L; Jenkins, David; Yale, Jean-François; Bell, Rhonda; Després, Jean-Pierre; Ransom, Thomas P P; Camelon, Kathryn; Dupre, John; Kendall, Cyril; Hegazi, Refaat A; Marchetti, Albert; Hamdy, Osama; Mechanick, Jeffrey I
2014-01-01
The Transcultural Diabetes Nutrition Algorithm (tDNA) is a clinical tool designed to facilitate implementation of therapeutic lifestyle recommendations for people with or at risk for type 2 diabetes. Cultural adaptation of evidence-based clinical practice guidelines (CPG) recommendations is essential to address varied patient populations within and among diverse regions worldwide. The Canadian version of tDNA supports and targets behavioural changes to improve nutritional quality and to promote regular daily physical activity consistent with Canadian Diabetes Association CPG, as well as channelling the concomitant management of obesity, hypertension, dyslipidemia, and dysglycaemia in primary care. Assessing glycaemic index (GI) (the ranking of foods by effects on postprandial blood glucose levels) and glycaemic load (GL) (the product of mean GI and the total carbohydrate content of a meal) will be a central part of the Canadian tDNA and complement nutrition therapy by facilitating glycaemic control using specific food selections. This component can also enhance other metabolic interventions, such as reducing the need for antihyperglycaemic medication and improving the effectiveness of weight loss programs. This tDNA strategy will be adapted to the cultural specificities of the Canadian population and incorporated into the tDNA validation methodology.
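The GL definition used here is the product of GI and available carbohydrate, conventionally scaled per 100; a one-line sketch for meal-level GL (the food values in the example are illustrative):

```python
def glycaemic_load(items):
    """items: iterable of (glycaemic_index, available_carbohydrate_g) per food."""
    return sum(gi * carbs / 100.0 for gi, carbs in items)

# e.g. a meal with one food of GI 55 / 30 g carbs and one of GI 70 / 15 g carbs:
print(glycaemic_load([(55, 30), (70, 15)]))   # -> 27.0
```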
NASA Astrophysics Data System (ADS)
Xie, S.; Protat, A.; Zhao, C.
2013-12-01
One primary goal of the US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program is to obtain and retrieve cloud microphysical properties from detailed cloud observations using ground-based active and passive remote sensors. However, there is large uncertainty in the retrieved cloud property products. Studies have shown that the uncertainty could arise from instrument limitations, measurement errors, sampling errors, retrieval algorithm deficiencies in assumptions, as well as inconsistent input data and constraints used by different algorithms. To quantify the uncertainty in cloud retrievals, a scientific focus group, Quantification of Uncertainties In Cloud Retrievals (QUICR), was recently created by the DOE Atmospheric System Research (ASR) program. This talk will provide an overview of the recent research activities conducted within QUICR and discuss its current collaborations with the European cloud retrieval community and future plans. The goal of QUICR is to develop a methodology for characterizing and quantifying uncertainties in current and future ARM cloud retrievals. The Work at LLNL was performed under the auspices of the U. S. Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344. LLNL-ABS-641258.
NASA Astrophysics Data System (ADS)
Townsend, Philip; Kruger, Eric; Wang, Zhihui; Singh, Aditya
2017-04-01
Imaging spectroscopy exhibits great potential for mapping foliar functional traits that are impractical or expensive to regularly measure on the ground, and are essentially impossible to characterize comprehensively across space. Specifically, the high information content in spectroscopic data enables us to identify narrow spectral features that are associated with vegetation primary and secondary biochemistry (nutrients, pigments, defensive compounds), leaf structure (e.g., leaf mass per area), canopy structure, and physiological capacity. Ultimately, knowledge of the variability in such traits is critical to understanding vegetation productivity, as well as responses to climatic variability, disturbances, pests and pathogens. The great challenge to the use of imaging spectroscopy to supplement trait databases is the development of trait retrieval approaches that are broadly applicable within and between ecosystem types. Here, we outline how we are using the US National Ecological Observatory Network (NEON) to prototype the scaling and comparison of trait distributions derived from field measurements and imagery. We find that algorithms to map traits from imagery are robust across ecosystem types, when controlling for physiognomy and vegetation percent cover, and that among all vegetation types, the chemometric algorithms utilize similar features for mapping of traits.
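The chemometric trait-mapping approach referred to above is most commonly partial least squares regression from canopy reflectance to a field-measured trait; a generic scikit-learn sketch follows (the file name, band naming, component count, and trait column are illustrative assumptions, not the NEON processing chain):

```python
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

plots = pd.read_csv("plot_spectra_and_traits.csv")   # hypothetical field/airborne table
X = plots.filter(regex="^refl_").to_numpy()          # reflectance per wavelength
y = plots["leaf_nitrogen_pct"].to_numpy()             # trait measured on the ground

pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
print("cross-validated R2:",
      1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2))
pls.fit(X, y)   # then apply band-by-band to imagery to map the trait
```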
Recursive flexible multibody system dynamics using spatial operators
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1992-01-01
This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.
A Performance Evaluation of Lightning-NO Algorithms in CMAQ
In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...
Algorithmic Case Pedagogy, Learning and Gender
ERIC Educational Resources Information Center
Bromley, Robert; Huang, Zhenyu
2015-01-01
Great investment has been made in developing algorithmically-based cases within online homework management systems. This has been done because publishers are convinced that textbook adoption decisions are influenced by the incorporation of these systems within their products. These algorithmic assignments are thought to promote learning while…
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
Assimilating bio-optical glider data during a phytoplankton bloom in the southern Ross Sea
NASA Astrophysics Data System (ADS)
Kaufman, Daniel E.; Friedrichs, Marjorie A. M.; Hemmings, John C. P.; Smith, Walker O., Jr.
2018-01-01
The Ross Sea is a region characterized by high primary productivity in comparison to other Antarctic coastal regions, and its productivity is marked by considerable variability both spatially (1-50 km) and temporally (days to weeks). This variability presents a challenge for inferring phytoplankton dynamics from observations that are limited in time or space, which is often the case due to logistical limitations of sampling. To better understand the spatiotemporal variability in Ross Sea phytoplankton dynamics and to determine how restricted sampling may skew dynamical interpretations, high-resolution bio-optical glider measurements were assimilated into a one-dimensional biogeochemical model adapted for the Ross Sea. The assimilation of data from the entire glider track using the micro-genetic and local search algorithms in the Marine Model Optimization Testbed improves the model-data fit by ˜ 50 %, generating rates of integrated primary production of 104 g C m-2 yr-1 and export at 200 m of 27 g C m-2 yr-1. Assimilating glider data from three different latitudinal bands and three different longitudinal bands results in minimal changes to the simulations, improves the model-data fit with respect to unassimilated data by ˜ 35 %, and confirms that analyzing these glider observations as a time series via a one-dimensional model is reasonable on these scales. Whereas assimilating the full glider data set produces well-constrained simulations, assimilating subsampled glider data at a frequency consistent with cruise-based sampling results in a wide range of primary production and export estimates. These estimates depend strongly on the timing of the assimilated observations, due to the presence of high mesoscale variability in this region. Assimilating surface glider data subsampled at a frequency consistent with available satellite-derived data results in 40 % lower carbon export, primarily resulting from optimized rates generating more slowly sinking diatoms. This analysis highlights the need for the strategic consideration of the impacts of data frequency, duration, and coverage when combining observations with biogeochemical modeling in regions with strong mesoscale variability.
MODIS Snow and Sea Ice Products
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Riggs, George A.; Salomonson, Vincent V.
2004-01-01
In this chapter, we describe the suite of Earth Observing System (EOS) Moderate-Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua snow and sea ice products. Global, daily products, developed at Goddard Space Flight Center, are archived and distributed through the National Snow and Ice Data Center at various resolutions and on different grids useful for different communities. Snow products include binary snow cover, snow albedo, and in the near future, fraction of snow in a 500-m pixel. Sea ice products include ice extent determined with two different algorithms, and sea ice surface temperature. The algorithms used to develop these products are described. Both the snow and sea ice products, available since February 24, 2000, are useful for modelers. Validation of the products is also discussed.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
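To make the analytic/iterative distinction concrete, the sketch below shows a SIRT-style iterative update for a toy linearized imaging model A x = b, where A is an assumed projection matrix and x the image; it illustrates the iterative family described above and is not the reconstruction codes referenced in the abstract.

```python
import numpy as np

def sirt(A, b, n_iter=100, relax=0.9):
    """SIRT-style iterative reconstruction for a linearized imaging model A @ x = b.

    A : (n_rays, n_pixels) projection matrix (assumed known imaging model)
    b : (n_rays,) measured projections
    Returns the reconstructed image estimate x.
    """
    # Row/column sums act as simple preconditioners; guard against division by zero.
    row_sum = np.maximum(A.sum(axis=1), 1e-12)
    col_sum = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum             # compare the model to the data
        x = x + relax * (A.T @ residual) / col_sum   # back-project the mismatch
        x = np.clip(x, 0.0, None)                    # enforce non-negative attenuation
    return x
```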
Preparing for ICESat-2: Simulated Geolocated Photon Data for Cryospheric Data Products
NASA Astrophysics Data System (ADS)
Harbeck, K.; Neumann, T.; Lee, J.; Hancock, D.; Brenner, A. C.; Markus, T.
2017-12-01
ICESat-2 will carry NASA's next-generation laser altimeter, ATLAS (Advanced Topographic Laser Altimeter System), which is designed to measure changes in ice sheet height, sea ice freeboard, and vegetation canopy height. There is a critical need for data that simulate what certain ICESat-2 science data products will "look like" post-launch in order to aid the data product development process. There are several sources for simulated photon-counting lidar data, including data from NASA's MABEL (Multiple Altimeter Beam Experimental Lidar) instrument, and M-ATLAS (MABEL data that has been scaled geometrically and radiometrically to be more similar to that expected from ATLAS). From these sources, we are able to develop simulated granules of the geolocated photon cloud product; also referred to as ATL03. These simulated ATL03 granules can be further processed into the upper-level data products that report ice sheet height, sea ice freeboard, and vegetation canopy height. For ice sheet height (ATL06) and sea ice height (ATL07) simulations, both MABEL and M-ATLAS data products are used. M-ATLAS data use ATLAS engineering design cases for signal and background noise rates over certain surface types, and also provides large vertical windows of data for more accurate calculations of atmospheric background rates. MABEL data give a more accurate representation of background noise rates over areas of water (i.e., melt ponds, crevasses or sea ice leads) versus land or solid ice. Through a variety of data manipulation procedures, we provide a product that mimics the appearance and parameter characterization of ATL03 data granules. There are three primary goals for generating this simulated ATL03 dataset: (1) allowing end users to become familiar with using the large photon cloud datasets that will be the primary science data product from ICESat-2, (2) the process ensures that ATL03 data can flow seamlessly through upper-level science data product algorithms, and (3) the process ensures parameter traceability through ATL03 and upper-level data products. We will present a summary of how simulated data products are generated, the cryospheric data product applications for this simulated data (specifically ice sheet height and sea ice freeboard), and where these simulated datasets are available to the ICESat-2 data user community.
Quick Vegas: Improving Performance of TCP Vegas for High Bandwidth-Delay Product Networks
NASA Astrophysics Data System (ADS)
Chan, Yi-Cheng; Lin, Chia-Liang; Ho, Cheng-Yuan
An important issue in designing a TCP congestion control algorithm is that it should allow the protocol to quickly adjust the end-to-end communication rate to the bandwidth on the bottleneck link. However, the TCP congestion control may function poorly in high bandwidth-delay product networks because of its slow response with large congestion windows. In this paper, we propose an enhanced version of TCP Vegas called Quick Vegas, in which we present an efficient congestion window control algorithm for a TCP source. Our algorithm improves the slow-start and congestion avoidance techniques of original Vegas. Simulation results show that Quick Vegas significantly improves the performance of connections as well as remaining fair when the bandwidth-delay product increases.
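As a rough illustration of the Vegas-style congestion-avoidance logic that Quick Vegas refines, the sketch below compares expected and actual throughput to adjust the congestion window; the alpha/beta thresholds and the one-packet window update are generic textbook values, not the Quick Vegas parameters proposed in the paper.

```python
def vegas_congestion_avoidance(cwnd, base_rtt, observed_rtt, alpha=2.0, beta=4.0):
    """One Vegas-style congestion-avoidance step (illustrative, window in packets).

    cwnd         : current congestion window (packets)
    base_rtt     : smallest RTT observed on the connection (seconds)
    observed_rtt : most recent RTT sample (seconds)
    """
    expected = cwnd / base_rtt              # throughput if no queueing occurred
    actual = cwnd / observed_rtt            # throughput actually achieved
    diff = (expected - actual) * base_rtt   # estimated packets queued in the network
    if diff < alpha:
        cwnd += 1.0                         # little queueing: probe for more bandwidth
    elif diff > beta:
        cwnd -= 1.0                         # queue building up: back off
    return max(cwnd, 2.0)                   # keep a minimal window
```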
Data Continuity of Aerosol Index from Suomi NPP/OMPS Observations
NASA Astrophysics Data System (ADS)
Ahn, C.; Torres, O.; Tiruchirapalli, R.; Taylor, S.; Jethva, H. T.
2017-12-01
Since the development of the Aerosol Index (AI) concept from Nimbus-7 TOMS near-UV measurements, the AI product has been widely used by the aerosol community in a variety of applications including monitoring of the sources and sinks of carbonaceous and desert dust aerosols. The AI uses a pair of near-UV radiances to detect the presence of absorbing particles even over bright backgrounds such as clouds and snow/ice covered areas. Since its inception in the mid-1990s, the AI has been available as a by-product of the total ozone product. Due to the implementation of a new total ozone algorithm, the standard AI product will no longer be available starting in 2018. To assure the continuity of the AI record, we have developed an improved AI algorithm that uses a better forward modeling method of the top of atmosphere radiances. The enhanced modeling capability accounts for the scattering of clouds using Mie theory, and includes wavelength- and angle-dependent surface reflectance effects. By doing this, we have significantly reduced angle-dependent false AI signals such as sun glint over the ocean. We will discuss the improved AI algorithm and present the long-term AI record from various UV spaceborne sensors including TOMS, OMI, OMPS, and EPIC with consistent AI algorithms, followed by future plans for near-real-time processing and operational production of a new OMPS AI product.
NASA Astrophysics Data System (ADS)
Shah, Rahul H.
Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, two fields that have been shown to be most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization on joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% in savings for the cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration compared to the Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule. The Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, whereas the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computational effort.
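The abstract compares a modified Particle Swarm Optimization with a Genetic Algorithm; as a minimal, generic illustration of the PSO mechanics (not the author's modified variant or cost-per-part model), the sketch below minimizes an arbitrary cost function over continuous, box-bounded decision variables.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0), seed=0):
    """Plain particle swarm optimization of `cost` over a box-bounded space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # per-particle best positions
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # swarm-wide best position
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: a toy quadratic stands in for the cost-per-part objective.
best_x, best_cost = pso_minimize(lambda x: float(np.sum((x - 0.3) ** 2)), dim=5)
```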
Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology
NASA Astrophysics Data System (ADS)
Morgan, T. W.; Thurgood, R. L.
1984-05-01
This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.
NASA Astrophysics Data System (ADS)
Patrick, C. E.; Aliaga, L.; Bashyal, A.; Bellantoni, L.; Bercellie, A.; Betancourt, M.; Bodek, A.; Bravar, A.; Budd, H.; Caceres v., G. F. R.; Carneiro, M. F.; Chavarria, E.; da Motta, H.; Dytman, S. A.; Díaz, G. A.; Felix, J.; Fields, L.; Fine, R.; Gago, A. M.; Galindo, R.; Gallagher, H.; Ghosh, A.; Gran, R.; Han, J. Y.; Harris, D. A.; Henry, S.; Hurtado, K.; Jena, D.; Kleykamp, J.; Kordosky, M.; Le, T.; Lu, X.-G.; Maher, E.; Manly, S.; Mann, W. A.; Marshall, C. M.; McFarland, K. S.; McGowan, A. M.; Messerly, B.; Miller, J.; Mislivec, A.; Morfín, J. G.; Mousseau, J.; Naples, D.; Nelson, J. K.; Norrick, A.; Nowak, G. M.; Nuruzzaman, Paolone, V.; Perdue, G. N.; Peters, E.; Ramírez, M. A.; Ransome, R. D.; Ray, H.; Ren, L.; Rodrigues, P. A.; Ruterbories, D.; Schellman, H.; Solano Salinas, C. J.; Sultana, M.; Sánchez Falero, S.; Teklu, A. M.; Valencia, E.; Wolcott, J.; Wospakrik, M.; Yaeggy, B.; Zhang, D.; Miner ν A Collaboration
2018-03-01
We present double-differential measurements of antineutrino charged-current quasielastic scattering in the MINERvA detector. This study improves on a previous single-differential measurement by using updated reconstruction algorithms and interaction models and provides a complete description of observed muon kinematics in the form of a double-differential cross section with respect to muon transverse and longitudinal momentum. We include in our signal definition zero-meson final states arising from multinucleon interactions and from resonant pion production followed by pion absorption in the primary nucleus. We find that model agreement is considerably improved by a model tuned to MINERvA inclusive neutrino scattering data that incorporates nuclear effects such as weak nuclear screening and two-particle, two-hole enhancements.
The influence of grazing on surface climatological variables of tallgrass prairie
NASA Technical Reports Server (NTRS)
Seastedt, T. R.; Dyer, M. I.; Turner, Clarence L.
1992-01-01
Mass and energy exchange between most grassland canopies and the atmosphere is mediated by grazing activities. Ambient temperatures can be increased or decreased by grazers. Data assembled from simulated grazing experiments on Konza Prairie Research Natural Area and from observations on adjacent pastures grazed by cattle show significant changes in primary production, nutrient content, and bidirectional reflectance characteristics as a function of grazing intensity. The purpose of this research was to provide algorithms that would allow incorporation of grazing effects into models of energy budgets using remote sensing procedures. The approach involved: (1) linking empirical measurements of plant biomass and grazing intensities to remotely sensed canopy reflectance, and (2) using a higher resolution, mechanistic grazing model to derive plant ecophysiological parameters that influence reflectance and other surface climatological variables.
Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near real time AIRS/AMSU observations. These products are then made available to the scientific community for research purposes. The products include twice daily measurements of the Earth's three dimensional global temperature, water vapor, and ozone distribution as well as cloud cover. In addition, accurate twice daily measurements of the earth's land and ocean temperatures are derived and reported. Scientists use this important set of observations for two major applications. They provide important information for climate studies of global and regional variability and trends of different aspects of the earth's atmosphere. They also provide information for researchers to improve the skill of weather forecasting. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products. This heightens their utility for use in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality controlled AIRS only retrievals, called "Version 5 AO retrievals", which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given, and shown to be significantly better than those of the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The multi-robot task allocation problem is to assign a relatively large number of tasks to a relatively small number of robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed, and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
A Time Series of Sea Surface Nitrate and Nitrate based New Production in the Global Oceans
NASA Astrophysics Data System (ADS)
Goes, J. I.; Fargion, G. S.; Gomes, H. R.; Franz, B. A.
2014-12-01
With support from NASA's MEaSUREs program, we are developing algorithms for two innovative satellite-based Earth Science Data Records (ESDRs), one Sea Surface Nitrate (SSN) and the other, Nitrate based new Production (NnP). Newly developed algorithms will be applied to mature ESDRs of Chlorophyll a and SST available from NASA, to generate maps of SSN and NnP. Our proposed ESDRs offer the potential of greatly improving our understanding of the role of the oceans in global carbon cycling, earth system processes and climate change, especially for regions and seasons which are inaccessible to traditional shipboard studies. They also provide an innovative means for validating and improving coupled ecosystem models that currently rely on global maps of nitrate generated from multi-year data sets. To aid in our algorithm development efforts and to ensure that our ESDRs are truly global in nature, we are currently in the process of assembling a large database of nutrients from oceanographic institutions all over the world. Once our products are developed and our algorithms are fine-tuned, large-scale data production will be undertaken in collaboration with NASA's Ocean Biology Processing Group (OBPG), which will make the data publicly available first as evaluation products and then as mature ESDRs.
Production of τ τ jj final states at the LHC and the TauSpinner algorithm: the spin-2 case
NASA Astrophysics Data System (ADS)
Bahmani, M.; Kalinowski, J.; Kotlarski, W.; Richter-Wąs, E.; Wąs, Z.
2018-01-01
The TauSpinner algorithm is a tool that allows one to modify the physics model of Monte Carlo generated samples due to changed assumptions of event production dynamics, but without the need of re-generating events. With the help of weights, τ-lepton production or decay processes can be modified according to a new physics model. In a recent paper, a new version, TauSpinner ver.2.0.0, has been presented, which includes a provision for introducing non-standard states and couplings and studying their effects in the vector-boson-fusion processes by exploiting the spin correlations of τ-lepton pair decay products in processes where final states include also two hard jets. In the present paper we document how this can be achieved taking as an example the non-standard spin-2 state that couples to Standard Model particles and tree-level matrix elements with complete helicity information included for the parton-parton scattering amplitudes into a τ-lepton pair and two outgoing partons. This implementation is prepared as the external (user-provided) routine for the TauSpinner algorithm. It exploits amplitudes generated by MadGraph5 and adapted to the TauSpinner algorithm format. Consistency tests of the implemented matrix elements, the re-weighting algorithm and numerical results for observables sensitive to τ polarisation are presented.
A biconjugate gradient type algorithm on massively parallel architectures
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Hochbruck, Marlis
1991-01-01
The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation is presented of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the other standard Lanczos process where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.
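For readers who want to experiment with QMR on a non-Hermitian system, SciPy ships a serial QMR solver; the snippet below is a small usage sketch on an assumed test matrix and does not reproduce the s-step look-ahead Lanczos parallelization discussed in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# Small non-symmetric tridiagonal test system A x = b.
n = 200
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = qmr(A, b)   # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```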
Evaluation of SMAP Level 2 Soil Moisture Algorithms Using SMOS Data
NASA Technical Reports Server (NTRS)
Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann; Shi, J. C.
2011-01-01
The objectives of the SMAP (Soil Moisture Active Passive) mission are global measurements of soil moisture and land freeze/thaw state at 10 km and 3 km resolution, respectively. SMAP will provide soil moisture with a spatial resolution of 10 km with a 3-day revisit time at an accuracy of 0.04 m3/m3 [1]. In this paper we contribute to the development of the Level 2 soil moisture algorithm that is based on passive microwave observations by exploiting Soil Moisture Ocean Salinity (SMOS) satellite observations and products. SMOS brightness temperatures provide a global real-world, rather than simulated, test input for the SMAP radiometer-only soil moisture algorithm. Output of the potential SMAP algorithms will be compared to both in situ measurements and SMOS soil moisture products. The investigation will result in enhanced SMAP pre-launch algorithms for soil moisture.
Recognition of strong earthquake-prone areas with a single learning class
NASA Astrophysics Data System (ADS)
Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.
2017-05-01
This article presents a new Barrier recognition algorithm with learning, designed for recognition of earthquake-prone areas. In comparison to the Crust (Kora) algorithm, used by the classical EPA approach, the Barrier algorithm proceeds with learning just on one "pure" high-seismic class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is used for recognition of earthquake-prone areas with M ≥ 6.0 in the Caucasus region. Comparative analysis of the Crust and Barrier algorithms justifies their productive coherence.
Cook, B.D.; Bolstad, P.V.; Naesset, E.; Anderson, R. Scott; Garrigues, S.; Morisette, J.T.; Nickeson, J.; Davis, K.J.
2009-01-01
Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.
NASA Technical Reports Server (NTRS)
Cook, Bruce D.; Bolstad, Paul V.; Naesset, Erik; Anderson, Ryan S.; Garrigues, Sebastian; Morisette, Jeffrey T.; Nickeson, Jaime; Davis, Kenneth J.
2009-01-01
Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.
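The MODIS GPP algorithm referenced in these two records is a light-use-efficiency model; the sketch below shows its general form, with an option to drop the vapor pressure deficit scalar as the authors did. The ramp breakpoints and maximum efficiency are illustrative placeholders, not the biome-specific values from the MODIS lookup tables.

```python
def ramp(x, x_min, x_max):
    """Linear 0-1 scalar between x_min and x_max, clamped outside the range.

    Passing x_min > x_max yields a descending ramp (scalar falls as x rises).
    """
    return min(max((x - x_min) / (x_max - x_min), 0.0), 1.0)

def gpp_lue(par, fpar, tmin_c, vpd_pa, eps_max=0.001, use_vpd_scalar=True):
    """Light-use-efficiency GPP estimate (kgC m-2 d-1), MODIS-like in form only.

    par     : incident PAR (MJ m-2 d-1)
    fpar    : fraction of PAR absorbed by the canopy (0-1)
    tmin_c  : daily minimum temperature (deg C)
    vpd_pa  : daytime vapor pressure deficit (Pa)
    eps_max : maximum light-use efficiency (kgC MJ-1), biome dependent
    """
    f_tmin = ramp(tmin_c, -8.0, 10.0)                               # illustrative breakpoints
    f_vpd = ramp(vpd_pa, 3500.0, 650.0) if use_vpd_scalar else 1.0  # drop to mimic the study
    return eps_max * f_tmin * f_vpd * fpar * par
```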
Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time
NASA Astrophysics Data System (ADS)
Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.
2018-03-01
A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of Machining, Assembly and Differentiation Stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.
Reflectance model for quantifying chlorophyll a in the presence of productivity degradation products
NASA Technical Reports Server (NTRS)
Carder, K. L.; Hawes, S. K.; Steward, R. G.; Baker, K. A.; Smith, R. C.; Mitchell, B. G.
1991-01-01
A reflectance model developed to estimate chlorophyll a concentrations in the presence of marine colored dissolved organic matter, pheopigments, detritus, and bacteria is presented. Nomograms and lookup tables are generated to describe the effects of different mixtures of chlorophyll a and these degradation products on the R(412):R(443) and R(443):R(565) remote-sensing reflectance or irradiance reflectance ratios. These are used to simulate the accuracy of potential ocean color satellite algorithms, assuming that atmospheric effects have been removed. For the California Current upwelling and offshore regions, with chlorophyll a not greater than 1.3 mg/cu m, the average error for chlorophyll a retrievals derived from irradiance reflectance data for degradation product-rich areas was reduced from +/-61 percent to +/-23 percent by application of an algorithm using two reflectance ratios rather than the commonly used algorithm applying a single reflectance ratio.
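As a generic illustration of the reflectance-ratio approach described above (not the authors' nomogram/lookup-table algorithm), the sketch below applies a power-law fit to a blue-green reflectance ratio; the coefficients are placeholders that would normally come from a regression against in situ chlorophyll.

```python
import numpy as np

def chlorophyll_from_ratio(r443, r565, a=0.3, b=-2.5):
    """Estimate chlorophyll a (mg m-3) from a remote-sensing reflectance ratio.

    r443, r565 : remote-sensing reflectance at 443 nm and 565 nm
    a, b       : illustrative power-law coefficients, chl = a * (R443/R565)**b
    """
    ratio = np.asarray(r443, dtype=float) / np.asarray(r565, dtype=float)
    return a * ratio ** b

# A second ratio such as R412/R443 could be used, as in the abstract, to help
# separate chlorophyll from degradation products before applying the fit.
print(chlorophyll_from_ratio(0.004, 0.002))
```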
Muyoyeta, Monde; Moyo, Maureen; Kasese, Nkatya; Ndhlovu, Mapopa; Milimo, Deborah; Mwanza, Winfridah; Kapata, Nathan; Schaap, Albertus; Godfrey Faussett, Peter; Ayles, Helen
2015-01-01
The current cost of Xpert MTB/RIF (Xpert) consumables is such that algorithms are needed to select which patients to prioritise for testing with Xpert. The aim was to evaluate two algorithms for prioritisation of Xpert in primary health care settings in a high TB and HIV burden setting. Consecutive, presumptive TB patients with a cough of any duration were offered either an Xpert or a fluorescence microscopy (FM) test depending on their CXR score or HIV status. In one facility, sputa from patients with an abnormal CXR were tested with Xpert and those with a normal CXR were tested with FM ("CXR algorithm"). CXR was scored automatically using a Computer Aided Diagnosis (CAD) program. In the other facility, patients who were HIV positive were tested using Xpert and those who were HIV negative were tested with FM ("HIV algorithm"). Of 9482 individuals pre-screened with CXR, Xpert detected TB in 2090/6568 (31.8%) with an abnormal CXR, and FM was AFB positive in 8/2455 (0.3%) with a normal CXR. Of 4444 pre-screened by HIV status, Xpert detected TB in 508/2265 (22.4%) HIV positive and FM was AFB positive in 212/1920 (11.0%) HIV negative individuals. The notification rate of new bacteriologically confirmed TB increased from 366 to 620/100,000/yr and from 145 to 261/100,000/yr at the CXR and HIV algorithm sites respectively. The median time to starting TB treatment at the CXR site compared to the HIV algorithm site was 1 (IQR 1-3) day and 3 (IQR 2-5) days, respectively (p<0.0001). Use of Xpert in a resource-limited setting at primary care level in conjunction with pre-screening tests reduced the number of Xpert tests performed. The routine use of Xpert resulted in additional cases of confirmed TB patients starting treatment. However, there was no increase in absolute numbers of patients starting TB treatment. Same day diagnosis and treatment commencement was achieved for both bacteriologically confirmed and empirically diagnosed patients where Xpert was used in conjunction with CXR.
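The two prioritisation rules evaluated in this study reduce to simple branching on a pre-screening result; the sketch below captures that logic, with the CAD scoring of the chest X-ray abstracted to a boolean since the abstract does not state the numeric cut-off used.

```python
def choose_tb_test(algorithm, cxr_abnormal=None, hiv_positive=None):
    """Return the assay to prioritise for a presumptive TB patient.

    algorithm    : "CXR" or "HIV", matching the two facility algorithms
    cxr_abnormal : bool, CAD-scored chest X-ray abnormality (CXR algorithm)
    hiv_positive : bool, HIV status (HIV algorithm)
    """
    if algorithm == "CXR":
        return "Xpert" if cxr_abnormal else "FM"   # abnormal CXR -> Xpert
    if algorithm == "HIV":
        return "Xpert" if hiv_positive else "FM"   # HIV positive -> Xpert
    raise ValueError("unknown algorithm")
```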
24 CFR 3282.362 - Production Inspection Primary Inspection Agencies (IPIAs).
Code of Federal Regulations, 2010 CFR
2010-04-01
Excerpt from 24 CFR 3282.362, Primary Inspection Agencies, "Production Inspection Primary Inspection Agencies (IPIAs)": "... in production which fails to conform to the design or where the design is not specific, to the ..."
Face recognition using tridiagonal matrix enhanced multivariance products representation
NASA Astrophysics Data System (ADS)
Özay, Evrim Korkmaz
2017-01-01
This study aims to retrieve face images from a database according to a target face image. For this purpose, Tridiagonal Matrix Enhanced Multivariance Products Representation (TMEMPR) is taken into consideration. TMEMPR is a recursive algorithm based on Enhanced Multivariance Products Representation (EMPR). TMEMPR decomposes a matrix into three components which are a matrix of left support terms, a tridiagonal matrix of weight parameters for each recursion, and a matrix of right support terms, respectively. In this sense, there is an analogy between Singular Value Decomposition (SVD) and TMEMPR. However TMEMPR is a more flexible algorithm since its initial support terms (or vectors) can be chosen as desired. Low computational complexity is another advantage of TMEMPR because the algorithm has been constructed with recursions of certain arithmetic operations without requiring any iteration. The algorithm has been trained and tested with ORL face image database with 400 different grayscale images of 40 different people. TMEMPR's performance has been compared with SVD's performance as a result.
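TMEMPR itself is not spelled out in the abstract, but the SVD baseline it is benchmarked against can be sketched compactly: project gallery images and the query onto the leading singular subspace and rank by distance. The sketch below illustrates that comparison baseline, not the TMEMPR recursion, and all parameter values are illustrative.

```python
import numpy as np

def retrieve_faces(gallery, query, rank=40, top_k=5):
    """Rank gallery images by similarity to a query in an SVD subspace.

    gallery : (n_images, n_pixels) matrix of flattened grayscale faces
    query   : (n_pixels,) flattened query face
    """
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Leading right singular vectors span the retrieval subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:rank]                       # (rank, n_pixels)
    coords = centered @ basis.T             # gallery coordinates
    q = (query - mean) @ basis.T            # query coordinates
    dists = np.linalg.norm(coords - q, axis=1)
    return np.argsort(dists)[:top_k]        # indices of the closest matches
```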
David P. Turner; William D. Ritts; Warren B. Cohen; Thomas K. Maeirsperger; Stith T. Gower; Al A. Kirschbaum; Steve W. Runnings; Maosheng Zhaos; Steven C. Wofsy; Allison L. Dunn; Beverly E. Law; John L. Campbell; Walter C. Oechel; Hyo Jung Kwon; Tilden P. Meyers; Eric E. Small; Shirley A. Kurc; John A. Gamon
2005-01-01
Operational monitoring of global terrestrial gross primary production (GPP) and net primary production (NPP) is now underway using imagery from the satellite-borne Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Evaluation of MODIS GPP and NPP products will require site-level studies across a range of biomes, with close attention to numerous scaling...
Yan, Zhixiang; Li, Tianxue; Lv, Pin; Li, Xiang; Zhou, Chen; Yang, Xinghao
2013-06-01
There is a growing need both clinically and experimentally to improve the determination of the blood levels of multiple chemical constituents in herbal medicines. The conventional multiple reaction monitoring (cMRM), however, is not well suited for multi-component determination and could not provide qualitative information for identity confirmation. Here we apply a dynamic triggered MRM (DtMRM) algorithm for the quantification of 20 constituents in an herbal prescription Bu-Zhong-Yi-Qi-Tang (BZYQT) in rat plasma. Dynamic MRM (DMRM) dramatically reduced the number of concurrent MRM transitions that are monitored during each MS scan. This advantage has been enhanced with the addition of triggered MRM (tMRM) for simultaneous confirmation, which maximizes the dwell time in the primary MRM quantitation phase, and also acquires sufficient MRM data to create a composite product ion spectrum. By allowing optimized collision energy for each product ion and maximizing dwell times, tMRM is significantly more sensitive and reliable than conventional product ion scanning. The DtMRM approach provides much higher sensitivity and reproducibility than cMRM. Copyright © 2013 Elsevier B.V. All rights reserved.
Calibration of the SBUV version 8.6 ozone data product
NASA Astrophysics Data System (ADS)
DeLand, M. T.; Taylor, S. L.; Huang, L. K.; Fisher, B. L.
2012-11-01
This paper describes the calibration process for the Solar Backscatter Ultraviolet (SBUV) Version 8.6 (V8.6) ozone data product. Eight SBUV instruments have flown on NASA and NOAA satellites since 1970, and a continuous data record is available since November 1978. The accuracy of ozone trends determined from these data depends on the calibration and long-term characterization of each instrument. V8.6 calibration adjustments are determined at the radiance level, and do not rely on comparison of retrieved ozone products with other instruments. The primary SBUV instrument characterization is based on prelaunch laboratory tests and dedicated on-orbit calibration measurements. We supplement these results with "soft" calibration techniques using carefully chosen subsets of radiance data and information from the retrieval algorithm output to validate each instrument's calibration. The estimated long-term uncertainty in albedo is approximately ±0.8-1.2% (1σ) for most of the instruments. The overlap between these instruments and the Shuttle SBUV (SSBUV) data allows us to intercalibrate the SBUV instruments to produce a coherent V8.6 data set covering more than 32 yr. The estimated long-term uncertainty in albedo is less than 3% over this period.
Liu, R; Chen, J M; Liu, J; Deng, F; Sun, R
2007-11-01
An operational system was developed for mapping the leaf area index (LAI) for carbon cycle models from the moderate resolution imaging spectroradiometer (MODIS) data. The LAI retrieval algorithm is based on Deng et al. [2006. Algorithm for global leaf area index retrieval using satellite imagery. IEEE Transactions on Geoscience and Remote Sensing, 44, 2219-2229], which uses the 4-scale radiative transfer model [Chen, J.M., Leblancs, 1997. A 4-scale bidirectional reflection model based on canopy architecture. IEEE Transactions on Geoscience and Remote Sensing, 35, 1316-1337] to simulate the relationship of LAI with vegetated surface reflectance measured from space for various spectral bands and solar and view angles. This algorithm has been integrated to the MODISoft platform, a software system designed for processing MODIS data, to generate 250 m, 500 m and 1 km resolution LAI products covering all of China from MODIS MOD02 or MOD09 products. The multi-temporal interpolation method was implemented to remove the residual cloud and other noise in the final LAI product so that it can be directly used in carbon models without further processing. The retrieval uncertainties from land cover data were evaluated using five different data sets available in China. The results showed that mean LAI discrepancies can reach 27%. The current product was also compared with the NASA MODIS MOD15 LAI product to determine the agreement and disagreement of two different product series. LAI values in the MODIS product were found to be 21% larger than those in the new product. These LAI products were compared against ground TRAC measurements in forests in Qilian Mountain and Changbaishan. On average, the new LAI product agrees with the field measurement in Changbaishan within 2%, but the MODIS product is positively biased by about 20%. In Qilian Mountain, where forests are sparse, the new product is lower than field measurements by about 38%, while the MODIS product is larger by about 65%.
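The multi-temporal interpolation step mentioned above can be illustrated with a one-pixel example: cloud-contaminated composites are flagged as missing and filled by linear interpolation along the time axis. This is a generic sketch, not the operational implementation described in the abstract.

```python
import numpy as np

def fill_lai_gaps(lai, valid):
    """Fill cloud/noise-contaminated values in a single-pixel LAI time series.

    lai   : (n_dates,) LAI values, arbitrary where contaminated
    valid : (n_dates,) boolean mask, True where the retrieval is trusted
    """
    t = np.arange(len(lai), dtype=float)
    if valid.sum() < 2:
        return lai.copy()                    # not enough clear looks to interpolate
    filled = lai.copy()
    filled[~valid] = np.interp(t[~valid], t[valid], lai[valid])
    return filled

lai = np.array([0.8, 1.2, 9.9, 2.4, 9.9, 3.1])   # 9.9 marks residual cloud
valid = np.array([True, True, False, True, False, True])
print(fill_lai_gaps(lai, valid))
```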
Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules
Tervo, Christopher J.; Reed, Jennifer L.
2014-01-01
While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criteria (e.g., production rate at a mutant’s maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320
NASA Astrophysics Data System (ADS)
Baldwin, Daniel; Tschudi, Mark; Pacifici, Fabio; Liu, Yinghui
2017-08-01
Two independent VIIRS-based Sea Ice Concentration (SIC) products are validated against SIC as estimated from Very High Spatial Resolution Imagery for several VIIRS overpasses. The 375 m resolution VIIRS SIC from the Interface Data Processing Segment (IDPS) SIC algorithm is compared against estimates made from 2 m DigitalGlobe (DG) WorldView-2 imagery and also against estimates created from 10 cm Digital Mapping System (DMS) camera imagery. The 750 m VIIRS SIC from the Enterprise SIC algorithm is compared against DG imagery. The IDPS vs. DG comparisons reveal that, due to algorithm issues, many of the IDPS SIC retrievals were falsely assigned ice-free values when the pixel was clearly over ice. These false values increased the validation bias and RMS statistics. The IDPS vs. DMS comparisons were largely over ice-covered regions and did not demonstrate the false retrieval issue. The validation results show that products from both the IDPS and Enterprise algorithms were within or very close to the 10% accuracy (bias) specifications in both the non-melting and melting conditions, but only products from the Enterprise algorithm met the 25% specifications for the uncertainty (RMS).
NASA Astrophysics Data System (ADS)
Lee, Kwon-Ho; Kim, Wonkook
2017-04-01
The Geostationary Ocean Color Imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial (250 m for the local area and 1 km for the full disk) and spectral (13 bands) resolution than the current operational GOCI-I mission. GOCI-II will be launched in 2018. This study presents an algorithm under development for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances as proxy data from a parameterized radiative transfer code for the 13 GOCI-II bands. Based on the proxy data, the algorithm performs cloud masking, gas absorption correction, aerosol inversion, and aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS Level 2 surface reflectance product (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will proceed to fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. The atmospherically corrected surface reflectance product will be a standard GOCI-II product after launch.
Matsuzaki, Shin-Ichiro S; Suzuki, Kenta; Kadoya, Taku; Nakagawa, Megumi; Takamura, Noriko
2018-06-09
Nutrient supply is a key bottom-up control of phytoplankton primary production in lake ecosystems. Top-down control via grazing pressure by zooplankton also constrains primary production, and primary production may simultaneously affect zooplankton. Few studies have addressed these bidirectional interactions. We used convergent cross-mapping (CCM), a numerical test of causal associations, to quantify the presence and direction of the causal relationships among environmental variables (light availability, surface water temperature, NO3-N, and PO4-P), phytoplankton community composition, primary production, and the abundances of five functional zooplankton groups (large cladocerans, small cladocerans, rotifers, calanoids, and cyclopoids) in Lake Kasumigaura, a shallow, hypereutrophic lake in Japan. CCM suggested that primary production was causally influenced by NO3-N and phytoplankton community composition; there was no detectable evidence of a causal effect of zooplankton on primary production. Our results also suggest that rotifers and cyclopoids were forced by primary production, and cyclopoids were further influenced by rotifers. However, our CCM suggested that primary production was weakly influenced by rotifers (i.e., a bidirectional interaction). These findings may suggest complex linkages between nutrients, primary production, and rotifers and cyclopoids, a pattern that has not been previously detected or has been neglected. We used linear regression analysis to examine the relationships between the zooplankton community and pond smelt (Hypomesus nipponensis), the most abundant planktivore and the most important commercial fish species in Lake Kasumigaura. The relative abundance of pond smelt was significantly and positively correlated with the abundances of rotifers and cyclopoids, which were causally influenced by primary production. This finding suggests that bottom-up linkages between nutrients, primary production, and zooplankton abundance might be a key mechanism supporting high planktivore abundance in eutrophic lakes. Because increases in primary production and cyanobacteria blooms are likely to occur simultaneously in hypereutrophic lakes, our study highlights the need for ecosystem management to resolve the conflict between good water quality and high fishery production. This article is protected by copyright. All rights reserved.
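Convergent cross-mapping as used here can be sketched briefly: build a delay embedding of the putative effect variable and test how well its nearest-neighbour structure reconstructs the putative cause. The embedding dimension, delay, and weighting below are generic defaults, not the settings used for the Lake Kasumigaura series.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lag_embed(series, E, tau):
    """Delay-embed a 1-D series into E-dimensional state vectors."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (E - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(E)])

def cross_map_skill(effect, cause, E=3, tau=1):
    """Cross-map skill: predict `cause` from the shadow manifold of `effect`.

    If `cause` truly forces `effect`, this skill should rise toward 1 as
    longer libraries are used (the convergence criterion of CCM).
    """
    M = lag_embed(effect, E, tau)
    target = np.asarray(cause, dtype=float)[(E - 1) * tau:]
    nn = NearestNeighbors(n_neighbors=E + 2).fit(M)
    dist, idx = nn.kneighbors(M)
    dist, idx = dist[:, 1:], idx[:, 1:]                     # drop the self-match
    w = np.exp(-dist / np.maximum(dist[:, [0]], 1e-12))     # simplex-style weights
    w /= w.sum(axis=1, keepdims=True)
    est = (w * target[idx]).sum(axis=1)
    return np.corrcoef(est, target)[0, 1]                   # Pearson skill
```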
Interannual Variation in Phytoplankton Class-Specific Primary Production at a Global Scale
NASA Technical Reports Server (NTRS)
Rousseaux, Cecile Severine; Gregg, Watson W.
2014-01-01
We used the NASA Ocean Biogeochemical Model (NOBM) combined with remote sensing data via assimilation to evaluate the contribution of 4 phytoplankton groups to the total primary production. First we assessed the contribution of each phytoplankton group to the total primary production at a global scale for the period 1998-2011. Globally, diatoms were the group that contributed the most to the total phytoplankton production (~50%, the equivalent of 20 PgC y-1). Coccolithophores and chlorophytes each contributed about 20% (~7 PgC y-1) of the total primary production and cyanobacteria represented about 10% (~4 PgC y-1) of the total primary production. Primary production by diatoms was highest in the high latitudes (>45°) and in major upwelling systems (Equatorial Pacific and Benguela system). We then assessed interannual variability of this group-specific primary production over the period 1998-2011. Globally the annual relative contribution of each phytoplankton group to the total primary production varied by a maximum of 4% (1-2 PgC y-1). We assessed the effects of climate variability on the class-specific primary production using global (i.e. Multivariate El Niño Index, MEI) and regional climate indices (e.g. Southern Annular Mode (SAM), Pacific Decadal Oscillation (PDO) and North Atlantic Oscillation (NAO)). Most interannual variability occurred in the Equatorial Pacific and was associated with climate variability as indicated by significant correlation (p < 0.05) between the MEI and the class-specific primary production from all groups except coccolithophores. In the Atlantic, climate variability as indicated by NAO was significantly correlated to the primary production of 2 out of the 4 groups in the North Central Atlantic (diatoms/cyanobacteria) and in the North Atlantic (chlorophytes and coccolithophores). We found that climate variability as indicated by SAM had only a limited effect on the class-specific primary production in the Southern Ocean. These results provide a modeling and data assimilation perspective to phytoplankton partitioning of primary production and contribute to our understanding of the dynamics of the carbon cycle in the oceans at a global scale.
Interannual Variation in Phytoplankton Primary Production at a Global Scale
NASA Technical Reports Server (NTRS)
Rousseaux, Cecile Severine; Gregg, Watson W.
2013-01-01
We used the NASA Ocean Biogeochemical Model (NOBM) combined with remote sensing data via assimilation to evaluate the contribution of four phytoplankton groups to the total primary production. First, we assessed the contribution of each phytoplankton group to the total primary production at a global scale for the period 1998-2011. Globally, diatoms contributed the most to the total phytoplankton production (~50%, the equivalent of 20 PgC·y-1). Coccolithophores and chlorophytes each contributed approximately 20% (~7 PgC·y-1) of the total primary production and cyanobacteria represented about 10% (~4 PgC·y-1) of the total primary production. Primary production by diatoms was highest in the high latitudes (>40°) and in major upwelling systems (Equatorial Pacific and Benguela system). We then assessed interannual variability of this group-specific primary production over the period 1998-2011. Globally the annual relative contribution of each phytoplankton group to the total primary production varied by a maximum of 4% (1-2 PgC·y-1). We assessed the effects of climate variability on group-specific primary production using global (i.e., Multivariate El Niño Index, MEI) and "regional" climate indices (e.g., Southern Annular Mode (SAM), Pacific Decadal Oscillation (PDO) and North Atlantic Oscillation (NAO)). Most interannual variability occurred in the Equatorial Pacific and was associated with climate variability as indicated by significant correlation (p < 0.05) between the MEI and the group-specific primary production from all groups except coccolithophores. In the Atlantic, climate variability as indicated by NAO was significantly correlated to the primary production of 2 out of the 4 groups in the North Central Atlantic (diatoms/cyanobacteria) and in the North Atlantic (chlorophytes and coccolithophores). We found that climate variability as indicated by SAM had only a limited effect on group-specific primary production in the Southern Ocean. These results provide a modeling and data assimilation perspective to phytoplankton partitioning of primary production and contribute to our understanding of the dynamics of the carbon cycle in the oceans at a global scale.
NASA Astrophysics Data System (ADS)
Yusupov, L. R.; Klochkova, K. V.; Simonova, L. A.
2017-09-01
The paper presents a methodology for modeling the chemical composition of a composite material via a genetic algorithm in order to optimize the manufacturing process of products. It also presents algorithms for methods based on an intelligent system for vermicular graphite iron design.
The Algorithm Theoretical Basis Document for Level 1A Processing
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Hancock, David W., III
2012-01-01
The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time-ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units, along with the required scale factors, bias values, and coefficients. It also defines the required quality assurance and browse products.
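As a generic, hypothetical sketch of the kind of counts-to-engineering-units conversion the document specifies (the scale factor and bias below are placeholders, not the GLAS coefficients defined in the ATBD):

```python
# Hypothetical sketch: convert raw instrument counts to engineering units
# with a linear calibration. Scale and bias are placeholder values, not
# the GLAS coefficients defined in the ATBD.
import numpy as np

def counts_to_engineering(counts, scale, bias):
    """Apply a linear calibration: engineering_value = scale * counts + bias."""
    return scale * np.asarray(counts, dtype=float) + bias

raw_counts = np.array([1023, 2047, 4095])
temperature_c = counts_to_engineering(raw_counts, scale=0.0125, bias=-20.0)
print(temperature_c)
```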
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule for seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for the ten scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting for solving the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
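As an illustrative aside (not the authors' algorithm), the Pareto-optimal solutions mentioned above are simply the non-dominated points of a multi-objective problem; a minimal non-dominance filter for a two-objective minimization, with hypothetical objective values, looks like this:

```python
# Minimal sketch: extract the Pareto-optimal (non-dominated) set for a
# two-objective minimization problem. Objective values are hypothetical.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# (total throughput time, total labor hours) for candidate seru configurations
candidates = [(12.0, 30.0), (10.0, 35.0), (11.0, 28.0), (13.0, 27.0), (10.5, 33.0)]
print(pareto_front(candidates))
```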
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines that offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that requires long computation times. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical applicability in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used with tardiness minimization as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.
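As an illustrative sketch only (not the authors' implementation, which handles dual resources and routing flexibility), an indirect chromosome can be a job permutation that is decoded into a schedule whose total tardiness serves as the fitness; processing times and due dates below are hypothetical:

```python
# Minimal, hypothetical sketch: decode a permutation chromosome (a job order)
# into a schedule on a single machine/worker pair and compute total tardiness.
# Real dual-resource, flexible-routing decoding is considerably more involved.
import random

processing_time = {"J1": 4, "J2": 2, "J3": 6, "J4": 3}   # hypothetical hours
due_date        = {"J1": 5, "J2": 4, "J3": 14, "J4": 9}  # hypothetical due dates

def total_tardiness(chromosome):
    t, tardiness = 0, 0
    for job in chromosome:                 # each job starts when the resource frees up
        t += processing_time[job]
        tardiness += max(0, t - due_date[job])
    return tardiness

def mutate(chromosome):
    """Swap mutation: exchange two randomly chosen positions."""
    c = chromosome[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

order = ["J1", "J2", "J3", "J4"]
print(total_tardiness(order), total_tardiness(mutate(order)))
```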
NASA GPM GV Science Implementation
NASA Technical Reports Server (NTRS)
Petersen, W. A.
2009-01-01
Pre-launch algorithm development & post-launch product evaluation: The GPM GV paradigm moves beyond traditional direct validation/comparison activities by incorporating improved algorithm physics & model applications (end-to-end validation) in the validation process. Three approaches: 1) National Network (surface): Operational networks to identify and resolve first-order discrepancies (e.g., bias) between satellite and ground-based precipitation estimates. 2) Physical Process (vertical column): Cloud system and microphysical studies geared toward testing and refinement of physically-based retrieval algorithms. 3) Integrated (4-dimensional): Integration of satellite precipitation products into coupled prediction models to evaluate strengths/limitations of satellite precipitation products.
9 CFR 113.51 - Requirements for primary cells used for production of biologics.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Requirements for primary cells used... VECTORS STANDARD REQUIREMENTS Ingredient Requirements § 113.51 Requirements for primary cells used for production of biologics. Primary cells used to prepare biological products shall be derived from normal...
9 CFR 113.51 - Requirements for primary cells used for production of biologics.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Requirements for primary cells used... VECTORS STANDARD REQUIREMENTS Ingredient Requirements § 113.51 Requirements for primary cells used for production of biologics. Primary cells used to prepare biological products shall be derived from normal...
9 CFR 113.51 - Requirements for primary cells used for production of biologics.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Requirements for primary cells used... VECTORS STANDARD REQUIREMENTS Ingredient Requirements § 113.51 Requirements for primary cells used for production of biologics. Primary cells used to prepare biological products shall be derived from normal...
9 CFR 113.51 - Requirements for primary cells used for production of biologics.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Requirements for primary cells used... VECTORS STANDARD REQUIREMENTS Ingredient Requirements § 113.51 Requirements for primary cells used for production of biologics. Primary cells used to prepare biological products shall be derived from normal...
9 CFR 113.51 - Requirements for primary cells used for production of biologics.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Requirements for primary cells used... VECTORS STANDARD REQUIREMENTS Ingredient Requirements § 113.51 Requirements for primary cells used for production of biologics. Primary cells used to prepare biological products shall be derived from normal...
Energy-saving EPON Bandwidth Allocation Algorithm Supporting ONU's Sleep Mode
NASA Astrophysics Data System (ADS)
Zhang, Yinfa; Ren, Shuai; Liao, Xiaomin; Fang, Yuanyuan
2014-09-01
A new bandwidth allocation algorithm is presented that combines the merits of the IPACT algorithm and the cyclic DBA algorithm, building on the DBA algorithm for the ONU's sleep mode. Simulation results indicate that, compared with an ONU in normal mode, the ONU's sleep mode can save about 74% of the energy. The new algorithm has a smaller average packet delay and queue length in the upstream direction. In the downstream direction, the average packet delay of the new algorithm is less than the polling cycle Tcycle, and the average queue length is less than the product of Tcycle and the maximum link rate. The new algorithm achieves a better compromise between energy saving and ensuring quality of service.
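As a small worked example of the downstream queue-length bound stated above (the polling cycle and link rate are hypothetical values, not those of the simulation):

```python
# Hypothetical worked example of the downstream bound quoted above:
# average queue length stays below Tcycle * maximum link rate.
t_cycle_s = 1.5e-3            # hypothetical polling cycle of 1.5 ms
max_link_rate_bps = 1e9       # hypothetical 1 Gb/s downstream link

queue_length_bound_bits = t_cycle_s * max_link_rate_bps
print(f"queue-length bound: {queue_length_bound_bits / 8 / 1024:.0f} KiB")
```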
Land Surface Temperature Measurements from EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zheng-Ming
2004-01-01
This report summarizes the accomplishments made by the MODIS LST (Land-Surface Temperature) group at the University of California, Santa Barbara, under NASA contract. Version 1 of the MODIS Land-Surface Temperature Algorithm Theoretical Basis Document (ATBD) was reviewed in June 1994, version 2 was reviewed in November 1994, version 3.1 in August 1996, and version 3.3 was updated in April 1999. Based on the ATBD, two LST algorithms were developed: one is the generalized split-window algorithm and the other is the physics-based day/night LST algorithm. These two LST algorithms were implemented into the production generation executive code (PGE16) for the daily standard MODIS LST products at level-2 (MOD11_L2) and level-3 (MOD11A1 at 1 km resolution and MOD11B1 at 5 km resolution). PGE codes for the 8-day 1 km LST product (MOD11A2) and the daily, 8-day and monthly LST products on 0.05-degree latitude/longitude climate model grids (CMG) were also delivered. Four to six field campaigns were conducted each year since 2000 to validate the daily LST products generated by PGE16 and the calibration accuracies of the MODIS TIR bands used for the LST/emissivity retrieval from versions 2-4 of Terra MODIS data and versions 3-4 of Aqua MODIS data. Validation results from temperature-based and radiance-based methods indicate that the MODIS LST accuracy is better than 1°C in most clear-sky cases in the range from -10°C to 58°C. One of the major lessons learned from multi-year temporal analysis of the consistent V4 daily Terra MODIS LST products in 2000-2003 over some selected target areas, including lakes, snow/ice fields, and semi-arid sites, is that there are variable numbers of cloud-contaminated LSTs in the MODIS LST products depending on surface elevation, land cover types, and atmospheric conditions. A cloud-screening scheme with constraints on spatial and temporal variations in LSTs was developed to remove cloud-contaminated LSTs. The 5 km LST product was indirectly validated through comparisons to the 1 km LST product. Twenty-three papers related to the LST research work were published in journals over the last decade.
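For orientation, the generalized split-window approach referenced above combines the brightness temperatures of the two thermal bands with band-averaged emissivity. The sketch below uses a commonly cited generalized split-window form with placeholder coefficients and input values; it is illustrative only and does not reproduce the operational MODIS coefficients, which vary with viewing angle and atmospheric sub-ranges:

```python
# Schematic sketch of a generalized split-window LST retrieval.
# Coefficients b0..b6 are placeholders, not operational MODIS values.
def split_window_lst(t31, t32, emis31, emis32, b=(1.0, 1.0, 0.3, -0.3, 5.0, 0.5, -1.0)):
    eps  = 0.5 * (emis31 + emis32)          # mean band emissivity
    deps = emis31 - emis32                  # band emissivity difference
    b0, b1, b2, b3, b4, b5, b6 = b
    return (b0
            + (b1 + b2 * (1 - eps) / eps + b3 * deps / eps**2) * 0.5 * (t31 + t32)
            + (b4 + b5 * (1 - eps) / eps + b6 * deps / eps**2) * 0.5 * (t31 - t32))

# Hypothetical brightness temperatures (K) and emissivities for MODIS bands 31/32
print(split_window_lst(t31=295.2, t32=294.1, emis31=0.975, emis32=0.972))
```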
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.; Gionfriddo, Thomas A.
1994-01-01
In this study there were two primary tasks. The first was to develop an algorithm for quantifying the distortion in a sonic boom. Such an algorithm should be somewhat automatic, with minimal human intervention. Once the algorithm was developed, it was used to test the hypothesis that sonic boom distortion is caused by atmospheric turbulence. This hypothesis testing was the second task. Using readily available sonic boom data, we statistically tested whether there was a correlation between the sonic boom distortion and the distance a boom traveled through atmospheric turbulence.
Preliminary flight evaluation of an engine performance optimization algorithm
NASA Technical Reports Server (NTRS)
Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.
1991-01-01
A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128 engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.
Combined approach to the Hubble Space Telescope wave-front distortion analysis
NASA Astrophysics Data System (ADS)
Roddier, Claude; Roddier, Francois
1993-06-01
Stellar images taken by the HST at various focus positions have been analyzed to estimate wave-front distortion. Rather than using a single algorithm, we found that better results were obtained by combining the advantages of various algorithms. For the planetary camera, the most accurate algorithms consistently gave a spherical aberration of -0.290-micron rms with a maximum deviation of 0.005 micron. Evidence was found that the spherical aberration is essentially produced by the primary mirror. The illumination in the telescope pupil plane was reconstructed and evidence was found for a slight camera misalignment.
Faster Fourier transformation: The algorithm of S. Winograd
NASA Technical Reports Server (NTRS)
Zohar, S.
1979-01-01
The new DFT algorithm of S. Winograd is developed and presented in detail. This is an algorithm which uses about 1/5 of the number of multiplications used by the Cooley-Tukey algorithm and is applicable to any order which is a product of relatively prime factors from the following list: 2,3,4,5,7,8,9,16. The algorithm is presented in terms of a series of tableaus which are convenient, compact, graphical representations of the sequence of arithmetic operations in the corresponding parts of the algorithm. Using these in conjunction with included Tables makes it relatively easy to apply the algorithm and evaluate its performance.
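As a small illustration of the applicability condition stated above (not part of the original report), the following checks whether a transform length factors into pairwise relatively prime factors drawn from that list:

```python
# Check whether a DFT length N is a product of pairwise relatively prime
# factors drawn from {2, 3, 4, 5, 7, 8, 9, 16}, the applicability condition
# stated above for the Winograd algorithm.
from itertools import combinations
from math import gcd, prod

FACTORS = (2, 3, 4, 5, 7, 8, 9, 16)

def winograd_applicable(n):
    for r in range(1, len(FACTORS) + 1):
        for combo in combinations(FACTORS, r):
            pairwise_coprime = all(gcd(a, b) == 1 for a, b in combinations(combo, 2))
            if pairwise_coprime and prod(combo) == n:
                return combo
    return None

print(winograd_applicable(5040))   # 5040 = 5 * 7 * 9 * 16
print(winograd_applicable(11))     # None: 11 is not covered
```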
Model of a coral reef ecosystem
NASA Astrophysics Data System (ADS)
Atkinson, Marlin J.; Grigg, Richard W.
1984-08-01
The ECOPATH model for French Frigate Shoals estimates the benthic plant production (net primary production in kg wet weight) required to support the atoll food chain. In this section we estimate the benthic net primary production and net community production of the atoll based on metabolism studies of reef flat, knolls, and lagoon communities at French Frigate Shoals, Hawaii. Community metabolism was measured during winter and summer. The reef communities at French Frigate Shoals exhibited patterns and rates of organic carbon production and calcification similar to other reefs in the world. The estimate of net primary production is 6.1·10⁶ kg wet weight km⁻² year⁻¹ ± 50%, a value remarkably close to the estimate by the ECOPATH model of 4.3·10⁶ kg wet weight km⁻² year⁻¹. Our estimate of net community production, or the amount of carbon not consumed by the benthos, was high: approximately 15% of the net primary production. Model results indicate that about 5% of net primary production is passed up the food chain to mobile predators. This suggests about 10% of net primary production (~6% of gross primary production) may be permanently lost to the system via sediment burial or export offshore.
The EB Factory: Fundamental Stellar Astrophysics with Eclipsing Binary Stars Discovered by Kepler
NASA Astrophysics Data System (ADS)
Stassun, Keivan
Eclipsing binaries (EBs) are key laboratories for determining the fundamental properties of stars. EBs are therefore foundational objects for constraining stellar evolution models, which in turn are central to determinations of stellar mass functions, of exoplanet properties, and many other areas. The primary goal of this proposal is to mine the Kepler mission light curves for: (1) EBs that include a subgiant star, from which precise ages can be derived and which can thus serve as critically needed age benchmarks; and within these, (2) long-period EBs that include low-mass M stars or brown dwarfs, which are increasingly becoming the focus of exoplanet searches, but for which there are the fewest available fundamental mass-radius-age benchmarks. A secondary goal of this proposal is to develop an end-to-end computational pipeline -- the Kepler EB Factory -- that allows automatic processing of Kepler light curves for EBs, from period finding, to object classification, to determination of EB physical properties for the most scientifically interesting EBs, and finally to accurate modeling of these EBs for detailed tests and benchmarking of theoretical stellar evolution models. We will integrate the most successful algorithms into a single, cohesive workflow environment, and apply this 'Kepler EB Factory' to the full public Kepler dataset to find and characterize new "benchmark grade" EBs, and will disseminate both the enhanced data products from this pipeline and the pipeline itself to the broader NASA science community. The proposed work responds directly to two of the defined Research Areas of the NASA Astrophysics Data Analysis Program (ADAP), specifically Research Area #2 (Stellar Astrophysics) and Research Area #9 (Astrophysical Databases). To be clear, our primary goal is the fundamental stellar astrophysics that will be enabled by the discovery and analysis of relatively rare, benchmark-grade EBs in the Kepler dataset. At the same time, to enable this goal will require bringing a suite of extant and new custom algorithms to bear on the Kepler data, and thus our development of the Kepler EB Factory represents a value-added product that will allow the widest scientific impact of the information locked within the vast reservoir of the Kepler light curves.
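As an illustrative sketch of the period-finding stage only (synthetic data, not Kepler light curves, and not necessarily the period-search algorithm the pipeline adopts), a Lomb-Scargle periodogram can recover the dominant periodicity of an unevenly sampled light curve:

```python
# Minimal sketch of a period-finding step on a synthetic, roughly periodic
# light curve (not Kepler data); the EB Factory's actual algorithms may differ.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 90.0, 2000))              # days of irregular sampling
true_period = 3.7                                       # hypothetical orbital period, days
flux = 1.0 - 0.02 * np.cos(2 * np.pi * t / true_period) + rng.normal(0, 0.002, t.size)

frequency, power = LombScargle(t, flux).autopower(maximum_frequency=2.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.2f} d")
```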
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.
New Operational Algorithms for Particle Data from Low-Altitude Polar-Orbiting Satellites
NASA Astrophysics Data System (ADS)
Machol, J. L.; Green, J. C.; Rodriguez, J. V.; Onsager, T. G.; Denig, W. F.
2010-12-01
As part of the algorithm development effort started under the former National Polar-orbiting Operational Environmental Satellite System (NPOESS) program, the NOAA Space Weather Prediction Center (SWPC) is developing operational algorithms for the next generation of low-altitude polar-orbiting weather satellites. This presentation reviews the two new algorithms on which SWPC has focused: Energetic Ions (EI) and Auroral Energy Deposition (AED). Both algorithms take advantage of the improved performance of the Space Environment Monitor - Next (SEM-N) sensors over earlier SEM instruments flown on NOAA Polar Orbiting Environmental Satellites (POES). The EI algorithm iterates a piecewise power law fit in order to derive a differential energy flux spectrum for protons with energies from 10-250 MeV. The algorithm provides the data in physical units (MeV/cm2-s-str-keV) instead of just counts/s as was done in the past, making the data generally more useful and easier to integrate into higher level products. The AED algorithm estimates the energy flux deposited into the atmosphere by precipitating low- and medium-energy charged particles. The AED calculations include particle pitch-angle distributions, information that was not available from POES. This presentation also describes methods that we are evaluating for creating higher level products that would specify the global particle environment based on real time measurements.
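As a hypothetical illustration of the power-law fitting mentioned above (a single segment only; the operational EI algorithm's iterative piecewise fit is not reproduced here, and the channel energies and fluxes below are placeholders):

```python
# Hypothetical sketch: fit one power-law segment j(E) = A * E**(-gamma)
# to channel fluxes in log-log space. The operational EI algorithm iterates
# a piecewise fit over several segments; that machinery is omitted here.
import numpy as np

energy_mev = np.array([10.0, 30.0, 60.0, 100.0, 250.0])   # channel energies (MeV)
flux = np.array([5.0e2, 6.0e1, 1.6e1, 6.0e0, 1.0e0])      # placeholder fluxes (arbitrary units)

slope, intercept = np.polyfit(np.log(energy_mev), np.log(flux), 1)
gamma, amplitude = -slope, np.exp(intercept)
print(f"j(E) ~ {amplitude:.3g} * E^(-{gamma:.2f})")
```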
A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs
NASA Astrophysics Data System (ADS)
Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng
2011-12-01
With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next-generation access systems. In recent years, research in OFDMA Passive Optical Networks (PON) has experienced extraordinary development owing to their large capacity and scheduling flexibility. Although much work has been done to solve hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. In order to support QoS on OFDMA PON systems, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper. Per-stream QoS is supported in this algorithm. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.
NASA Technical Reports Server (NTRS)
Fromm, Michael; Pitts, Michael; Alfred, Jerome
2000-01-01
This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.
Laser Remote Sensing From ISS: CATS Cloud and Aerosol Level 2 Data Products (Heritage Edition)
NASA Technical Reports Server (NTRS)
Rodier, Sharon; Vaughan, Mark; Palm, Steve; Jensen, Mike; Yorks, John; McGill, Matt; Trepte, Chip; Murray, Tim; Lee, Kam-Pui
2015-01-01
The Cloud-Aerosol Transport System (CATS) instrument was developed at NASA's Goddard Space Flight Center (GSFC) and deployed to the International Space Station (ISS) on 10 January 2015. CATS is mounted on the Japanese Experiment Module's Exposed Facility (JEM-EF) and will provide near-continuous, altitude-resolved measurements of clouds and aerosols in the Earth's atmosphere. The CATS ISS orbit path provides a unique opportunity to capture the full diurnal cycle of cloud and aerosol development and transport, allowing for studies that are not possible with the lidar aboard the CALIPSO platform, which flies in the sun-synchronous A-Train orbit. One of the primary science objectives of CATS is to continue the CALIPSO aerosol and cloud profile data record to provide continuity of lidar climate observations during the transition from CALIPSO to EarthCARE. To accomplish this, the CATS project at GSFC and the CALIPSO project at NASA's Langley Research Center (LaRC) are closely collaborating to develop and deliver a full suite of CALIPSO-like level 2 data products that will be produced using the newly acquired CATS level 1B data whenever CATS is operating in science mode 1. The CALIPSO mission is now well into its ninth year of on-orbit operations and has developed a robust set of mature and well-validated science algorithms to retrieve the spatial and optical properties of clouds and aerosols from multi-wavelength lidar backscatter signals. By leveraging both new and existing NASA technical resources, this joint effort by the CATS and CALIPSO teams will deliver validated lidar data sets to the user community at the earliest possible opportunity. The science community will have access to two sets of CATS Level 2 data products. The "Operational" data products will be produced by the GSFC CATS team utilizing the new instrument capabilities (e.g., multiple FOVs and 1064 nm depolarization), while the "Heritage" data products will be created using the existing CALIPSO algorithms, the CATS 532 nm channels and the total 1064 nm channel. The development of the CATS "Heritage" level 2 software and data is described below, along with some initial results with operational data.
NASA Astrophysics Data System (ADS)
Riggs, George A.; Hall, Dorothy K.; Román, Miguel O.
2017-10-01
Knowledge of the distribution, extent, duration and timing of snowmelt is critical for characterizing the Earth's climate system and its changes. As a result, snow cover is one of the Global Climate Observing System (GCOS) essential climate variables (ECVs). Consistent, long-term datasets of snow cover are needed to study interannual variability and snow climatology. The NASA snow-cover datasets generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua spacecraft and the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) are NASA Earth System Data Records (ESDR). The objective of the snow-cover detection algorithms is to optimize the accuracy of mapping snow-cover extent (SCE) and to minimize snow-cover detection errors of omission and commission using automated, globally applied algorithms to produce SCE data products. Advancements in snow-cover mapping have been made with each of the four major reprocessings of the MODIS data record, which extends from 2000 to the present. MODIS Collection 6 (C6; https://nsidc.org/data/modis/data_summaries) and VIIRS Collection 1 (C1; https://doi.org/10.5067/VIIRS/VNP10.001) represent the state-of-the-art global snow-cover mapping algorithms and products for NASA Earth science. There were many revisions made in the C6 algorithms which improved snow-cover detection accuracy and information content of the data products. These improvements have also been incorporated into the NASA VIIRS snow-cover algorithms for C1. Both information content and usability were improved by including the Normalized Difference Snow Index (NDSI) and a quality assurance (QA) data array of algorithm processing flags in the data product, along with the SCE map. The increased data content allows flexibility in using the datasets for specific regions and end-user applications. Though there are important differences between the MODIS and VIIRS instruments (e.g., the VIIRS 375 m native resolution compared to MODIS 500 m), the snow detection algorithms and data products are designed to be as similar as possible so that the 16+ year MODIS ESDR of global SCE can be extended into the future with the S-NPP VIIRS snow products and with products from future Joint Polar Satellite System (JPSS) platforms. These NASA datasets are archived and accessible through the NASA Distributed Active Archive Center at the National Snow and Ice Data Center in Boulder, Colorado.
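The NDSI referenced above is a simple band ratio of a visible (green) and a shortwave-infrared reflectance; a minimal sketch follows (placeholder reflectances, and note that band selection and the accompanying screening tests vary by sensor and collection):

```python
# Minimal sketch of the NDSI computation used in snow-cover detection:
# NDSI = (green - SWIR) / (green + SWIR). Reflectances are placeholders;
# band choices (e.g., MODIS bands 4 and 6 on Terra) differ by sensor.
import numpy as np

green_reflectance = np.array([0.70, 0.35, 0.10])   # hypothetical reflectances
swir_reflectance  = np.array([0.08, 0.30, 0.09])

ndsi = (green_reflectance - swir_reflectance) / (green_reflectance + swir_reflectance)
print(np.round(ndsi, 2))    # high positive NDSI values are indicative of snow
```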
Adapting MODIS Dust Mask Algorithm to Suomi NPP VIIRS for Air Quality Applications
NASA Astrophysics Data System (ADS)
Ciren, P.; Liu, H.; Kondragunta, S.; Laszlo, I.
2012-12-01
Despite pollution reduction control strategies enforced by the Environmental Protection Agency (EPA), large regions of the United States are often under exceptional events such as biomass burning and dust outbreaks that lead to non-attainment of particulate matter standards. This has prompted the National Weather Service (NWS) to provide smoke and dust forecast guidance to the general public. The monitoring and forecasting of dust outbreaks relies on satellite data. Currently, Aqua/MODIS (Moderate Resolution Imaging Spectroradiometer) and Terra/MODIS provide the measurements needed to derive dust mask and Aerosol Optical Thickness (AOT) products. The newly launched Suomi NPP VIIRS (Visible/Infrared Imaging Radiometer Suite) instrument has a Suspended Matter (SM) product that indicates the presence of dust, smoke, volcanic ash, sea salt, and unknown aerosol types in a given pixel. The algorithm to identify dust is different over land and ocean, but for both, the information comes from the AOT retrieval algorithm. Over land, the selection of the dust aerosol model in the AOT retrieval algorithm indicates the presence of dust, and over ocean a fine mode fraction smaller than 20% indicates dust. Preliminary comparisons of VIIRS SM to the CALIPSO Vertical Feature Mask (VFM) aerosol type product indicate that the Probability of Detection (POD) is at ~10% and the product is not mature for operational use. As an alternate approach, the NESDIS dust mask algorithm developed for NWS dust forecast verification, which uses MODIS deep blue, visible, and mid-IR channels with spectral differencing techniques and spatial variability tests, was applied to VIIRS radiances. This algorithm relies on the spectral contrast of dust absorption at 412 and 440 nm and an increase in reflectivity at 2.13 μm when dust is present in the atmosphere compared to a clear sky. To avoid detecting bright desert surface as airborne dust, the algorithm uses the reflectances at 1.24 μm and 2.25 μm to flag bright pixels. The algorithm flags pixels that fall into the glint region so sun glint is not picked up as dust. The algorithm also has a spatial variability test that uses reflectances at 0.86 μm to screen for clouds over water. Analysis of one granule for a known dust event on May 2, 2012 shows that the agreement between VIIRS and MODIS is 82% and between VIIRS and CALIPSO is 71%. The probability of detection for VIIRS when compared to MODIS and CALIPSO is 53% and 45%, respectively, whereas the false alarm ratio for VIIRS when compared to MODIS and CALIPSO is 20% and 37%, respectively. The algorithm details, results from the test cases, and the use of the dust flag product in NWS applications will be presented.
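The agreement statistics quoted above (probability of detection and false alarm ratio) can be computed from co-located boolean dust masks; the arrays below are hypothetical placeholders, not actual VIIRS, MODIS, or CALIPSO retrievals:

```python
# Hypothetical sketch: probability of detection (POD) and false alarm ratio (FAR)
# for a candidate dust mask evaluated against a reference mask (e.g., MODIS or
# CALIPSO). The boolean arrays are placeholders, not real retrievals.
import numpy as np

viirs_dust = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=bool)   # candidate mask
ref_dust   = np.array([1, 0, 0, 1, 1, 0, 1, 0], dtype=bool)   # reference mask

hits         = np.sum(viirs_dust & ref_dust)
misses       = np.sum(~viirs_dust & ref_dust)
false_alarms = np.sum(viirs_dust & ~ref_dust)

pod = hits / (hits + misses)                 # fraction of reference dust pixels detected
far = false_alarms / (hits + false_alarms)   # fraction of detections that are spurious
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```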