-
Cloud microphysics modification with an online coupled COSMO-MUSCAT regional model
NASA Astrophysics Data System (ADS)
Sudhakar, D.; Quaas, J.; Wolke, R.; Stoll, J.; Muehlbauer, A. D.; Tegen, I.
2015-12-01
Abstract: The quantification of clouds, aerosols, and aerosol-cloud interactions in models continues to be a challenge (IPCC, 2013). In this context, a two-moment bulk microphysical scheme is used to understand aerosol-cloud interactions in the regional model COSMO (Consortium for Small Scale Modeling). The two-moment scheme in COSMO was especially designed to represent aerosol effects on the microphysics of mixed-phase clouds (Seifert et al., 2006). To improve model predictability, the radiation scheme has been coupled with the two-moment microphysical scheme. Further, the cloud microphysics parameterization has been modified by coupling COSMO with MUSCAT (MultiScale Chemistry Aerosol Transport model; Wolke et al., 2004). In this study, we discuss initial results from the online-coupled COSMO-MUSCAT model system with the modified two-moment parameterization scheme, along with the COSP (CFMIP Observational Simulator Package) satellite simulator. This online-coupled model system aims to improve the representation of sub-grid-scale processes in regional weather prediction. The constant aerosol concentration used in the Seifert and Beheng (2006) parameterization in the COSMO model has been replaced by aerosol concentrations derived from the MUSCAT model. The cloud microphysical processes from the modified two-moment scheme are compared with those of the stand-alone COSMO model. To validate the robustness of the model simulation, the coupled model system is integrated with the COSP satellite simulator (Muhlbauer et al., 2012). Further, the simulations are compared with MODIS (Moderate Resolution Imaging Spectroradiometer) and ISCCP (International Satellite Cloud Climatology Project) satellite products.
-
An aerosol climatology: optical properties and its associated direct radiative forcing
NASA Astrophysics Data System (ADS)
Kinne, Stefan
2010-05-01
Aerosol particles are quite complex in nature. Aerosol impacts on the distribution of radiative energy and on cloud microphysics have long been debated climate-impact issues. Here, a new aerosol climatology is presented, combining the consistency and completeness of global modelling with quality data from ground-based monitoring. It provides global monthly maps of spectral aerosol optical properties and of CCN and IN concentrations. Based on the optical properties, the aerosol direct forcing is determined. With environmental data for clouds and estimates of the anthropogenic fraction from emission experiments with global modelling, even the climate-relevant aerosol direct forcing at the top of the atmosphere (ToA) is determined. This value is rather small, near -0.2 W/m2, with uncertainty estimated at +/-0.3 W/m2 due to uncertainties in aerosol absorption and in underlying surface conditions or clouds.
-
Information content and sensitivity of the 3β + 2α lidar measurement system for aerosol microphysical retrievals
NASA Astrophysics Data System (ADS)
Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.
2016-11-01
There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important, since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both airborne and future spaceborne instruments. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of the aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements, independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward-model look-up table with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process.
The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it isolates the sensitivities and uncertainties of the measurements themselves, which cannot be corrected by any potential or theoretical improvement to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) the information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
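The optimal-estimation diagnostics this abstract invokes (information content, error bars, averaging-kernel cross-talk) follow the standard Rodgers formulation for a linearized forward model y = K x. The sketch below is illustrative only: the Jacobian and covariance numbers are invented, and the function name is ours, not the authors'.

```python
import numpy as np

def retrieval_diagnostics(K, S_e, S_a):
    """Optimal-estimation diagnostics for a linearized forward model y = K x.

    K   : (m, n) Jacobian of measurements w.r.t. state parameters
    S_e : (m, m) measurement-error covariance
    S_a : (n, n) a priori covariance of the state
    Returns the posterior covariance, the averaging kernel, the degrees of
    freedom for signal, and the Shannon information content (in bits).
    """
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
    A = S_hat @ K.T @ Se_inv @ K                        # averaging kernel
    dofs = np.trace(A)                                  # degrees of freedom for signal
    # Shannon information content: bits by which the measurement shrinks
    # the a priori state-space volume
    H = 0.5 * np.log2(np.linalg.det(S_a) / np.linalg.det(S_hat))
    return S_hat, A, dofs, H

# Toy 5-measurement (3 backscatter + 2 extinction), 3-parameter state vector
rng = np.random.default_rng(0)
K = rng.normal(size=(5, 3))
S_e = 0.01 * np.eye(5)
S_a = np.eye(3)
S_hat, A, dofs, H = retrieval_diagnostics(K, S_e, S_a)
```

Off-diagonal elements of the averaging kernel A flag exactly the kind of cross-talk between retrieved parameters (e.g. effective radius vs. number concentration) discussed in the abstract.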
-
Studying Precipitation Processes in WRF with Goddard Bulk Microphysics in Comparison with Other Microphysical Schemes
NASA Technical Reports Server (NTRS)
Tao, W.K.; Shi, J.J.; Braun, S.; Simpson, J.; Chen, S.S.; Lang, S.; Hong, S.Y.; Thompson, G.; Peters-Lidard, C.
2009-01-01
A Goddard bulk microphysical parameterization is implemented into the Weather Research and Forecasting (WRF) model. This bulk microphysical scheme has three different options: 2ICE (cloud ice & snow), 3ICE-graupel (cloud ice, snow & graupel) and 3ICE-hail (cloud ice, snow & hail). High-resolution model simulations are conducted to examine the impact of microphysical schemes on different weather events: a midlatitude linear convective system and an Atlantic hurricane. The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The Goddard 3ICE scheme with the cloud ice-snow-hail configuration agreed better with observations in terms of rainfall intensity and the narrowness of the convective line than did simulations with the cloud ice-snow-graupel and cloud ice-snow (i.e., 2ICE) configurations. This is because the Goddard 3ICE-hail configuration has denser precipitating ice particles (hail) with very fast fall speeds (over 10 m/s). For an Atlantic hurricane case, the Goddard microphysical scheme (with 3ICE-hail, 3ICE-graupel and 2ICE configurations) had no significant impact on the track forecast but did affect the intensity slightly. The Goddard scheme is also compared with WRF's three other 3ICE bulk microphysical schemes: WSM6, Purdue-Lin and Thompson. For the summer midlatitude convective line system, all of the schemes resulted in simulated precipitation events that were elongated in the southwest-northeast direction, in qualitative agreement with the observed feature. However, the Goddard 3ICE-hail and Thompson schemes were closest to the observed rainfall intensities, although the Goddard scheme simulated more heavy rainfall (over 48 mm/h). For the Atlantic hurricane case, none of the schemes had a significant impact on the track forecast; however, the simulated intensity using the Purdue-Lin scheme was much stronger than with the other schemes. 
The vertical distributions of model-simulated cloud species (e.g., snow) are quite sensitive to the microphysical schemes, which is an issue for future verification against satellite retrievals. Both the Purdue-Lin and WSM6 schemes simulated very little snow compared to the other schemes for both the midlatitude convective line and hurricane case. Sensitivity tests with these two schemes showed that increasing the snow intercept, turning off the auto-conversion from snow to graupel, eliminating dry growth, and reducing the transfer processes from cloud-sized particles to precipitation-sized ice collectively resulted in a net increase in those schemes' snow amounts.
-
Using Instrument Simulators and a Satellite Database to Evaluate Microphysical Assumptions in High-Resolution Simulations of Hurricane Rita
NASA Astrophysics Data System (ADS)
Hristova-Veleva, S. M.; Chao, Y.; Chau, A. H.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; Martin, J. M.; Poulsen, W. L.; Rodriguez, E.; Stiles, B. W.; Turk, J.; Vu, Q.
2009-12-01
Improving forecasting of hurricane intensity remains a significant challenge for the research and operational communities. Many factors determine a tropical cyclone’s intensity. Ultimately, though, intensity is dependent on the magnitude and distribution of the latent heating that accompanies the hydrometeor production during the convective process. Hence, the microphysical processes and their representation in hurricane models are of crucial importance for accurately simulating hurricane intensity and evolution. The accurate modeling of the microphysical processes becomes increasingly important when running high-resolution models that should properly reflect the convective processes in the hurricane eyewall. There are many microphysical parameterizations available today. However, evaluating their performance and selecting the most representative ones remains a challenge. Several field campaigns were focused on collecting in situ microphysical observations to help distinguish between different modeling approaches and improve on the most promising ones. However, these point measurements cannot adequately reflect the space and time correlations characteristic of the convective processes. An alternative approach to evaluating microphysical assumptions is to use multi-parameter remote sensing observations of the 3D storm structure and evolution. In doing so, we could compare modeled to retrieved geophysical parameters. The satellite retrievals, however, carry their own uncertainty. To increase the fidelity of the microphysical evaluation results, we can use instrument simulators to produce satellite observables from the model fields and compare to the observed. This presentation will illustrate how instrument simulators can be used to discriminate between different microphysical assumptions. We will compare and contrast the members of high-resolution ensemble WRF model simulations of Hurricane Rita (2005), each member reflecting different microphysical assumptions. 
We will use the geophysical model fields as input to instrument simulators to produce microwave brightness temperatures and radar reflectivity at the TRMM (TMI and PR) frequencies and polarizations. We will also simulate the surface backscattering cross-section at the QuikSCAT frequency, polarizations and viewing geometry. We will use satellite observations from TRMM and QuikSCAT to determine those parameterizations that yield a realistic forecast and those parameterizations that do not. To facilitate hurricane research, we have developed the JPL Tropical Cyclone Information System (TCIS), which includes a comprehensive set of multi-sensor observations relevant to large-scale and storm-scale processes in the atmosphere and the ocean. In this presentation, we will illustrate how the TCIS can be used for hurricane research. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
-
Effect of various binning methods and ROI sizes on the accuracy of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho
2008-03-01
To determine the optimal binning method and ROI size for an automatic classification system differentiating diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROIs) with 10-, 20-, and 30-pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find the optimal binning, variable-bin-size linear binning (LB; bin size Q: 4~30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4~30) methods (K-means and fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a five-fold method was used. Each test was repeated twenty times. Overall accuracies for every combination of ROI and bin sizes were statistically compared. For small bin sizes (Q <= 10), NLB showed significantly better accuracy than LB. K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30x30 ROI size and most bin sizes, the K-means method performed better than the other NLB and LB methods. With the optimal binning and other parameters set, the overall sensitivity of the classifier was 92.85%. 
The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%. We determined the optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
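The contrast between linear binning (equal-width bins) and K-means non-linear binning (bins adapted to the intensity distribution) can be sketched in a few lines. This is an illustrative reconstruction under our own assumptions, not the authors' pipeline; the toy intensity values loosely mimic two HU ranges and are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

def linear_binning(pixels, Q):
    """LB: quantize intensities into Q equal-width bins."""
    edges = np.linspace(pixels.min(), pixels.max(), Q + 1)
    return np.clip(np.digitize(pixels, edges[1:-1]), 0, Q - 1)

def kmeans_binning(pixels, Q, seed=0):
    """NLB: K-means cluster centres define the bins, so bin widths adapt
    to the intensity distribution instead of being uniform."""
    km = KMeans(n_clusters=Q, n_init=10, random_state=seed)
    labels = km.fit_predict(pixels.reshape(-1, 1))
    order = np.argsort(km.cluster_centers_.ravel())  # sort bins by intensity
    rank = np.empty_like(order)
    rank[order] = np.arange(Q)
    return rank[labels]

# Bimodal toy intensities loosely mimicking two HRCT patterns (made-up values)
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(-800, 30, 500), rng.normal(-650, 40, 500)])
lb = linear_binning(pixels, Q=10)
nlb = kmeans_binning(pixels, Q=10)
```

Texture features (histogram and co-occurrence statistics) would then be computed on the quantized values; the study's finding is that for small Q, the adaptive bins preserve more discriminative texture information than equal-width bins.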
-
CoMet: a workflow using contig coverage and composition for binning a metagenomic sample with high precision.
PubMed
Herath, Damayanthi; Tang, Sen-Lin; Tandon, Kshitij; Ackland, David; Halgamuge, Saman Kumara
2017-12-28
In metagenomics, the separation of nucleotide sequences belonging to an individual or closely matched populations is termed binning. Binning helps the evaluation of underlying microbial population structure as well as the recovery of individual genomes from a sample of uncultivable microbial organisms. Both supervised and unsupervised learning methods have been employed in binning; however, characterizing a metagenomic sample containing multiple strains remains a significant challenge. In this study, we designed and implemented a new workflow, Coverage- and composition-based binning of Metagenomes (CoMet), for binning contigs in a single metagenomic sample. CoMet utilizes coverage values and the compositional features of metagenomic contigs. The binning strategy in CoMet comprises initial grouping of contigs in guanine-cytosine (GC) content-coverage space and refinement of bins in tetranucleotide-frequency space, in a purely unsupervised manner. In CoMet, the clustering algorithm DBSCAN is employed for binning contigs. The performance of CoMet was compared against four existing approaches for binning a single metagenomic sample, namely MaxBin, Metawatt, MyCC (default) and MyCC (coverage), using multiple datasets, including a sample comprising multiple strains. Binning methods based on both compositional features and coverages of contigs performed better than the method based only on compositional features. CoMet yielded higher or comparable precision relative to the existing binning methods on benchmark datasets of varying complexity. MyCC (coverage) had the highest ranking score in F1-score; however, CoMet performed better than MyCC (coverage) on the dataset containing multiple strains. Furthermore, CoMet recovered contigs of more species and was 18-39% higher in precision than the compared existing methods in discriminating species in the sample of multiple strains. 
CoMet also resulted in higher precision than MyCC (default) and MyCC (coverage) on a real metagenome. The approach proposed with CoMet improves the precision of binning while characterizing more species, both in a single metagenomic sample and in a sample containing multiple strains. The F1-scores obtained from different binning strategies vary with the dataset; however, CoMet yields the highest F1-score on a sample comprising multiple strains.
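The ingredients named above, GC content, coverage, tetranucleotide frequencies, and DBSCAN, can be combined in a compact sketch. Note the simplification: CoMet proper refines GC-coverage groups in tetranucleotide space in a second stage, whereas this hypothetical sketch runs a single joint DBSCAN over all features; the synthetic contigs and parameter values are ours.

```python
import itertools
import numpy as np
from sklearn.cluster import DBSCAN

# Index of all 256 tetramers over the DNA alphabet
KMERS = {"".join(p): i for i, p in enumerate(itertools.product("ACGT", repeat=4))}

def gc_content(seq):
    """Fraction of G and C bases in a contig."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def tetranucleotide_freqs(seq):
    """Frequency vector over all 256 tetramers (reverse complements not merged)."""
    v = np.zeros(256)
    for i in range(len(seq) - 3):
        j = KMERS.get(seq[i:i + 4])
        if j is not None:        # skip windows containing ambiguous bases
            v[j] += 1
    return v / max(v.sum(), 1.0)

def bin_contigs(contigs, coverages, eps=0.5, min_samples=2):
    """Cluster contigs on joint (GC, log coverage, tetranucleotide) features.
    Unassigned contigs receive the DBSCAN noise label -1."""
    feats = np.array([np.concatenate(([gc_content(c), np.log1p(cov)],
                                      tetranucleotide_freqs(c)))
                      for c, cov in zip(contigs, coverages)])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
```

Because DBSCAN needs no preset number of clusters and flags outliers as noise, it fits the unsupervised setting where the number of genomes in the sample is unknown.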
-
Comparative evaluation of polarimetric and bi-spectral cloud microphysics retrievals: Retrieval closure experiments and comparisons based on idealized and LES case studies
NASA Astrophysics Data System (ADS)
Miller, D. J.; Zhang, Z.; Ackerman, A. S.; Platnick, S. E.; Cornet, C.
2016-12-01
A remote sensing cloud retrieval simulator, created by coupling an LES cloud model with vector radiative transfer (RT) models, is an ideal framework for assessing cloud remote sensing techniques. This simulator serves as a tool for understanding bi-spectral and polarimetric retrievals by comparing them directly to LES cloud properties (retrieval closure comparison) and for comparing the retrieval techniques to one another. Our simulator utilizes the DHARMA LES [Ackerman et al., 2004] with cloud properties based on marine boundary layer (MBL) clouds observed during the DYCOMS-II and ATEX field campaigns. The cloud reflectances are produced by vectorized RT models based on polarized doubling-adding and Monte Carlo techniques (PDA, MCPOL). Retrievals are performed using techniques as similar as possible to those implemented on the corresponding well-known instruments: polarimetric retrievals are based on techniques implemented for polarimeters (POLDER, AirMSPI, and RSP), and bi-spectral retrievals are performed using the Nakajima-King LUT method utilized on a number of spectral instruments (MODIS and VIIRS). Retrieval comparisons focus on cloud droplet effective radius (re), effective variance (ve), and cloud optical thickness (τ). This work explores the sensitivities of the two retrieval techniques to various observational limitations, such as spatial resolution/cloud inhomogeneity, the impact of 3D radiative effects, and angular resolution requirements. With future remote sensing missions like NASA's Aerosols/Clouds/Ecosystems (ACE) planning to feature advanced polarimetric instruments, it is important to understand how these retrieval techniques compare to one another. The cloud retrieval simulator we have developed allows us to probe these important questions in a realistic test bed.
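The Nakajima-King bi-spectral retrieval mentioned here amounts to inverting a two-channel reflectance look-up table over (optical thickness, effective radius). A minimal sketch follows; the analytic LUT below is a made-up stand-in for radiative-transfer output, used only to show the inversion mechanics, and the function name is ours.

```python
import numpy as np

# Hypothetical stand-in for an RT-generated look-up table: visible reflectance
# saturates with optical thickness tau, while SWIR reflectance also falls with
# effective radius re because larger droplets absorb more.
taus = np.linspace(1, 50, 60)
res = np.linspace(4, 30, 40)                    # effective radius, microns
T, R = np.meshgrid(taus, res, indexing="ij")
lut_vis = T / (T + 6.0)                         # toy saturating tau dependence
lut_swir = (T / (T + 6.0)) * np.exp(-0.05 * R)  # toy droplet-size absorption

def nakajima_king_retrieve(r_vis, r_swir):
    """Invert the two-channel LUT by nearest neighbour in reflectance space."""
    cost = (lut_vis - r_vis) ** 2 + (lut_swir - r_swir) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return taus[i], res[j]

tau_hat, re_hat = nakajima_king_retrieve(0.7, 0.35)
```

Operational retrievals interpolate within RT-computed LUTs rather than taking the nearest grid node, but the structure, two measured reflectances mapped back to (τ, re), is the same.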
-
[Detecting fire smoke based on the multispectral image].
PubMed
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection is very important for preventing forest fires at an early stage. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, a high false-detection rate, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral imagery is proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained over the 400 to 720 nm band, and the images were divided into bins. The Euclidean distance between the bins was taken as a measure of the difference between spectrograms. After obtaining the spectral feature vectors of dynamic regions, the regions of fire smoke and water fog were extracted according to the spectrogram feature differences between target and background. Indoor and outdoor experiments show that the multispectral smoke detection method can effectively distinguish fire smoke from water fog. Combined with video image processing, the multispectral detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
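The core measurement, a Euclidean distance between binned region spectra, can be sketched directly. This is a hypothetical reconstruction under our own assumptions (band count, bin count, and the synthetic cube are invented), not the authors' implementation.

```python
import numpy as np

def spectral_signature(cube, mask, n_bins):
    """Mean spectrum of a masked region, averaged into n_bins wavelength bins.
    cube: (H, W, n_bands) image over, e.g., 400-720 nm; mask: (H, W) boolean.
    Requires n_bands to be divisible by n_bins."""
    spectrum = cube[mask].mean(axis=0)                 # per-band mean over region
    return spectrum.reshape(n_bins, -1).mean(axis=1)   # coarse wavelength bins

def spectral_distance(cube, mask_a, mask_b, n_bins=8):
    """Euclidean distance between two regions' binned spectra: a large value
    suggests the regions are spectrally distinct (e.g. smoke vs. water fog)."""
    a = spectral_signature(cube, mask_a, n_bins)
    b = spectral_signature(cube, mask_b, n_bins)
    return float(np.linalg.norm(a - b))

# Synthetic 32-band cube: a spectrally flat "fog-like" region vs a sloped one
cube = np.zeros((10, 10, 32))
mask_a = np.zeros((10, 10), bool)
mask_a[:5] = True
mask_b = ~mask_a
cube[mask_a] = 0.5
cube[mask_b] = np.linspace(0.2, 0.8, 32)
d = spectral_distance(cube, mask_a, mask_b)
```

In the paper's setting, this distance between a candidate dynamic region and the background spectrogram is what separates true smoke from water fog.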
-
Indian Summer Monsoon Drought 2009: Role of Aerosol and Cloud Microphysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazra, Anupam; Taraphdar, Sourav; Halder, Madhuparna
2013-07-01
Cloud dynamics played a fundamental role in defining Indian summer monsoon (ISM) rainfall during the 2009 drought. The anomalously low precipitation was consistent with cloud properties. Although aerosols inhibited the growth of cloud effective radius against a background of sparse water vapor, their role was secondary. The primary role was played by the interactive feedback between cloud microphysics and dynamics, owing to less efficient cloud droplet growth, reduced latent heat release, and a shortage of water content. Cloud microphysical processes were instrumental in the occurrence of the 2009 ISM drought.
-
Cirrus Cloud Optical and Microphysical Property Retrievals from eMAS During SEAC4RS Using Bi-Spectral Reflectance Measurements Within the 1.88 micron Water Vapor Absorption Band
NASA Technical Reports Server (NTRS)
Meyer, K.; Platnick, S.; Arnold, G. T.; Holz, R. E.; Veglio, P.; Yorks, J.; Wang, C.
2016-01-01
Previous bi-spectral imager retrievals of cloud optical thickness (COT) and effective particle radius (CER) based on the Nakajima and King (1990) approach, such as those of the operational MODIS cloud optical property retrieval product (MOD06), have typically paired a non-absorbing visible or near-infrared wavelength, sensitive to COT, with an absorbing shortwave or midwave infrared wavelength sensitive to CER. However, in practice it is only necessary to select two spectral channels that exhibit a strong contrast in cloud particle absorption. Here it is shown, using eMAS observations obtained during NASA's SEAC4RS field campaign, that selecting two absorbing wavelength channels within the broader 1.88 micron water vapor absorption band, namely the 1.83 and 1.93 micron channels that have sufficient differences in ice crystal single scattering albedo, can yield COT and CER retrievals for thin to moderately thick single-layer cirrus that are reasonably consistent with other solar and IR imager-based and lidar-based retrievals. A distinct advantage of this channel selection for cirrus cloud retrievals is that the below-cloud water vapor absorption minimizes the surface contribution to measured cloudy TOA reflectance, in particular compared to the solar window channels used in heritage retrievals such as MOD06. This reduces retrieval uncertainty resulting from errors in the surface reflectance assumption and reduces the frequency of retrieval failures for thin cirrus clouds.
-
Cirrus cloud optical and microphysical property retrievals from eMAS during SEAC4RS using bi-spectral reflectance measurements within the 1.88 µm water vapor absorption band
NASA Astrophysics Data System (ADS)
Meyer, Kerry; Platnick, Steven; Arnold, G. Thomas; Holz, Robert E.; Veglio, Paolo; Yorks, John; Wang, Chenxi
2016-04-01
Previous bi-spectral imager retrievals of cloud optical thickness (COT) and effective particle radius (CER) based on the Nakajima and King (1990) approach, such as those of the operational MODIS cloud optical property retrieval product (MOD06), have typically paired a non-absorbing visible or near-infrared wavelength, sensitive to COT, with an absorbing shortwave or mid-wave infrared wavelength sensitive to CER. However, in practice it is only necessary to select two spectral channels that exhibit a strong contrast in cloud particle absorption. Here it is shown, using eMAS observations obtained during NASA's SEAC4RS field campaign, that selecting two absorbing wavelength channels within the broader 1.88 µm water vapor absorption band, namely the 1.83 and 1.93 µm channels that have sufficient differences in ice crystal single scattering albedo, can yield COT and CER retrievals for thin to moderately thick single-layer cirrus that are reasonably consistent with other solar and IR imager-based and lidar-based retrievals. A distinct advantage of this channel selection for cirrus cloud retrievals is that the below-cloud water vapor absorption minimizes the surface contribution to measured cloudy top-of-atmosphere reflectance, in particular compared to the solar window channels used in heritage retrievals such as MOD06. This reduces retrieval uncertainty resulting from errors in the surface reflectance assumption and reduces the frequency of retrieval failures for thin cirrus clouds.
-
Subcellular Changes in Bridging Integrator 1 Protein Expression in the Cerebral Cortex During the Progression of Alzheimer Disease Pathology.
PubMed
Adams, Stephanie L; Tilton, Kathy; Kozubek, James A; Seshadri, Sudha; Delalle, Ivana
2016-08-01
Genome-wide association studies have established BIN1 (Bridging Integrator 1) as the most significant late-onset Alzheimer disease (AD) susceptibility locus after APOE. We analyzed BIN1 protein expression using automated immunohistochemistry on the hippocampal CA1 region in 19 patients with either no, mild, or moderate-to-marked AD pathology, who had been assessed by Clinical Dementia Rating and CERAD scores. We also examined the amygdala and the prefrontal, temporal, and occipital regions in a subset of these patients. In non-demented controls without AD pathology, BIN1 protein was expressed in white matter, in glia, particularly oligodendrocytes, and in the neuropil, in which the BIN1 signal decorated axons. With increasing severity of AD, BIN1 in the CA1 region showed: 1) sustained expression in glial cells, 2) decreased areas of neuropil expression, and 3) increased cytoplasmic neuronal expression that did not correlate with neurofibrillary tangle load. In patients with AD, both the prefrontal cortex and CA1 showed a decrease in BIN1-immunoreactive (BIN1-ir) neuropil areas and increases in numbers of BIN1-ir neurons. The numbers of CA1 BIN1-ir pyramidal neurons correlated with hippocampal CERAD neuritic plaque scores; the BIN1 neuropil signal was absent in neuritic plaques. Our data provide novel insight into the relationship between BIN1 protein expression and the progression of AD-associated pathology and its diagnostic hallmarks. © 2016 American Association of Neuropathologists, Inc. All rights reserved.
-
Bin Ratio-Based Histogram Distances and Their Application to Image Classification.
PubMed
Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen
2014-12-01
Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than the differences between bin values which are used in traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
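The intra-cross-bin idea can be illustrated with a simple ratio-based distance. This sketch is not the paper's exact BRD definition; it shows one way a distance over all bin-pair ratios collapses to a linear-time closed form and becomes invariant to histogram normalization, which are the two properties the abstract emphasizes. The `l1_brd` fusion and its weight are likewise hypothetical.

```python
import numpy as np

def ratio_distance(h, g):
    """Sum over all bin pairs (i, j) of (h_i*g_j - g_i*h_j)^2, i.e. squared
    differences of the cleared bin ratios h_i/h_j vs g_i/g_j.  Expanding the
    double sum gives a closed form computable in O(n):

        2 * (||h||^2 * ||g||^2 - (h . g)^2)

    By Cauchy-Schwarz this is >= 0, and it is zero iff h and g are
    proportional, so it is invariant to how the histograms are normalized."""
    h = np.asarray(h, float)
    g = np.asarray(g, float)
    return 2.0 * (h @ h) * (g @ g) - 2.0 * (h @ g) ** 2

def l1_brd(h, g, alpha=0.5):
    """Hypothetical fusion of the ratio distance with an L1 term, echoing the
    paper's combined distances (the weighting is illustrative only)."""
    h = np.asarray(h, float)
    g = np.asarray(g, float)
    return alpha * ratio_distance(h, g) + (1 - alpha) * np.abs(h - g).sum()
```

The closed form makes the "linear computational complexity" claim concrete: no explicit loop over bin pairs is needed.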
-
Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model.
PubMed
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, the application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. The performance of the proposed estimator was evaluated using a Monte Carlo simulation of a typical clinical CT acquisition. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using fewer than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility of fast re-calibration in the clinical routine, which is considered an advantage of the proposed method over other implementations reported in the literature.
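The structure of such a decomposition, a polychromatic Beer-Lambert forward model for the counts in each energy bin, inverted by Poisson maximum likelihood, can be sketched as follows. All spectra, attenuation curves, and numbers below are invented for illustration; they are not the paper's calibrated semi-empirical model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-bin, two-material setup (spectra and attenuation curves are made up,
# not calibrated detector data).
E = np.linspace(20, 120, 101)                       # energy grid, keV
mu = np.stack([5e4 * E ** -2.5,                     # basis 1: photoelectric-like
               2e2 * E ** -1.5])                    # basis 2: flatter attenuation
S = np.stack([np.exp(-0.5 * ((E - 50) / 15) ** 2),  # effective spectrum, bin 1
              np.exp(-0.5 * ((E - 90) / 15) ** 2)]) # effective spectrum, bin 2

def expected_counts(A, n0=1e5):
    """Polychromatic Beer-Lambert model: expected counts in each energy bin
    for basis-material line integrals A = (A1, A2)."""
    atten = np.exp(-(mu.T @ np.asarray(A)))          # exp(-sum_m A_m * mu_m(E))
    return n0 * (S * atten).sum(axis=1) / S.sum(axis=1)

def mle_decompose(counts):
    """Poisson maximum-likelihood estimate of the basis line integrals."""
    def nll(A):
        lam = expected_counts(np.maximum(A, 1e-9))   # keep rates positive
        return float((lam - counts * np.log(lam)).sum())
    return minimize(nll, x0=[0.1, 0.1], method="Nelder-Mead").x
```

Because the forward model already contains the beam-hardening (the energy-dependent attenuation inside the spectral integral), the estimator needs no separate linearization or beam-hardening correction step.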
-
Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model
NASA Astrophysics Data System (ADS)
Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.
2017-01-01
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, the application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. The performance of the proposed estimator was evaluated using a Monte Carlo simulation of a typical clinical CT acquisition. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using fewer than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility of fast re-calibration in the clinical routine, which is considered an advantage of the proposed method over other implementations reported in the literature.
-
Colour variations in the GRB 120327A afterglow
NASA Astrophysics Data System (ADS)
Melandri, A.; Covino, S.; Zaninoni, E.; Campana, S.; Bolmer, J.; Cobb, B. E.; Gorosabel, J.; Kim, J.-W.; Kuin, P.; Kuroda, D.; Malesani, D.; Mundell, C. G.; Nappo, F.; Sbarufatti, B.; Smith, R. J.; Steele, I. A.; Topinka, M.; Trotter, A. S.; Virgili, F. J.; Bernardini, M. G.; D'Avanzo, P.; D'Elia, V.; Fugazza, D.; Ghirlanda, G.; Gomboc, A.; Greiner, J.; Guidorzi, C.; Haislip, J. B.; Hanayama, H.; Hanlon, L.; Im, M.; Ivarsen, K. M.; Japelj, J.; Jelínek, M.; Kawai, N.; Kobayashi, S.; Kopac, D.; LaCluyzé, A. P.; Martin-Carrillo, A.; Murphy, D.; Reichart, D. E.; Salvaterra, R.; Salafia, O. S.; Tagliaferri, G.; Vergani, S. D.
2017-10-01
Aims: We present a comprehensive temporal and spectral analysis of the long Swift GRB 120327A afterglow data to investigate possible causes of the observed early-time colour variations. Methods: We collected data from various instruments and telescopes in the X-ray, ultraviolet, optical, and near-infrared bands and determined the shapes of the afterglow's early-time light curves. We studied the overall temporal behaviour and the spectral energy distributions from early to late times. Results: The ultraviolet, optical, and near-infrared light curves can be modelled with a single power-law component between 200 and 2 × 10⁴ s after the burst event. The X-ray light curve shows the canonical steep-shallow-steep behaviour typical of long gamma-ray bursts. At early times a colour variation is observed in the ultraviolet/optical bands, while at very late times a hint of re-brightening is visible. The observed early-time colour change can be explained as a variation in the intrinsic optical spectral index rather than an evolution of the optical extinction. Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A29
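A single power-law decay of the kind fitted to the light curves, F(t) ∝ t^(−α), can be illustrated with a minimal fit on synthetic fluxes; the decay index, normalization, noise level, and time window below are invented for the sketch:

```python
import numpy as np

# Synthetic afterglow fluxes following F(t) = F0 * t**(-alpha) with lognormal scatter.
rng = np.random.default_rng(1)
t = np.geomspace(200, 2e4, 40)                 # seconds after the burst
alpha_true, F0 = 1.1, 5e3
flux = F0 * t ** (-alpha_true) * rng.lognormal(0.0, 0.05, t.size)

# A power law is a straight line in log-log space: log F = log F0 - alpha * log t.
slope, intercept = np.polyfit(np.log10(t), np.log10(flux), 1)
alpha_hat = -slope
print(alpha_hat)                               # close to 1.1
```

Fitting in log-log space is the standard shortcut for a single power-law segment; broken power laws (as in the canonical X-ray steep-shallow-steep behaviour) need a piecewise model instead.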
-
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
NASA Astrophysics Data System (ADS)
Curtis, Tyler E.; Roeder, Ryan K.
2017-03-01
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models, and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity, and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. Overall, the results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
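The calibration step described — per-bin multiple linear regression over phantoms of known concentration to build a material basis matrix, followed by decomposition of unknown voxels — can be sketched with toy numbers. The basis matrix, concentrations, and noise level are all illustrative, and a plain least-squares decomposition stands in for the paper's maximum a posteriori estimator:

```python
import numpy as np

# Hypothetical basis matrix M mapping 2 material concentrations to 5 energy-bin signals.
rng = np.random.default_rng(2)
M_true = np.array([[1.2, 0.4],
                   [1.0, 0.9],
                   [0.8, 1.5],
                   [0.6, 2.0],
                   [0.4, 2.6]])

# Calibration phantoms: known concentrations spanning the expected range
# (the study found that a wider maximum concentration improves RMSE).
C_cal = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0],
                  [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
S_cal = C_cal @ M_true.T + rng.normal(0.0, 0.02, (6, 5))  # noisy bin signals

# Per-bin multiple linear regression recovers the basis matrix.
M_hat, *_ = np.linalg.lstsq(C_cal, S_cal, rcond=None)
M_hat = M_hat.T                                          # shape (5 bins, 2 materials)

# Decompose an unknown sample with the calibrated matrix.
c_true = np.array([1.5, 2.5])
s = M_true @ c_true + rng.normal(0.0, 0.02, 5)
c_hat, *_ = np.linalg.lstsq(M_hat, s, rcond=None)
rmse = np.sqrt(np.mean((c_hat - c_true) ** 2))
print(c_hat, rmse)
```

The same RMSE figure computed here against known truth is how the study scores decomposition accuracy across calibration designs.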
-
The Impact of Microphysical Schemes on Hurricane Intensity and Track
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Shi, Jainn Jong; Chen, Shuyi S.; Lang, Stephen; Lin, Pay-Liam; Hong, Song-You; Peters-Lidard, Christa; Hou, Arthur
2011-01-01
During the past decade, both research and operational numerical weather prediction models [e.g. the Weather Research and Forecasting model (WRF)] have started using more complex microphysical schemes originally developed for high-resolution cloud-resolving models (CRMs) with horizontal resolutions of 1-2 km or less. WRF is a next-generation mesoscale forecast model and assimilation system. It incorporates a modern software framework; advanced dynamics, numerics, and data assimilation techniques; a multiple moveable nesting capability; and improved physics packages. WRF can be used for a wide range of applications, from idealized research to operational forecasting, with an emphasis on horizontal grid sizes in the range of 1-10 km. The current WRF includes several different microphysics options. At NASA Goddard, four different cloud microphysics options have been implemented in WRF. The performance of these schemes is compared to that of the other microphysics schemes available in WRF for an Atlantic hurricane case (Katrina). In addition, a brief review of previous modeling studies on the impact of microphysics schemes and processes on the intensity and track of hurricanes is presented and compared against the current Katrina study. In general, all of the studies show that microphysics schemes do not have a major impact on track forecasts but do have more of an effect on the simulated intensity. Also, nearly all of the previous studies found that simulated hurricanes deepened or intensified most strongly when using only warm-rain physics. This is because all of the simulated precipitating hydrometeors are large raindrops that quickly fall out near the eyewall region, which hydrostatically produces the lowest pressure. In addition, these studies suggested that intensities become unrealistically strong when evaporative cooling from cloud droplets and melting from ice particles are removed, as this results in much weaker downdrafts in the simulated storms. However, there are many differences between the modeling studies, which are identified and discussed.
-
Potential fitting biases resulting from grouping data into variable width bins
NASA Astrophysics Data System (ADS)
Towers, S.
2014-07-01
When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that the experimenters have made good-faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time-consuming to carry out; thus, if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting parameter estimates when fitting to the binned data can be significantly biased, leading us to accept the model hypothesis too often when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that bin sizes be constant, but we show that fitting to data grouped into variable-width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.
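The contrast drawn in the note — an unbinned likelihood as the bias-free reference, versus a fit to counts in variable-width bins whose result depends on where the edges fall — can be sketched for an exponential sample. The decay constant, sample size, and bin edges below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sample from an exponential with known decay constant tau.
rng = np.random.default_rng(3)
tau_true = 2.0
x = rng.exponential(tau_true, 5000)

# (a) Unbinned maximum likelihood: for the exponential this is closed-form,
#     tau_hat = sample mean, and is independent of any binning choice.
tau_unbinned = x.mean()

# (b) Binned Poisson likelihood on variable-width bins: the estimate now
#     depends on the chosen edges, which is the lever a biased analysis can pull.
edges = np.array([0.0, 0.5, 1.0, 2.5, 6.0, 20.0])
counts, _ = np.histogram(x, edges)

def nll(tau):
    # Expected counts per bin from the exponential CDF.
    p = np.diff(1.0 - np.exp(-edges / tau))
    lam = x.size * p
    return np.sum(lam - counts * np.log(lam))  # Poisson NLL up to a constant

tau_binned = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
print(tau_unbinned, tau_binned)
```

When the fitted model is correct, as here, both estimates agree with the truth; the note's point is that with a *wrong* model, tuning the variable edges can push the binned fit toward spurious agreement, which the edge-independent unbinned fit exposes.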