Quantitative Compactness Estimates for Hamilton-Jacobi Equations
NASA Astrophysics Data System (ADS)
Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.
2016-02-01
We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that associates to given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
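For context, the type of two-sided bound described above can be written schematically as follows (generic, assumed notation; not quoted from the paper):

```latex
% Kolmogorov \varepsilon-entropy: the base-2 logarithm of the minimal number of
% \varepsilon-balls in W^{1,1} needed to cover a set.
% For a set \mathcal{C} of bounded, compactly supported initial data and a fixed
% time T > 0, bounds of the following form are reported:
\[
  \frac{c_1}{\varepsilon^{N}}
  \;\le\;
  \mathcal{H}_{\varepsilon}\!\left( S_T(\mathcal{C}) \,\middle|\, W^{1,1} \right)
  \;\le\;
  \frac{c_2}{\varepsilon^{N}},
\]
% with constants c_1, c_2 > 0 depending on T, the Hamiltonian H, and the bounds on the data.
```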
Growth and yield predictions for upland oak stands: 10 years after initial thinning
Martin E. Dale
1972-01-01
The purpose of this paper is to furnish part of the needed information, that is, quantitative estimates of growth and yield 10 years after initial thinning of upland oak stands. All estimates are computed from a system of equations. These predictions are presented here in tabular form for convenient visual inspection of growth and yield trends. The tables show growth...
Mapping of quantitative trait loci controlling adaptive traits in coastal Douglas-fir. III
Kathleen D. Jermstad; Daniel L. Bassoni; Keith S. Jech; Gary A. Ritchie; Nicholas C. Wheeler; David B. Neale
2003-01-01
Quantitative trait loci (QTL) were mapped in the woody perennial Douglas fir (Pseudotsuga menziesii var. menziesii [Mirb.] Franco) for complex traits controlling the timing of growth initiation and growth cessation. QTL were estimated under controlled environmental conditions to identify QTL interactions with photoperiod, moisture stress, winter chilling, and spring...
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from MR derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient database, and it is feasible to obtain accurate average μ-value using MR derived μ-map with corrections as demonstrated in this work.
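A minimal sketch of the IAM initialization idea (a uniform μ-map at an assumed average μ-value inside the object support; the mask, value, and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def iam_initial_mu_map(support_mask: np.ndarray, mu_avg: float) -> np.ndarray:
    """Initialize the attenuation (mu) map uniformly at the average mu-value
    inside the object support, so that its forward projection starts close to
    the reference projections (the IAM initialization described above)."""
    return support_mask.astype(float) * mu_avg

# Hypothetical usage: 2D elliptical support with an assumed soft-tissue-like value.
ny, nx = 128, 128
y, x = np.ogrid[:ny, :nx]
support = ((x - nx / 2) / 50) ** 2 + ((y - ny / 2) / 60) ** 2 <= 1.0
mu0 = iam_initial_mu_map(support, mu_avg=0.096)  # cm^-1 at 511 keV, assumed value
```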
USDA-ARS's Scientific Manuscript database
The U.S. National Beef Cattle Evaluation Consortium (NBCEC) has been involved in the validation of commercial DNA tests for quantitative beef quality traits since their first appearance on the U.S. market in the early 2000s. The NBCEC Advisory Council initially requested that the NBCEC set up a syst...
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and ≥2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; it requires only the distance traveled by the endoscope between images.
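A minimal sketch of the two-image size estimate under a simple pinhole-camera assumption (an idealized illustration, not the authors' calibration procedure; the focal length in pixels and the measured image heights are assumed inputs):

```python
def object_height_two_views(h1_px: float, h2_px: float, backup_mm: float, f_px: float) -> float:
    """Estimate absolute object height from two images taken before and after
    backing the endoscope up by a known distance (pinhole model).

    h1_px, h2_px : image heights of the object in pixels (h1 from the closer view)
    backup_mm    : distance traveled between the two views (mm)
    f_px         : effective focal length of the calibrated endoscope (pixels)
    """
    if h1_px <= h2_px:
        raise ValueError("the closer view must give the larger image height")
    # From h1 = f*H/d and h2 = f*H/(d + backup), eliminate the unknown distance d:
    return backup_mm * h1_px * h2_px / (f_px * (h1_px - h2_px))

# Hypothetical example: 10 mm backup, assumed 800-pixel focal length -> ~6 mm object.
print(object_height_two_views(h1_px=120.0, h2_px=96.0, backup_mm=10.0, f_px=800.0))
```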
NASA Astrophysics Data System (ADS)
Goldar, A.; Arneodo, A.; Audit, B.; Argoul, F.; Rappailles, A.; Guilbaud, G.; Petryk, N.; Kahli, M.; Hyrien, O.
2016-03-01
We propose a non-local model of DNA replication that takes into account the observed uncertainty in the position and time of replication initiation in eukaryote cell populations. By picturing replication initiation as a two-state system and considering all possible transition configurations, and by taking into account the chromatin's fractal dimension, we derive an analytical expression for the rate of replication initiation. This model predicts with no free parameter the temporal profiles of initiation rate, replication fork density and fraction of replicated DNA, in quantitative agreement with corresponding experimental data from both S. cerevisiae and human cells and provides a quantitative estimate of initiation site redundancy. This study shows that, to a large extent, the program that regulates the dynamics of eukaryotic DNA replication is a collective phenomenon that emerges from the stochastic nature of replication origin initiation.
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give a better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when parameters were estimated for qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
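A toy stepwise simulation in the spirit of the approach described above (a minimal sketch under a simplified assumption that per-cycle efficiency is limited by remaining primer; it is not the authors' equilibrium Model 1 or Model 2):

```python
import numpy as np

def simulate_qpcr(n0: float, primer0: float, cycles: int = 40) -> np.ndarray:
    """Stepwise qPCR simulation: each cycle, the fraction of target copied is
    limited by the primer remaining, so efficiency falls as primer is consumed."""
    target, primer, trace = n0, primer0, []
    for _ in range(cycles):
        efficiency = primer / (primer + target)   # crude saturable efficiency (assumed form)
        new_copies = min(efficiency * target, primer)  # cannot exceed available primer
        target += new_copies
        primer -= new_copies
        trace.append(target)
    return np.array(trace)

curve = simulate_qpcr(n0=1e3, primer0=1e12)  # exponential early cycles, plateau later
```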
Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions
NASA Astrophysics Data System (ADS)
Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena
2007-08-01
Novel video-processing software has been developed for the sessile drop technique for a rapid and quantitative estimation of slag foaming. The data processing was carried out in two stages: the first stage involved the initial transformation of digital video/audio signals into a format compatible with computing software, and the second stage involved the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming from a synthetic graphite/slag system at 1550 °C. This technique can be used for determining the extent and stability of foam as a function of time.
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is treated in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, of which an example is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved by using a Bayesian approach. The spatial distribution of the optical properties of the imaged target is estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.
Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.
Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio
2018-04-01
The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared to NRRM at all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than the NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER were significantly lower than the iSV for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
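A minimal sketch of one common linearization of the Tofts model (a Murase-style linear least-squares formulation, shown to illustrate why linear fitting avoids iterative nonlinear optimization; it is not necessarily the exact formulation used in the paper, and the variable names are assumed):

```python
import numpy as np

def linear_tofts_fit(t: np.ndarray, cp: np.ndarray, ct: np.ndarray):
    """Fit Ktrans and kep with a linearized Tofts model:
        Ct(t) = Ktrans * int_0^t Cp dtau  -  kep * int_0^t Ct dtau
    t : time points (min), cp : plasma (AIF) concentration, ct : tissue concentration.
    """
    def cumtrapz(y):
        # cumulative trapezoidal integral with a leading zero
        return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

    A = np.column_stack([cumtrapz(cp), -cumtrapz(ct)])
    ktrans, kep = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, kep
```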
Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien
2018-04-01
We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
Development of an agricultural job-exposure matrix for British Columbia, Canada.
Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel
2002-09-01
Farmers in British Columbia (BC), Canada, have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were conducted; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low based on comparing them with their respective oral chemical reference dose (RfD) or Acceptable Daily Intake (ADI). These data were mainly obtained from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (<100%) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures for BC farmers and farm workers. While only limited validation of assessments was possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. Development of this JEM provides a useful model for developing historical quantitative exposure estimates where there is very little documented information available.
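A minimal sketch of the rating rule described above (the low and high thresholds are taken from the abstract's percentages; the intermediate band and the function name are assumptions):

```python
def rate_exposure(daily_dose_mg_kg: float, rfd_mg_kg: float) -> str:
    """Rate a daily absorbed dose against its oral reference dose (RfD or ADI).
    low  : dose < 100% of the RfD
    high : dose > 500% of the RfD
    medium otherwise (assumed intermediate band)."""
    percent_of_rfd = 100.0 * daily_dose_mg_kg / rfd_mg_kg
    if percent_of_rfd < 100.0:
        return "low"
    if percent_of_rfd > 500.0:
        return "high"
    return "medium"

print(rate_exposure(daily_dose_mg_kg=0.002, rfd_mg_kg=0.005))  # "low" (40% of RfD)
```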
End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging.
Cai, Chuangjian; Deng, Kexin; Ma, Cheng; Luo, Jianwen
2018-06-15
An end-to-end deep neural network, ResU-net, is developed for quantitative photoacoustic imaging. A residual learning framework is used to facilitate optimization and to gain better accuracy from considerably increased network depth. The contracting and expanding paths enable ResU-net to extract comprehensive context information from multispectral initial pressure images and, subsequently, to infer a quantitative image of chromophore concentration or oxygen saturation (sO2). According to our numerical experiments, the estimations of sO2 and indocyanine green concentration are accurate and robust against variations in both optical property and object geometry. An extremely short reconstruction time of 22 ms is achieved.
Li, Chunhui; Guan, Guangying; Zhang, Fan; Song, Shaozhen; Wang, Ruikang K; Huang, Zhihong; Nabi, Ghulam
2014-12-01
The maintenance of urinary bladder elasticity is essential to its functions, including the storage and voiding phases of the micturition cycle. The bladder stiffness can be changed by various pathophysiological conditions. Quantitative measurement of bladder elasticity is an essential step toward understanding various urinary bladder disease processes and improving patient care. As a nondestructive and noncontact method, laser-induced surface acoustic waves (SAWs) can accurately characterize the elastic properties of different layers of organs such as the urinary bladder. This initial investigation evaluates the feasibility of a noncontact, all-optical method of generating SAWs and measuring the elasticity of the urinary bladder. Quantitative elasticity measurements of ex vivo porcine urinary bladder were made using the laser-induced SAW technique. A pulsed laser was used to excite SAWs that propagated on the bladder wall surface. A dedicated phase-sensitive optical coherence tomography (PhS-OCT) system remotely recorded the SAWs, from which the elasticity properties of different layers of the bladder were estimated. During the experiments, a series of measurements was performed under five precisely controlled bladder volumes using water to estimate changes in the elasticity in relation to various urinary bladder contents. The results, validated by optical coherence elastography, show that the laser-induced SAW technique combined with PhS-OCT can be a feasible method of quantitative estimation of biomechanical properties.
Quantitative PCR Method for Diagnosis of Citrus Bacterial Canker†
Cubero, J.; Graham, J. H.; Gottwald, T. R.
2001-01-01
For diagnosis of citrus bacterial canker by PCR, an internal standard is employed to ensure the quality of the DNA extraction and that proper requisites exist for the amplification reaction. The ratio of PCR products from the internal standard and bacterial target is used to estimate the initial bacterial concentration in citrus tissues with lesions. PMID:11375206
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses with satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
Raju, Valivarthi S R; Kannababu, S; Subbaraju, Gottumukkala V
2006-01-01
An improved high-performance thin-layer chromatographic (HPTLC) method for the standardisation of Gymnema sylvestre is reported. The method involves the initial hydrolysis of gymnemic acids, the active ingredients, to a common aglycone followed by the quantitative estimation of gymnemagenin. The present method rectifies an error found in an HPTLC method reported recently.
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography data, many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results compared with analytic reconstructions show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
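For reference, the standard weak-value expression underlying the amplification technique discussed above (standard textbook notation, not quoted from the paper):

```latex
% Weak value of an observable A for preselected state |\psi_i\rangle and
% postselected state |\psi_f\rangle:
\[
  A_w \;=\; \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle},
\]
% which can become anomalously large when the postselection is nearly orthogonal
% to the preselection, i.e. when |\langle \psi_f | \psi_i \rangle| \ll 1.
```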
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-01-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
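For reference, the standard and generalized Patlak models referred to above, in their usual form (standard notation; the specific parameterization used in the paper may differ):

```latex
% Standard Patlak (irreversible uptake): tissue activity C(t) vs. plasma input Cp(t)
\[
  C(t) \;=\; K_i \int_0^{t} C_p(\tau)\, d\tau \;+\; V\, C_p(t)
\]
% Generalized Patlak: allows uptake reversibility via an efflux rate k_loss
\[
  C(t) \;=\; K_i \int_0^{t} C_p(\tau)\, e^{-k_{\mathrm{loss}}(t-\tau)}\, d\tau \;+\; V\, C_p(t)
\]
```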
Forecasting seasonal outbreaks of influenza.
Shaman, Jeffrey; Karspeck, Alicia
2012-12-11
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
Quantitative estimation of Nipah virus replication kinetics in vitro
Chang, Li-Yen; Ali, AR Mohd; Hassan, Sharifah Syed; AbuBakar, Sazaly
2006-01-01
Background Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney cells (Vero) using the one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results The qRT-PCR had a dynamic range of at least seven orders of magnitude and can detect Nipah virus from as low as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubled approximately every 40 minutes and attained a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA released into the cell culture medium was ~0.07 log PFU/μL per hour and less than 10% of the released Nipah virus RNA was infectious. Conclusion The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. A low rate of Nipah virus extracellular RNA release and low infectious virus yield together with extensive syncytial formation during the infection support a cell-to-cell spread mechanism for Nipah virus infection. PMID:16784519
Effect of Stress State on Fracture Features
NASA Astrophysics Data System (ADS)
Das, Arpan
2018-02-01
The present article comprehensively explores the influence of specimen thickness on quantitative estimates of different ductile fractographic features in two dimensions, correlating them with the tensile properties of a reactor pressure vessel steel tested at ambient temperature, where the initial crystallographic texture, inclusion content, and their distribution are kept unaltered. It is found that the changes in tensile fracture morphology of these steels are directly attributable to the resulting stress-state history under tension for the given specimen dimensions.
Monitoring vegetation conditions from LANDSAT for use in range management
NASA Technical Reports Server (NTRS)
Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.
1975-01-01
A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.
Time-to-contact estimation of accelerated stimuli is based on first-order information.
Benguigui, Nicolas; Ripoll, Hubert; Broderick, Michael P
2003-12-01
The goal of this study was to test whether 1st-order information, which does not account for acceleration, is used (a) to estimate the time to contact (TTC) of an accelerated stimulus after the occlusion of a final part of its trajectory and (b) to indirectly intercept an accelerated stimulus with a thrown projectile. Both tasks require the production of an action on the basis of predictive information acquired before the arrival of the stimulus at the target and allow the experimenter to make quantitative predictions about the participants' use (or nonuse) of 1st-order information. The results show that participants do not use information about acceleration and that they commit errors that rely quantitatively on 1st-order information even when acceleration is psychophysically detectable. In the indirect interceptive task, action is planned about 200 ms before the initiation of the movement, at which time the 1st-order TTC attains a critical value. ((c) 2003 APA, all rights reserved)
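A worked comparison of the first-order estimate with the true arrival time (standard kinematics, shown for clarity; symbols are generic, not the authors' notation):

```latex
% First-order (constant-velocity) estimate for a stimulus at distance x moving at speed v:
\[
  \mathrm{TTC}_1 = \frac{x}{v}
\]
% True time to contact under constant acceleration a (positive root of x = v t + \tfrac{1}{2} a t^2):
\[
  \mathrm{TTC} = \frac{-v + \sqrt{v^2 + 2 a x}}{a},
  \qquad
  \text{so for } a > 0 \text{ the first-order estimate overestimates: } \mathrm{TTC}_1 > \mathrm{TTC}.
\]
```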
On A Problem Of Propagation Of Shock Waves Generated By Explosive Volcanic Eruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, V. A.; Sobissevitch, A. L.
2008-06-24
Interdisciplinary study of flows of matter and energy in geospheres has become one of the most significant advances in the Earth sciences. It is carried out by means of direct quantitative estimates based on detailed analysis of geological and geophysical observations and experimental data. The present contribution is an interdisciplinary study of nonlinear acoustics and physical volcanology dedicated to shock wave propagation in a viscous and inhomogeneous medium. The equations governing the evolution of shock waves with an arbitrary initial profile and an arbitrary beam cross-section are obtained. For the case of a low-viscosity medium, an asymptotic solution for calculating the shock wave profile at an arbitrary point has been derived. The analytical solution of the problem of propagation of shock pulses from the atmosphere into a two-phase fluid-saturated geophysical medium is analysed. Quantitative estimates were carried out with respect to experimental results obtained in the course of real explosive volcanic eruptions.
Austin, Peter C
2018-05-20
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G-computation. All methods resulted in essentially unbiased estimation of the population dose-response function. However, GPS-based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
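A minimal sketch of generalized-propensity-score weighting for a continuous exposure (normal-linear GPS estimated by ordinary least squares with stabilized inverse-density weights; this is one standard construction under assumptions, not the simulation code used in the paper, and the variable names are illustrative):

```python
import numpy as np

def stabilized_gps_weights(exposure: np.ndarray, covariates: np.ndarray) -> np.ndarray:
    """Stabilized weights for a continuous exposure A given covariates X:
    w = f(A) / f(A | X), with both densities modeled as normal and the
    conditional mean of A estimated by ordinary least squares."""
    X = np.column_stack([np.ones(len(exposure)), covariates])
    beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)
    resid = exposure - X @ beta
    sigma_cond = resid.std(ddof=X.shape[1])

    def normal_pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

    f_marginal = normal_pdf(exposure, exposure.mean(), exposure.std(ddof=1))
    f_conditional = normal_pdf(exposure, X @ beta, sigma_cond)
    return f_marginal / f_conditional

# Hypothetical usage: the weights then enter a weighted outcome regression of the
# binary outcome on the exposure to estimate the dose-response function.
rng = np.random.default_rng(0)
Xc = rng.normal(size=(500, 3))
A = Xc @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=500)
w = stabilized_gps_weights(A, Xc)
```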
Global, quantitative and dynamic mapping of protein subcellular localization.
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg Hh
2016-06-09
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology.
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and the trace of the diffusion tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined by quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
Austin, Peter C
2018-01-01
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures (e.g., active treatment vs. control). The generalized propensity score is an extension of the propensity score for use with quantitative exposures (e.g., dose or quantity of medication, income, years of education). A crucial component of any propensity score analysis is that of balance assessment. This entails assessing the degree to which conditioning on the propensity score (via matching, weighting, or stratification) has balanced measured baseline covariates between exposure groups. Methods for balance assessment have been well described and are frequently implemented when using the propensity score with binary exposures. However, there is a paucity of information on how to assess baseline covariate balance when using the generalized propensity score. We describe how methods based on the standardized difference can be adapted for use with quantitative exposures when using the generalized propensity score. We also describe a method based on assessing the correlation between the quantitative exposure and each covariate in the sample when weighted using generalized propensity score-based weights. We conducted a series of Monte Carlo simulations to evaluate the performance of these methods. We also compared two different methods of estimating the generalized propensity score: ordinary least squares regression and the covariate balancing propensity score method. We illustrate the application of these methods using data on patients hospitalized with a heart attack with the quantitative exposure being creatinine level.
Maximum current density and beam brightness achievable by laser-driven electron sources
NASA Astrophysics Data System (ADS)
Filippetto, D.; Musumeci, P.; Zolotorev, M.; Stupakov, G.
2014-02-01
This paper discusses the extension of the Child-Langmuir law for the maximum achievable current density in electron guns to different electron beam aspect ratios. Using a simple model, we derive quantitative formulas in good agreement with simulation codes. The new scaling laws for the peak current density of temporally long and transversely narrow initial beam distributions can be used to estimate the maximum beam brightness and suggest new paths for injector optimization.
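For reference, the classical one-dimensional Child-Langmuir law that the scaling laws above generalize (standard form; not the paper's new aspect-ratio-dependent result):

```latex
% Space-charge-limited current density for a planar diode with gap d and voltage V:
\[
  J_{\mathrm{CL}} \;=\; \frac{4\,\varepsilon_0}{9}\,\sqrt{\frac{2 e}{m_e}}\;\frac{V^{3/2}}{d^{2}}
\]
```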
Linking interseismic deformation with coseismic slip using dynamic rupture simulations
NASA Astrophysics Data System (ADS)
Yang, H.; He, B.; Weng, H.
2017-12-01
The largest earthquakes on Earth occur at subduction zones, sometimes accompanied by devastating tsunamis. Reducing losses from megathrust earthquakes and tsunamis demands accurate estimates of rupture scenarios for future earthquakes. Interseismic locking distribution derived from geodetic observations is often used to qualitatively evaluate future earthquake potential. However, how to quantitatively estimate the coseismic slip from the locking distribution remains challenging. Here we derive the coseismic rupture process of the 2012 Mw 7.6 Nicoya, Costa Rica, earthquake from the interseismic locking distribution using spontaneous rupture simulation. We construct a three-dimensional elastic medium with a curved fault, which is governed by the linear slip-weakening law. The initial stress on the fault is set based on the build-up stress inferred from locking and the dynamic friction coefficient from fast-speed sliding experiments. Our numerical results of coseismic slip distribution, moment rate function and final earthquake moment are well consistent with those derived from seismic and geodetic observations. Furthermore, we find that the epicentral locations affect rupture scenarios and may lead to various sizes of earthquakes given the heterogeneous stress distribution. In the Nicoya region, less than half of the rupture initiation regions where the locking degree is greater than 0.6 can develop into large earthquakes (Mw > 7.2). The results of location-dependent earthquake magnitudes underscore the necessity of conducting a large number of simulations to quantitatively evaluate seismic hazard from interseismic locking models.
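For reference, the linear slip-weakening friction law mentioned above, in its standard form (generic notation):

```latex
% Shear strength \tau as a function of slip D, dropping linearly from the static
% level \tau_s to the dynamic level \tau_d over the critical slip distance D_c:
\[
  \tau(D) \;=\;
  \begin{cases}
    \tau_s - (\tau_s - \tau_d)\,\dfrac{D}{D_c}, & D < D_c,\\[6pt]
    \tau_d, & D \ge D_c.
  \end{cases}
\]
```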
NASA Astrophysics Data System (ADS)
Voloshin, A. E.; Prostomolotov, A. I.; Verezub, N. A.
2016-11-01
The paper deals with the analysis of the accuracy of some one-dimensional (1D) analytical models of the axial distribution of impurities in a crystal grown from a melt. The models proposed by Burton-Prim-Slichter, Ostrogorsky-Muller and Garandet with co-authors are considered, and these models are compared to the results of a two-dimensional (2D) numerical simulation. Stationary solutions as well as solutions for the initial transient regime obtained using these models are considered. The sources of errors are analyzed, and a conclusion is made about the applicability of 1D analytical models for quantitative estimates of impurity incorporation into the crystal sample as well as for the solution of inverse problems.
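For reference, the Burton-Prim-Slichter relation underlying the first of the 1D models mentioned above (standard form; the other models refine the boundary-layer treatment):

```latex
% Effective segregation coefficient for growth velocity V, solute boundary-layer
% thickness \delta, solute diffusivity D, and equilibrium coefficient k_0:
\[
  k_{\mathrm{eff}} \;=\; \frac{k_0}{\,k_0 + (1 - k_0)\, e^{-V\delta/D}\,}
\]
```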
Real-Time PCR Quantification Using A Variable Reaction Efficiency Model
Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.
2008-01-01
Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
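A minimal sketch of one way to estimate peak efficiency from per-cycle data beyond the log-linear phase (a generic regress-and-extrapolate illustration under simplifying assumptions, not the authors' specific model; names and the final back-calculation are assumptions):

```python
import numpy as np

def estimate_peak_efficiency(fluorescence: np.ndarray) -> float:
    """Estimate the peak (cycle-zero) amplification efficiency by regressing the
    per-cycle efficiency against fluorescence and extrapolating back to zero signal.

    fluorescence : background-subtracted readings, one per cycle, above detection.
    Returns E_max such that early-cycle amplification is roughly (1 + E_max) per cycle.
    """
    per_cycle_eff = fluorescence[1:] / fluorescence[:-1] - 1.0
    # Efficiency typically decays roughly linearly with accumulating product, so
    # fit efficiency vs. fluorescence and take the intercept at F = 0.
    slope, intercept = np.polyfit(fluorescence[:-1], per_cycle_eff, 1)
    return float(intercept)

# Hypothetical usage: with E_max and the quantification-cycle signal F_q at cycle Cq,
# an initial-signal estimate is F0 ~= F_q / (1 + E_max) ** Cq.
```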
NASA Astrophysics Data System (ADS)
Koshimizu, K.; Uchida, T.
2015-12-01
Initial large-scale sediment yields caused by heavy rainfall or major storms have attracted considerable attention. Previous studies focusing on landslide management investigated the initial sediment movement and its mechanism. However, integrated management of catchment-scale sediment movement requires estimating the sediment yield produced by subsequent landslide expansion due to rainfall, in addition to the initial landslide movement. This study presents a quantitative analysis of expanded landslides by surveying the Shukushubetsu River basin, at the foot of the Hidaka mountain range in central Hokkaido, Japan. This area recorded heavy rainfall in 2003, reaching a maximum daily precipitation of 388 mm. We extracted the expanded landslides from 2003 to 2008 using aerial photographs taken over the river area. In particular, we calculated the probability of expansion for each landslide, the ratio of the landslide area in 2008 to that in 2003, and the amount of expanded landslide area relative to the initial landslide area. As a result, the probability of expansion for each landslide is estimated at about 24%. In addition, each expanded landslide area is smaller than the initial landslide area. Furthermore, each expanded landslide area in 2008 is approximately 7% of its landslide area in 2003. Therefore, the sediment yield from subsequent expanded landslides is equal to or only slightly greater than the sediment yield in a typical base flow. Thus, we conclude that, in terms of its effect on the management of catchment-scale sediment movement, the sediment yield from subsequent expanded landslides is lower than the initial large-scale sediment yield caused by heavy rainfall.
NASA Astrophysics Data System (ADS)
Escuder-Bueno, I.; Castillo-Rodríguez, J. T.; Zechner, S.; Jöbstl, C.; Perales-Momparler, S.; Petaccia, G.
2012-09-01
Risk analysis has become a top priority for authorities and stakeholders in many European countries, with the aim of reducing flooding risk, considering the population's needs and improving risk awareness. Within this context, two methodological components were developed in the period 2009-2011 within the SUFRI project (Sustainable Strategies of Urban Flood Risk Management with non-structural measures to cope with the residual risk, 2nd ERA-Net CRUE Funding Initiative). First, the "SUFRI Methodology for pluvial and river flooding risk assessment in urban areas to inform decision-making" provides a comprehensive and quantitative tool for flood risk analysis. Second, the "Methodology for investigation of risk awareness of the population concerned" presents the basis to estimate current risk from a social perspective and identify tendencies in the way floods are understood by citizens. Outcomes of both methods are integrated in this paper with the aim of informing decision making on non-structural protection measures. The results of two case studies are shown to illustrate practical applications of this developed approach. The main advantage of the methodology presented herein is that it provides a quantitative estimate of flooding risk before and after investing in non-structural risk mitigation measures. It can be of great interest to decision makers, as it provides rational and solid information.
Ice nucleation by particles immersed in supercooled cloud droplets.
Murray, B J; O'Sullivan, D; Atkinson, J D; Webb, M E
2012-10-07
The formation of ice particles in the Earth's atmosphere strongly affects the properties of clouds and their impact on climate. Despite the importance of ice formation in determining the properties of clouds, the Intergovernmental Panel on Climate Change (IPCC, 2007) was unable to assess the impact of atmospheric ice formation in their most recent report because our basic knowledge is insufficient. Part of the problem is the paucity of quantitative information on the ability of various atmospheric aerosol species to initiate ice formation. Here we review and assess the existing quantitative knowledge of ice nucleation by particles immersed within supercooled water droplets. We introduce aerosol species which have been identified in the past as potentially important ice nuclei and address their ice-nucleating ability when immersed in a supercooled droplet. We focus on mineral dusts, biological species (pollen, bacteria, fungal spores and plankton), carbonaceous combustion products and volcanic ash. In order to make a quantitative comparison we first introduce several ways of describing ice nucleation and then summarise the existing information according to the time-independent (singular) approximation. Using this approximation in combination with typical atmospheric loadings, we estimate the importance of ice nucleation by different aerosol types. According to these estimates we find that ice nucleation below about -15 °C is dominated by soot and mineral dusts. Above this temperature the only materials known to nucleate ice are biological, with quantitative data for other materials absent from the literature. We conclude with a summary of the challenges our community faces.
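For reference, the time-independent (singular) description used for the comparison above, in its standard form (generic notation):

```latex
% Singular (time-independent) approximation: the fraction of droplets frozen at
% temperature T, for particles of surface area A per droplet, is set by the
% cumulative ice-nucleation-active site density n_s(T):
\[
  \frac{n_{\mathrm{ice}}(T)}{n_{\mathrm{tot}}} \;=\; 1 - \exp\!\big(-n_s(T)\, A\big)
\]
```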
NASA Technical Reports Server (NTRS)
Golombek, M. P.; Banerdt, W. B.
1985-01-01
While it is generally agreed that the strength of a planet's lithosphere is controlled by a combination of brittle sliding and ductile flow laws, predicting the geometry and initial characteristics of faults due to failure from stresses imposed on the lithospheric strength envelope has not been thoroughly explored. Researchers used lithospheric strength envelopes to analyze the extensional features found on Ganymede. This application provides a quantitative means of estimating early thermal profiles on Ganymede, thereby constraining its early thermal evolution.
Blood flow estimation in gastroscopic true-color images
NASA Astrophysics Data System (ADS)
Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans
1995-05-01
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived, which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
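As an illustration of the general idea (not the authors' exact Kubelka-Munk system model), a largely luminance-independent hemoglobin index can be computed per pixel from the red and green channels of a true-color endoscopic image; the logarithmic color-ratio form below is a commonly used stand-in, and the scale factor is an assumption.

```python
import numpy as np

def hemoglobin_index(rgb):
    """Per-pixel hemoglobin index from a true-color endoscopic image.

    Uses a log ratio of red to green reflectance, which is largely
    independent of overall luminance (scaling R and G by the same factor
    leaves the ratio unchanged). Simplified stand-in for the
    Kubelka-Munk-based estimation described in the abstract.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    return 32.0 * np.log2((r + 1e-6) / (g + 1e-6))  # scale factor assumed

# Example: doubling the illumination leaves the mean index essentially unchanged.
img = np.random.default_rng(0).integers(40, 200, size=(4, 4, 3))
print(hemoglobin_index(img).mean(), hemoglobin_index(2 * img).mean())
```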
Porter, K.A.; Jaiswal, K.S.; Wald, D.J.; Greene, M.; Comartin, Craig
2008-01-01
The U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) Project and the Earthquake Engineering Research Institute’s World Housing Encyclopedia (WHE) are creating a global database of building stocks and their earthquake vulnerability. The WHE already represents a growing, community-developed public database of global housing and its detailed structural characteristics. It currently contains more than 135 reports on particular housing types in 40 countries. The WHE-PAGER effort extends the WHE in several ways: (1) by addressing non-residential construction; (2) by quantifying the prevalence of each building type in both rural and urban areas; (3) by addressing day and night occupancy patterns; (4) by adding quantitative vulnerability estimates from judgment or statistical observation; and (5) by analytically deriving alternative vulnerability estimates using, in part, laboratory testing.
Evaluation of disease progression in INCL by MR spectroscopy
Baker, Eva H; Levin, Sondra W; Zhang, Zhongjian; Mukherjee, Anil B
2015-01-01
Objective Infantile neuronal ceroid lipofuscinosis (INCL) is a devastating neurodegenerative storage disease caused by palmitoyl-protein thioesterase-1 deficiency, which impairs degradation of palmitoylated proteins (constituents of ceroid) by lysosomal hydrolases. Consequent lysosomal ceroid accumulation leads to neuronal injury. As part of a pilot study to evaluate treatment benefits of cysteamine bitartrate and N-acetylcysteine, we quantitatively measured brain metabolite levels using magnetic resonance spectroscopy (MRS). Methods A subset of two patients from a larger treatment and follow-up study underwent serial quantitative single-voxel MRS examinations of five anatomical sites. Three echo times were acquired in order to estimate metabolite T2. Measured metabolite levels included correction for partial volume of cerebrospinal fluid. Comparison of INCL patients was made to a reference group composed of asymptomatic and minimally symptomatic Niemann-Pick disease type C patients. Results In INCL patients, N-acetylaspartate (NAA) was abnormally low at all locations upon initial measurement, and further declined throughout the follow-up period. In the cerebrum (affected early in the disease course), choline and myo-inositol were initially elevated and fell during the follow-up period, whereas in the cerebellum and brainstem (affected later), choline and myo-inositol were initially normal and rose subsequently. Interpretation Choline and myo-inositol levels in our patients are consistent with patterns of neuroinflammation observed in two INCL mouse models. Low, persistently declining NAA was expected based on the progressive, irreversible nature of the disease. Progression of metabolite levels in INCL has not been previously quantified; therefore the results of this study serve as a reference for quantitative evaluation of future therapeutic interventions. PMID:26339674
Evaluation of disease progression in INCL by MR spectroscopy.
Baker, Eva H; Levin, Sondra W; Zhang, Zhongjian; Mukherjee, Anil B
2015-08-01
Infantile neuronal ceroid lipofuscinosis (INCL) is a devastating neurodegenerative storage disease caused by palmitoyl-protein thioesterase-1 deficiency, which impairs degradation of palmitoylated proteins (constituents of ceroid) by lysosomal hydrolases. Consequent lysosomal ceroid accumulation leads to neuronal injury. As part of a pilot study to evaluate treatment benefits of cysteamine bitartrate and N-acetylcysteine, we quantitatively measured brain metabolite levels using magnetic resonance spectroscopy (MRS). A subset of two patients from a larger treatment and follow-up study underwent serial quantitative single-voxel MRS examinations of five anatomical sites. Three echo times were acquired in order to estimate metabolite T2. Measured metabolite levels included correction for partial volume of cerebrospinal fluid. Comparison of INCL patients was made to a reference group composed of asymptomatic and minimally symptomatic Niemann-Pick disease type C patients. In INCL patients, N-acetylaspartate (NAA) was abnormally low at all locations upon initial measurement, and further declined throughout the follow-up period. In the cerebrum (affected early in the disease course), choline and myo-inositol were initially elevated and fell during the follow-up period, whereas in the cerebellum and brainstem (affected later), choline and myo-inositol were initially normal and rose subsequently. Choline and myo-inositol levels in our patients are consistent with patterns of neuroinflammation observed in two INCL mouse models. Low, persistently declining NAA was expected based on the progressive, irreversible nature of the disease. Progression of metabolite levels in INCL has not been previously quantified; therefore the results of this study serve as a reference for quantitative evaluation of future therapeutic interventions.
Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches
Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe
2016-01-01
Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506
Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.
Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe
2016-01-01
Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations.
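To make the kinetic argument concrete: under kinetic control, the probability that the nascent aptamer captures TPP during a transcriptional pause of duration t_pause can be approximated, neglecting dissociation and the slower conformational-change step, as P_on = 1 - exp(-k_on * [TPP] * t_pause). The sketch below illustrates only this piece of the reasoning, not the authors' full quantitative model; the rate constant and pause duration are assumed values.

```python
import numpy as np

def switching_probability(tpp_conc_m, k_on=1e5, t_pause=10.0):
    """Probability that TPP binds the nascent aptamer during an RNA
    polymerase pause (pseudo-first-order capture; dissociation and the
    induced-fit step are neglected).

    k_on    : association rate constant, M^-1 s^-1 (assumed value)
    t_pause : polymerase pause duration, s (assumed value)
    """
    return 1.0 - np.exp(-k_on * tpp_conc_m * t_pause)

# ON/OFF behavior across a concentration range: the switch responds sharply
# around [TPP] ~ 1 / (k_on * t_pause), i.e. ~1 uM for the assumed parameters.
for c in [1e-8, 1e-7, 1e-6, 1e-5, 1e-4]:
    print(f"[TPP] = {c:.0e} M  ->  P(bound during pause) = {switching_probability(c):.3f}")
```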
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite of the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term into the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
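For readers unfamiliar with the localized clustering idea, the following minimal sketch shows one way the alternating updates implied by such a locally weighted K-means energy can look: the bias field b and the cluster centers c_i each have closed-form minimizers expressed through Gaussian-window convolutions of the image and the membership maps. This is a simplified illustration of the general scheme (hard memberships, no level set evolution or regularization), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_bias_and_classes(image, n_classes=3, sigma=8.0, n_iter=20):
    """Alternating estimation of a smooth multiplicative bias field and
    tissue class centers from a locally weighted clustering criterion:

        E = sum_i  int int K(y - x) |I(x) - b(y) c_i|^2 u_i(x) dx dy

    with Gaussian kernel K. Simplified sketch: hard memberships u_i,
    no level-set regularization.
    """
    img = image.astype(np.float64)
    bias = np.ones_like(img)
    centers = np.linspace(img.min(), img.max(), n_classes + 2)[1:-1]

    for _ in range(n_iter):
        # 1) hard membership: assign each voxel to the nearest b * c_i
        dist = np.stack([(img - bias * c) ** 2 for c in centers])
        labels = np.argmin(dist, axis=0)
        members = [(labels == i).astype(np.float64) for i in range(n_classes)]

        # 2) closed-form update of the class centers
        kb = gaussian_filter(bias, sigma)
        kb2 = gaussian_filter(bias ** 2, sigma)
        centers = np.array([
            (u * img * kb).sum() / max((u * kb2).sum(), 1e-12) for u in members
        ])

        # 3) closed-form update of the (smooth) bias field
        num = sum(c * gaussian_filter(img * u, sigma) for c, u in zip(centers, members))
        den = sum(c ** 2 * gaussian_filter(u, sigma) for c, u in zip(centers, members))
        bias = num / np.maximum(den, 1e-12)

    corrected = img / np.maximum(bias, 1e-12)
    return bias, centers, corrected
```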
Global, quantitative and dynamic mapping of protein subcellular localization
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg HH
2016-01-01
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology. DOI: http://dx.doi.org/10.7554/eLife.16950.001 PMID:27278775
Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.
Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian
2013-01-01
Owing to the uneven distribution of contrast agent and the perspective projection of X-ray imaging, vessels in angiographic images have low contrast and are generally superimposed on other tissues; therefore, it is very difficult to identify the vasculature and quantitatively estimate blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, the ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points can be adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connecting relationships of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm. The proposed algorithm is compared with two other vascular tracking techniques in terms of efficiency and accuracy, which demonstrates successful application of the proposed segmentation and extraction scheme in vasculature identification.
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
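Although the article's model is fit with linear mixed effects software, the core prediction step reduces to conditioning a multivariate normal distribution of (log) fit factors on the observed initial tests. The sketch below illustrates that conditioning for one worker, using assumed variance components (between-subject, between-day, within-day); it is not the authors' fitted model and the numbers are illustrative.

```python
import numpy as np

def predict_future_fit_factor(initial_log_ff, var_subject=0.30,
                              var_day=0.10, var_within=0.05):
    """Conditional distribution of a future log fit factor given two
    same-day initial measurements, from a simple variance-components
    (random subject + day effect + residual) structure.

    Variance components are illustrative assumptions, not fitted values.
    """
    # Covariance of (test1 day 0, test2 day 0, future test on a new day)
    v_tot = var_subject + var_day + var_within
    cov_same_day = var_subject + var_day        # shared subject and day effects
    cov_diff_day = var_subject                  # shared subject effect only
    sigma = np.array([[v_tot, cov_same_day, cov_diff_day],
                      [cov_same_day, v_tot, cov_diff_day],
                      [cov_diff_day, cov_diff_day, v_tot]])
    mu = np.zeros(3)                            # centered log fit factors

    # Standard Gaussian conditioning: future | observed
    s11, s12 = sigma[:2, :2], sigma[:2, 2]
    cond_mean = mu[2] + s12 @ np.linalg.solve(s11, initial_log_ff - mu[:2])
    cond_var = sigma[2, 2] - s12 @ np.linalg.solve(s11, s12)
    return cond_mean, np.sqrt(cond_var)

# Worker whose two initial tests were 0.4 log units above the population mean:
m, s = predict_future_fit_factor(np.array([0.4, 0.4]))
print(f"predicted future log fit factor: {m:.2f} +/- {1.96 * s:.2f} (95% range)")
```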
Wang, Xin; Wu, Linhui; Yi, Xi; Zhang, Yanqi; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2015-01-01
Due to both the physiological and morphological differences in vascularization between healthy and diseased tissues, pharmacokinetic diffuse fluorescence tomography (DFT) can provide contrast-enhanced and comprehensive information for tumor diagnosis and staging. In this regime, the extended Kalman filtering (EKF) based method shows numerous advantages, including accurate modeling, online estimation of multiple parameters, and universal applicability to any optical fluorophore. Nevertheless, the performance of the conventional EKF hinges on exact, and in practice inaccessible, prior knowledge of the initial values. To address this issue, an adaptive-EKF scheme based on a two-compartment model is proposed, which utilizes a variable forgetting factor to compensate for inaccuracy in the initial states and to emphasize the effect of the current data. It is demonstrated using two-dimensional simulations on a circular domain that the proposed adaptive-EKF obtains better estimates of the pharmacokinetic rates than the conventional EKF and the enhanced EKF in terms of quantitativeness, noise robustness, and independence from initialization. Further three-dimensional numerical experiments on a digital mouse model validate the efficacy of the method as applied in realistic biological systems.
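The forgetting-factor idea can be illustrated with a generic EKF predict/update cycle in which the predicted covariance is inflated by 1/lambda (lambda <= 1), so stale information, including a poor initial state guess, is progressively discounted in favor of recent data. This is a minimal generic sketch of a forgetting-factor EKF step, not the paper's two-compartment DFT implementation, and the variable-lambda rule used by the authors is replaced here by a fixed assumed value.

```python
import numpy as np

def ekf_step_with_forgetting(x, P, z, f, F_jac, h, H_jac, Q, R, lam=0.98):
    """One EKF predict/update cycle with a forgetting factor.

    lam < 1 inflates the predicted covariance (P / lam), which discounts
    older information and reduces sensitivity to a poor initial state.
    f, h are the state-transition and measurement functions; F_jac, H_jac
    are their Jacobians evaluated at the current estimate.
    """
    # Predict, with covariance inflation by the forgetting factor
    x_pred = f(x)
    F = F_jac(x)
    P_pred = (F @ P @ F.T) / lam + Q

    # Update with the new measurement z
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```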
NASA Technical Reports Server (NTRS)
Brown, G. S.; Curry, W. J.
1977-01-01
The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal to noise ratio. Other sources of error are addressed and evaluated with inadequate calibration being of major concern. The impact of pointing error on the computation of normalized surface scattering cross section (sigma) from radar and the waveform attitude induced altitude bias is considered and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.
Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan
2015-01-01
High field, preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the highly novel Magnetic Resonance Fingerprinting (MRF) methodology that has been previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694
Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A; Vincent, Jason A; Dell, Katherine M; Drumm, Mitchell L; Brady-Kalnay, Susann M; Griswold, Mark A; Flask, Chris A; Lu, Lan
2015-03-01
High-field preclinical MRI scanners are now commonly used to quantitatively assess disease status and the efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical 7.0-T MRI implementation of the highly novel MR fingerprinting (MRF) methodology which has been described previously for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a fast imaging with steady-state free precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 min. This initial high-field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for the quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. Copyright © 2015 John Wiley & Sons, Ltd.
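The dictionary-matching step at the heart of MRF is straightforward to sketch: each measured signal evolution is compared against a precomputed dictionary of simulated evolutions (one per candidate T1/T2 pair), and the best match by normalized inner product yields the parameter estimates plus a proton-density-like scale factor. A minimal illustration (not the authors' FISP-specific implementation) follows.

```python
import numpy as np

def mrf_dictionary_match(signals, dictionary, t1_values, t2_values):
    """Match measured MRF signal evolutions to a simulated dictionary.

    signals    : (n_voxels, n_timepoints) measured evolutions
    dictionary : (n_entries, n_timepoints) simulated evolutions,
                 one row per candidate (T1, T2) pair
    Returns per-voxel T1, T2 and a proton-density-like scale factor.
    """
    # Normalize rows so the match depends on signal shape, not amplitude
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)

    corr = s_norm @ d_norm.T                   # inner products with every entry
    best = np.argmax(np.abs(corr), axis=1)     # best-matching dictionary entry

    # Amplitude of the match acts as a proton-density estimate
    pd = (np.abs(np.sum(signals * dictionary[best], axis=1))
          / np.sum(dictionary[best] ** 2, axis=1))
    return t1_values[best], t2_values[best], pd
```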
Julkunen, Petro; Kiviranta, Panu; Wilson, Wouter; Jurvelin, Jukka S; Korhonen, Rami K
2007-01-01
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram
2008-04-01
A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, data-based, and docked conformer-based alignment. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 mL min^-1 g^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor between dose reduction via reduced tube current and dose reduction via reduced temporal sampling. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
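As a concrete, deliberately simplified illustration of model-based MBF estimation from a time-attenuation curve, the sketch below fits a single-compartment convolution model C_t(t) = K1 * (C_a conv exp(-k2 t)) to a tissue curve given an arterial input function, and also computes a qualitative slope index for comparison. It is not the two-compartment or axially distributed models evaluated in the paper, and the synthetic curves and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 30, 1.0)                       # seconds, 1 s sampling
dt = t[1] - t[0]
aif = 400.0 * (t / 6.0) * np.exp(1 - t / 6.0)   # synthetic arterial input (HU), gamma-variate shape

def one_compartment(t, k1, k2):
    """Tissue enhancement: C_t(t) = K1 * (C_a conv exp(-k2 t))."""
    irf = np.exp(-k2 * t)
    return k1 * np.convolve(aif, irf)[: len(t)] * dt

# Synthetic "measured" tissue curve with noise (illustrative truth: K1 = 0.01 /s, k2 = 0.05 /s)
rng = np.random.default_rng(1)
tissue = one_compartment(t, 0.01, 0.05) + rng.normal(0, 1.0, len(t))

# Model-based estimate
(k1_hat, k2_hat), _ = curve_fit(one_compartment, t, tissue, p0=(0.005, 0.02))

# Qualitative slope-based index: maximum tissue upslope / peak arterial enhancement
slope_index = np.max(np.gradient(tissue, dt)) / np.max(aif)

print(f"model K1 estimate : {k1_hat:.4f} per s")
print(f"slope-based index : {slope_index:.4f} per s")
```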
Danyluk, Michelle D; Schaffner, Donald W
2011-05-01
This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process.
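The structure of such a risk model is easy to illustrate: draw an initial contamination level, let the population grow during temperature abuse (capped at roughly 1 log CFU/day under optimal conditions, as stated above), convert to cells per serving, and apply a dose-response model. The Monte Carlo sketch below mirrors that structure in Python rather than @RISK; the distributions, the dose-response parameters, and the serving size are illustrative assumptions, not values from the published model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

serving_g = 85.0                                      # assumed serving size, g
prevalence = 0.001                                    # 0.1% of servings contaminated
contaminated = rng.random(n_sims) < prevalence

# Initial level: -1 log CFU/g on average (assumed 0.5 log spread)
log_cfu_g = rng.normal(-1.0, 0.5, n_sims)

# Growth during temperature abuse: up to ~1 log CFU/day, for 0-3 abuse days
abuse_days = rng.uniform(0, 3, n_sims)
growth_rate = rng.uniform(0.2, 1.0, n_sims)           # log CFU/day, abuse-dependent
log_cfu_g += abuse_days * growth_rate

dose = np.where(contaminated, 10 ** log_cfu_g * serving_g, 0.0)

# Beta-Poisson dose-response (alpha, beta are illustrative placeholders)
alpha, beta = 0.16, 15.7
p_ill = np.where(dose > 0, 1 - (1 + dose / beta) ** (-alpha), 0.0)

print(f"mean cells/serving in contaminated servings: {dose[contaminated].mean():.1f}")
print(f"predicted illnesses per million servings  : {p_ill.mean() * 1e6:.1f}")
```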
3D motion and strain estimation of the heart: initial clinical findings
NASA Astrophysics Data System (ADS)
Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan
2010-03-01
The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, therefore allowing 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.
Interspecific competition in plants: how well do current methods answer fundamental questions?
Connolly, J; Wayne, P; Bazzaz, F A
2001-02-01
Accurately quantifying and interpreting the processes and outcomes of competition among plants is essential for evaluating theories of plant community organization and evolution. We argue that many current experimental approaches to quantifying competitive interactions introduce size bias, which may significantly impact the quantitative and qualitative conclusions drawn from studies. Size bias generally arises when estimates of competitive ability are erroneously influenced by the initial size of competing individuals. We employ a series of quantitative thought experiments to demonstrate the potential for size bias in analysis of four traditional experimental designs (pairwise, replacement series, additive series, and response surfaces) either when only final measurements are available or when both initial and final measurements are collected. We distinguish three questions relevant to describing competitive interactions: Which species dominates? Which species gains? and How do species affect each other? The choice of experimental design and measurements greatly influences the scope of inference permitted. Conditions under which the latter two questions can give biased information are tabulated. We outline a new approach to characterizing competition that avoids size bias and that improves the concordance between research question and experimental design. The implications of the choice of size metrics used to quantify both the initial state and the responses of elements in interspecific mixtures are discussed. The relevance of size bias in competition studies with organisms other than plants is also discussed.
Michael J. Firko; Jane Leslie Hayes
1990-01-01
Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
Marine, Rachel; McCarren, Coleen; Vorrasane, Vansay; Nasko, Dan; Crowgey, Erin; Polson, Shawn W; Wommack, K Eric
2014-01-30
Shotgun metagenomics has become an important tool for investigating the ecology of microorganisms. Underlying these investigations is the assumption that metagenome sequence data accurately estimates the census of microbial populations. Multiple displacement amplification (MDA) of microbial community DNA is often used in cases where it is difficult to obtain enough DNA for sequencing; however, MDA can result in amplification biases that may impact subsequent estimates of population census from metagenome data. Some have posited that pooling replicate MDA reactions negates these biases and restores the accuracy of population analyses. This assumption has not been empirically tested. Using mock viral communities, we examined the influence of pooling on population-scale analyses. In pooled and single reaction MDA treatments, sequence coverage of viral populations was highly variable and coverage patterns across viral genomes were nearly identical, indicating that initial priming biases were reproducible and that pooling did not alleviate biases. In contrast, control unamplified sequence libraries showed relatively even coverage across phage genomes. MDA should be avoided for metagenomic investigations that require quantitative estimates of microbial taxa and gene functional groups. While MDA is an indispensable technique in applications such as single-cell genomics, amplification biases cannot be overcome by combining replicate MDA reactions. Alternative library preparation techniques should be utilized for quantitative microbial ecology studies utilizing metagenomic sequencing approaches.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-10
... sale and offering of securities. In addition, BX would not list any company that meets the quantitative... listing. BX has proposed the following quantitative listing standards for the initial listing of...
The Global Precipitation Mission
NASA Technical Reports Server (NTRS)
Braun, Scott; Kummerow, Christian
2000-01-01
The Global Precipitation Mission (GPM), expected to begin around 2006, is a follow-up to the Tropical Rainfall Measuring Mission (TRMM). Unlike TRMM, which primarily samples the tropics, GPM will sample both the tropics and mid-latitudes. The primary, or core, satellite will be a single, enhanced TRMM satellite that can quantify the 3-D spatial distributions of precipitation and its associated latent heat release. The core satellite will be complemented by a constellation of very small and inexpensive drones with passive microwave instruments that will sample the rainfall with sufficient frequency to be not only of climate interest, but also have local, short-term impacts by providing global rainfall coverage at approx. 3 h intervals. The data is expected to have substantial impact upon quantitative precipitation estimation/forecasting and data assimilation into global and mesoscale numerical models. Based upon previous studies of rainfall data assimilation, GPM is expected to lead to significant improvements in forecasts of extratropical and tropical cyclones. For example, GPM rainfall data can provide improved initialization of frontal systems over the Pacific and Atlantic Oceans. The purpose of this talk is to provide information about GPM to the USWRP (U.S. Weather Research Program) community and to discuss impacts on quantitative precipitation estimation/forecasting and data assimilation.
Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods
NASA Astrophysics Data System (ADS)
Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-02-01
We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.
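The essence of the temperature extrapolation can be written as a Taylor expansion of the grand canonical particle-number free energy landscape in inverse temperature: at fixed chemical potential, d[lnPi(N) - lnPi(0)]/dbeta = mu*N - <U>_N + <U>_0, so a landscape sampled at beta_1, together with the N-resolved average energies, yields a first-order estimate at beta_2. The sketch below implements only that first-order step under these stated assumptions; it is a simplified illustration, not the authors' full scheme.

```python
import numpy as np

def extrapolate_lnpi(lnpi, u_avg, mu, beta_from, beta_to):
    """First-order extrapolation of a grand canonical macrostate
    distribution lnPi(N) from beta_from to beta_to at fixed chemical
    potential mu.

    lnpi  : lnPi(N) sampled at beta_from (arbitrary normalization)
    u_avg : canonical average potential energy <U>_N for each N bin
    Uses d[lnPi(N) - lnPi(0)]/dbeta = mu*N - <U>_N + <U>_0.
    """
    n = np.arange(len(lnpi))
    dlnpi_dbeta = mu * n - u_avg + u_avg[0]
    lnpi_new = lnpi + (beta_to - beta_from) * dlnpi_dbeta
    return lnpi_new - lnpi_new[0]              # re-reference to the N = 0 state

# Usage: lnpi_cold = extrapolate_lnpi(lnpi_hot, u_avg_hot, mu, beta_hot, beta_cold)
```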
A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS
While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...
Imbibition of hydraulic fracturing fluids into partially saturated shale
NASA Astrophysics Data System (ADS)
Birdsell, Daniel T.; Rajaram, Harihar; Lackey, Greg
2015-08-01
Recent studies suggest that imbibition of hydraulic fracturing fluids into partially saturated shale is an important mechanism that restricts their migration, thus reducing the risk of groundwater contamination. We present computations of imbibition based on an exact semianalytical solution for spontaneous imbibition. These computations lead to quantitative estimates of an imbibition rate parameter (A) with units of L T^(-1/2) for shale, which is related to porous medium and fluid properties, and the initial water saturation. Our calculations suggest that significant fractions of injected fluid volumes (15-95%) can be imbibed in shale gas systems, whereas imbibition volumes in shale oil systems are much lower (3-27%). We present a nondimensionalization of A, which provides insights into the critical factors controlling imbibition, and facilitates the estimation of A based on readily measured porous medium and fluid properties. For a given set of medium and fluid properties, A varies by less than factors of ~1.8 (gas nonwetting phase) and ~3.4 (oil nonwetting phase) over the range of initial water saturations reported for the Marcellus shale (0.05-0.6). However, for higher initial water saturations, A decreases significantly. The intrinsic permeability of the shale and the viscosity of the fluids are the most important properties controlling the imbibition rate.
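Because spontaneous imbibition follows a square-root-of-time law, the cumulative volume drawn into the matrix per unit fracture area is simply A * sqrt(t), which makes the imbibed fraction of the injected volume easy to estimate. The sketch below performs that bookkeeping for an assumed fracture face area, shut-in time, and injected volume; the numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def imbibed_fraction(A_m_per_sqrt_s, fracture_area_m2, time_s, injected_volume_m3):
    """Fraction of injected fluid imbibed into the matrix.

    A is the imbibition rate parameter (units of length / sqrt(time)):
    cumulative imbibed depth = A * sqrt(t), so imbibed volume = A * sqrt(t) * area.
    """
    imbibed_volume = A_m_per_sqrt_s * np.sqrt(time_s) * fracture_area_m2
    return min(imbibed_volume / injected_volume_m3, 1.0), imbibed_volume

# Illustrative assumptions: A = 3e-6 m/sqrt(s), 5e4 m^2 of fracture face area,
# a 10-day shut-in, and 1500 m^3 of injected fluid.
frac, vol = imbibed_fraction(3e-6, 5e4, 10 * 86400, 1500.0)
print(f"imbibed volume: {vol:.0f} m^3  ({100 * frac:.0f}% of injected fluid)")
```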
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
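The initial-rates idea is simple to illustrate: regress degradant level against time at the intended storage condition, then extrapolate to the time at which the specification limit is reached, carrying the slope's uncertainty into the shelf-life estimate. The sketch below (synthetic data, assumed 0.5% specification limit) shows one minimal way to do this; it is not the authors' LC/MS workflow.

```python
import numpy as np
from scipy import stats

# Synthetic degradant measurements (% of parent) over the first 12 weeks
weeks = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
degradant = np.array([0.002, 0.011, 0.019, 0.032, 0.041, 0.048, 0.061])

fit = stats.linregress(weeks, degradant)
spec_limit = 0.5                                 # assumed shelf-life specification, %

# Point estimate and a simple 95% interval from the slope's standard error
shelf_life = spec_limit / fit.slope
slope_lo = fit.slope - 1.96 * fit.stderr
slope_hi = fit.slope + 1.96 * fit.stderr
print(f"degradant formation rate: {fit.slope:.4f} %/week")
print(f"estimated shelf life    : {shelf_life:.0f} weeks "
      f"(approx. 95% range {spec_limit / slope_hi:.0f}-{spec_limit / slope_lo:.0f} weeks)")
```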
Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa
2002-08-01
Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligram per cubic meter (mg/m(3)) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m(3) respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.
Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja
2016-11-01
To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
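For reference, agreement statistics of the kind reported above are simple to reproduce: Cohen's kappa compares observed category agreement with the agreement expected by chance. The snippet below computes kappa for two readers' BI-RADS-style FGT categories; the example ratings are invented purely for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Invented FGT categories (a-d) assigned by two readers to the same 12 exams
reader_1 = ["a", "b", "b", "c", "c", "d", "b", "a", "c", "d", "b", "c"]
reader_2 = ["a", "b", "c", "c", "b", "d", "b", "a", "c", "d", "c", "c"]

print("unweighted kappa     :", round(cohen_kappa_score(reader_1, reader_2), 3))
# Weighted kappa credits near-misses, often preferred for ordered categories
print("linear-weighted kappa:",
      round(cohen_kappa_score(reader_1, reader_2, weights="linear"), 3))
```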
Winzer, Eva; Luger, Maria; Schindler, Karin
2018-06-01
Regular monitoring of food intake is hardly integrated in clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate and also quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal, the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content were examined. Tests of validity revealed stronger correlations for photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared to the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable in monitoring food intake in clinical setting, which enables a quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might be also advantageous for quantitative and qualitative evaluation of food waste, with a resultantly reduction in costs.
Lagares, Alfonso; Jiménez-Roldán, Luis; Gomez, Pedro A; Munarriz, Pablo M; Castaño-León, Ana M; Cepeda, Santiago; Alén, José F
2015-12-01
Quantitative estimation of the hemorrhage volume associated with aneurysm rupture is a new tool for assessing prognosis. To determine the prognostic value of the quantitative estimation of the amount of bleeding after aneurysmal subarachnoid hemorrhage, as well as the relative importance of this factor compared with other prognostic indicators, and to establish a possible cut-off value of volume of bleeding related to poor outcome. A prospective cohort of 206 patients consecutively admitted with the diagnosis of aneurysmal subarachnoid hemorrhage to Hospital 12 de Octubre were included in the study. Subarachnoid, intraventricular, intracerebral, and total bleeding volumes were calculated using analytic software. For assessing factors related to prognosis, univariate and multivariate analyses (logistic regression) were performed. The relative importance of factors in determining prognosis was established by calculating their proportion of explained variation. The maximum Youden index was calculated to determine the optimal cut point for subarachnoid and total bleeding volume. Variables independently related to prognosis were clinical grade at admission, age, and the different bleeding volumes. The proportion of variance explained is higher for subarachnoid bleeding. The optimal cut point related to poor prognosis is a volume of 20 mL both for subarachnoid and total bleeding. Volumetric measurements of subarachnoid and total bleeding volume are both independent prognostic factors in patients with aneurysmal subarachnoid hemorrhage. A volume of more than 20 mL of blood in the initial noncontrast computed tomography is related to a clear increase in poor outcome risk. Abbreviation: aSAH, aneurysmal subarachnoid hemorrhage.
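The cut-point analysis mentioned here follows a standard recipe: compute the ROC curve of bleeding volume against poor outcome and choose the volume that maximizes the Youden index J = sensitivity + specificity - 1. A minimal sketch with synthetic data (not the study's cohort) follows.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic cohort: bleeding volumes (mL) for good and poor outcomes
vol_good = rng.gamma(shape=2.0, scale=6.0, size=150)    # mostly well below 20 mL
vol_poor = rng.gamma(shape=4.0, scale=8.0, size=60)     # larger volumes on average
volumes = np.concatenate([vol_good, vol_poor])
poor_outcome = np.concatenate([np.zeros(150), np.ones(60)])

fpr, tpr, thresholds = roc_curve(poor_outcome, volumes)
youden = tpr - fpr                                      # J = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"optimal cut point: {thresholds[best]:.1f} mL "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```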
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
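The kind of simulation described can be reproduced in a few lines: generate true biomarker values, produce measurements with a chosen fixed bias, regression slope, and precision, build the no-bias confidence interval for each new case, and count how often it covers the truth. The sketch below is a minimal version of that experiment with assumed parameter values, not the full study design.

```python
import numpy as np

def ci_coverage(n_patients=100_000, slope=1.0, fixed_bias=0.0,
                precision_sd=5.0, ci_z=1.96, seed=0):
    """Coverage of the naive CI (measurement +/- z * precision) for the
    true biomarker value, under a linear measurement model:
    measurement = slope * truth + fixed_bias + noise.
    """
    rng = np.random.default_rng(seed)
    truth = rng.uniform(20, 100, n_patients)             # assumed measurand range
    meas = slope * truth + fixed_bias + rng.normal(0, precision_sd, n_patients)
    lo, hi = meas - ci_z * precision_sd, meas + ci_z * precision_sd
    return np.mean((truth >= lo) & (truth <= hi))

print("no bias        :", round(ci_coverage(), 3))                  # ~0.95
print("fixed bias +6  :", round(ci_coverage(fixed_bias=6.0), 3))    # coverage drops
print("slope 0.90     :", round(ci_coverage(slope=0.90), 3))        # slope != 1 also degrades coverage
```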
33 CFR 154.804 - Review, certification, and initial inspection.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., property, and the environment if an accident were to occur; and (4) If a quantitative failure analysis is... quantitative failure analysis. (e) The certifying entity must conduct all initial inspections and witness all...
33 CFR 154.804 - Review, certification, and initial inspection.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., property, and the environment if an accident were to occur; and (4) If a quantitative failure analysis is... quantitative failure analysis. (e) The certifying entity must conduct all initial inspections and witness all...
33 CFR 154.804 - Review, certification, and initial inspection.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., property, and the environment if an accident were to occur; and (4) If a quantitative failure analysis is... quantitative failure analysis. (e) The certifying entity must conduct all initial inspections and witness all...
33 CFR 154.804 - Review, certification, and initial inspection.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., property, and the environment if an accident were to occur; and (4) If a quantitative failure analysis is... quantitative failure analysis. (e) The certifying entity must conduct all initial inspections and witness all...
A simple estimate of funneling-assisted charge collection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmonds, L.D.
In this paper, funneling is qualitatively discussed in detail and a quantitative analysis is given for the total (time-integrated) collected charge. It is shown that for an n+/p junction, the total collected charge Q_T is given by Q_T = (1 + mu_n/mu_p) Q_D + 2 Q_diff, where Q_D is the charge initially liberated in the depletion region and Q_diff is the charge collected by diffusion. This equation does not apply to very short ion tracks or to devices having a thin epilayer.
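As a quick numerical illustration of the formula above (the mobilities and charge values are hypothetical, not taken from the paper):

```python
def total_collected_charge(q_dep, q_diff, mu_n, mu_p):
    """Total collected charge Q_T = (1 + mu_n/mu_p) * Q_D + 2 * Q_diff.

    q_dep  : charge initially liberated in the depletion region (Q_D)
    q_diff : charge collected by diffusion (Q_diff)
    mu_n, mu_p : electron and hole mobilities
    """
    return (1.0 + mu_n / mu_p) * q_dep + 2.0 * q_diff

# Illustrative silicon-like mobilities (cm^2/V/s); charges in pC
print(total_collected_charge(q_dep=0.1, q_diff=0.05, mu_n=1350.0, mu_p=480.0))
```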
Measuring and managing risk improves strategic financial planning.
Kleinmuntz, D N; Kleinmuntz, C E; Stephen, R G; Nordlund, D S
1999-06-01
Strategic financial risk assessment is a practical technique that can enable healthcare strategic decision makers to perform quantitative analyses of the financial risks associated with a given strategic initiative. The technique comprises six steps: (1) list risk factors that might significantly influence the outcomes, (2) establish best-guess estimates for assumptions regarding how each risk factor will affect its financial outcomes, (3) identify risk factors that are likely to have the greatest impact, (4) assign probabilities to assumptions, (5) determine potential scenarios associated with combined assumptions, and (6) determine the probability-weighted average of the potential scenarios.
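Step 6 is a standard probability-weighted average; a minimal sketch with hypothetical scenario probabilities and financial outcomes:

```python
# Hypothetical scenarios for a strategic initiative: (probability, outcome in $M)
scenarios = [
    (0.20, -5.0),   # downside: key risk factors go badly
    (0.50,  2.0),   # base case
    (0.30,  6.0),   # upside
]

total_p = sum(p for p, _ in scenarios)
assert abs(total_p - 1.0) < 1e-9, "scenario probabilities should sum to 1"

expected_outcome = sum(p * outcome for p, outcome in scenarios)
print(f"Probability-weighted average outcome: {expected_outcome:.2f} $M")
```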
New EVSE Analytical Tools/Models: Electric Vehicle Infrastructure Projection Tool (EVI-Pro)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Eric W; Rames, Clement L; Muratori, Matteo
This presentation addresses the fundamental question of how much charging infrastructure is needed in the United States to support PEVs. It complements ongoing EVSE initiatives by providing a comprehensive analysis of national PEV charging infrastructure requirements. The result is a quantitative estimate for a U.S. network of non-residential (public and workplace) EVSE that would be needed to support broader PEV adoption. The analysis provides guidance to public and private stakeholders who are seeking to provide nationwide charging coverage, improve the EVSE business case by maximizing station utilization, and promote effective use of private/public infrastructure investments.
TOXNET: Toxicology Data Network
... 4. Supporting Data for Carcinogenicity ... II.B. Quantitative Estimate of Carcinogenic Risk from Oral Exposure ... of Confidence (Carcinogenicity, Oral Exposure) ... II.C. Quantitative Estimate of Carcinogenic Risk from Inhalation Exposure ...
Evolutionary Technique for Automated Synthesis of Electronic Circuits
NASA Technical Reports Server (NTRS)
Stoica, Adrian (Inventor); Salazar-Lazaro, Carlos Harold (Inventor)
2003-01-01
A method for evolving a circuit comprising configuring a plurality of transistors using a plurality of reconfigurable switches so that each of the plurality of transistors has a terminal coupled to a terminal of another of the plurality of transistors that is controllable by a single reconfigurable switch, the plurality of reconfigurable switches being controlled in response to a chromosome pattern. The plurality of reconfigurable switches may be controlled using an annealing function. As such, the plurality of reconfigurable switches may be controlled by selecting qualitative values for the plurality of reconfigurable switches in response to the chromosomal pattern, selecting initial quantitative values for the selected qualitative values, and morphing the initial quantitative values. Typically, subsequent quantitative values will be selected to be more divergent than the initial quantitative values. The morphing process may continue to partially or completely polarize the quantitative values.
Internal and External Crisis Early Warning and Monitoring.
1980-12-01
refining EWAMS. Initial EWAMS research revolved around the testing of quantitative political indicators, the development of general scans, and the... ...generalizations reinforce the desirability of the research from the vantage point of the I&W thrust. One is the proliferation of quantitative and...
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
Reactivation of Herpes Simplex Virus Type 2 After Initiation of Antiretroviral Therapy
Tobian, Aaron A. R.; Grabowski, Mary K.; Serwadda, David; Newell, Kevin; Ssebbowa, Paschal; Franco, Veronica; Nalugoda, Fred; Wawer, Maria J.; Gray, Ronald H.; Quinn, Thomas C.; Reynolds, Steven J.
2013-01-01
Background. The association between initiation of antiretroviral therapy (ART) for human immunodeficiency virus (HIV) infection and possible herpes simplex virus type 2 (HSV-2) shedding and genital ulcer disease (GUD) has not been evaluated. Methods. GUD and vaginal HSV-2 shedding were evaluated among women coinfected with HIV and HSV-2 (n = 440 for GUD and n = 96 for HSV-2 shedding) who began ART while enrolled in a placebo-controlled trial of HSV-2 suppression with acyclovir in Rakai, Uganda. Monthly vaginal swabs were tested for HSV-2 shedding, using a real-time quantitative polymerase chain reaction assay. Prevalence risk ratios (PRRs) of GUD were estimated using log binomial regression. Random effects logistic regression was used to estimate odds ratios (ORs) of HSV-2 shedding. Results. Compared with pre-ART values, GUD prevalence increased significantly within the first 3 months after ART initiation (adjusted PRR, 1.94; 95% confidence interval [CI], 1.04–3.62) and returned to baseline after 6 months of ART (adjusted PRR, 0.80; 95% CI, .35–1.80). Detection of HSV-2 shedding was highest in the first 3 months after ART initiation (adjusted OR, 2.58; 95% CI, 1.48–4.49). HSV-2 shedding was significantly less common among women receiving acyclovir (adjusted OR, 0.13; 95% CI, .04–.41). Conclusions. The prevalence of HSV-2 shedding and GUD increased significantly after ART initiation, possibly because of immune reconstitution inflammatory syndrome. Acyclovir significantly reduced both GUD and HSV-2 shedding and should be considered to mitigate these effects following ART initiation. PMID:23812240
NASA Astrophysics Data System (ADS)
Zhang, J.; Fang, N. Z.
2017-12-01
A potential flood forecast system is under development for the Upper Trinity River Basin (UTRB) in north-central Texas using the WRF-Hydro model. The Routing Application for the Parallel Computation of Discharge (RAPID) is utilized as the channel routing module to simulate streamflow. Model performance analysis was conducted based on three quantitative precipitation estimates (QPE): the North American Land Data Assimilation System (NLDAS) rainfall, the Multi-Radar Multi-Sensor (MRMS) QPE, and the National Centers for Environmental Prediction (NCEP) quality-controlled stage IV estimates. Prior to hydrologic simulation, QPE performance is assessed on two time scales (daily and hourly) using the Community Collaborative Rain, Hail and Snow Network (CoCoRaHS) and Hydrometeorological Automated Data System (HADS) hourly products. The calibrated WRF-Hydro model was then evaluated by comparing the simulated streamflow driven by the various QPE products against USGS observations. The results imply that the NCEP stage IV estimates have the best accuracy among the three QPEs on both time scales, while the NLDAS rainfall performs poorly because of its coarse spatial resolution. Furthermore, precipitation bias has a pronounced impact on flood forecasting skill, as the root mean squared errors are significantly reduced by replacing NLDAS rainfall with NCEP stage IV estimates. This study also demonstrates that accurate simulation results can be achieved when initial soil moisture values are well understood in the WRF-Hydro model. Future research effort will therefore be invested in incorporating data assimilation, with a focus on the initial states of soil properties for the UTRB.
Scheper, Carsten; Wensch-Dorendorf, Monika; Yin, Tong; Dressel, Holger; Swalve, Herrmann; König, Sven
2016-06-29
Intensified selection of polled individuals has recently gained importance in predominantly horned dairy cattle breeds as an alternative to routine dehorning. The status quo of the current polled breeding pool of genetically-closely related artificial insemination sires with lower breeding values for performance traits raises questions regarding the effects of intensified selection based on this founder pool. We developed a stochastic simulation framework that combines the stochastic simulation software QMSim and a self-designed R program named QUALsim that acts as an external extension. Two traits were simulated in a dairy cattle population for 25 generations: one quantitative (QMSim) and one qualitative trait with Mendelian inheritance (i.e. polledness, QUALsim). The assignment scheme for qualitative trait genotypes initiated realistic initial breeding situations regarding allele frequencies, true breeding values for the quantitative trait and genetic relatedness. Intensified selection for polled cattle was achieved using an approach that weights estimated breeding values in the animal best linear unbiased prediction model for the quantitative trait depending on genotypes or phenotypes for the polled trait with a user-defined weighting factor. Selection response for the polled trait was highest in the selection scheme based on genotypes. Selection based on phenotypes led to significantly lower allele frequencies for polled. The male selection path played a significantly greater role for a fast dissemination of polled alleles compared to female selection strategies. Fixation of the polled allele implies selection based on polled genotypes among males. In comparison to a base breeding scenario that does not take polledness into account, intensive selection for polled substantially reduced genetic gain for this quantitative trait after 25 generations. Reducing selection intensity for polled males while maintaining strong selection intensity among females, simultaneously decreased losses in genetic gain and achieved a final allele frequency of 0.93 for polled. A fast transition to a completely polled population through intensified selection for polled was in contradiction to the preservation of high genetic gain for the quantitative trait. Selection on male polled genotypes with moderate weighting, and selection on female polled phenotypes with high weighting, could be a suitable compromise regarding all important breeding aspects.
Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-01-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.
Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman
2013-10-21
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.
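The OLS Patlak step described above amounts to a per-voxel linear regression of C_T(t)/C_p(t) on the normalized integral of the plasma input, with slope Ki and intercept V. A minimal sketch under that standard formulation (synthetic data, not the authors' implementation or hybrid regression):

```python
import numpy as np

def patlak_ols(t, c_plasma, c_tissue):
    """Ordinary least-squares Patlak fit for a single voxel/ROI.

    Returns (Ki, V): slope = net uptake rate, intercept = total blood
    distribution volume. Assumes times lie in the linear (late) Patlak regime.
    """
    # Cumulative integral of the plasma input function (trapezoidal rule)
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    x = int_cp / c_plasma          # Patlak "stretched time"
    y = c_tissue / c_plasma        # normalized tissue activity
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Synthetic example with true Ki = 0.05 /min and V = 0.6
t = np.linspace(10, 60, 12)                  # minutes (late frames)
cp = 100.0 * np.exp(-0.05 * t) + 5.0         # toy plasma input curve
int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
ct = 0.05 * int_cp + 0.6 * cp
print(patlak_ols(t, cp, ct))                 # ~ (0.05, 0.6)
```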
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillon, Michael B.; Kane, Staci R.
A nuclear explosion has the potential to injure or kill tens to hundreds of thousands (or more) of people through exposure to fallout (external gamma) radiation. Existing buildings can protect their occupants (reducing fallout radiation exposures) by placing material and distance between fallout particles and individuals indoors. Prior efforts have determined an initial set of building attributes suitable to reasonably assess a given building's protection against fallout radiation. The current work provides methods to determine the quantitative values for these attributes from (a) common architectural features and data and (b) buildings described using the Global Earthquake Model (GEM) taxonomy. These methods will be used to improve estimates of fallout protection for operational US Department of Defense (DoD) and US Department of Energy (DOE) consequence assessment models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maltrud, Mathew E.; Peacock, Synte L.; Visbeck, Martin
2010-08-01
We have conducted an ensemble of 20 simulations using a high-resolution global ocean model in which dye was continuously injected at the site of the Deepwater Horizon drilling rig for two months. We then extended these simulations for another four months to track the dispersal of the dye in the model. We have also performed five simulations in which dye was continuously injected at the site of the spill for four months and then run out to one year from the initial spill date. The experiments can elucidate the time and space scales of dispersal of polluted waters and also give a quantitative estimate of dilution rate, ignoring any sink terms such as chemical or biological degradation.
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but that remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
On sweat analysis for quantitative estimation of dehydration during physical exercise.
Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M
2015-08-01
Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
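A sketch of the kind of correlation and error analysis described above, using hypothetical paired data and a simple linear predictor (the study's actual estimator is not specified here, so this is purely illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired observations: sweat chloride (mmol/l) and TBW loss (l)
chloride = np.array([18.0, 22.5, 25.0, 30.0, 27.5, 35.0, 40.0, 33.0])
tbw_loss = np.array([0.6, 0.8, 1.0, 1.3, 1.1, 1.5, 1.9, 1.4])

r, p = pearsonr(chloride, tbw_loss)
print(f"r = {r:.2f}, p = {p:.3f}")

# Simple linear predictor and its mean absolute error (in-sample, for illustration only)
coeffs = np.polyfit(chloride, tbw_loss, 1)
predicted = np.polyval(coeffs, chloride)
mae = np.mean(np.abs(predicted - tbw_loss))
print(f"Mean absolute error: {mae:.2f} l")
```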
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and the stability analysis of the estimator is given. Theoretical analysis and simulation experimental results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and the marzipan are used to build the calibration model using partial least squares (PLS) modeling. The results show that the PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse
2018-05-01
Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator in quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
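The Capon (minimum-variance) spectral estimate underlying this approach is P(f) = 1 / (a(f)^H R^{-1} a(f)), with R the slow-time covariance and a(f) a steering vector. A minimal NumPy sketch for one range gate; the subarray length, diagonal loading, and packet size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def capon_spectrum(x, n_sub, freqs):
    """Capon (minimum-variance) power spectrum of a slow-time Doppler signal.

    x     : complex slow-time samples from one range gate (1-D array)
    n_sub : subarray (covariance) length, n_sub < len(x)
    freqs : normalized Doppler frequencies in [-0.5, 0.5)
    """
    # Sample covariance from overlapping subarrays (forward smoothing)
    snaps = np.array([x[i:i + n_sub] for i in range(len(x) - n_sub + 1)])
    r = snaps.conj().T @ snaps / snaps.shape[0]
    r += 1e-3 * np.trace(r).real / n_sub * np.eye(n_sub)   # diagonal loading
    r_inv = np.linalg.inv(r)

    p = np.empty(len(freqs))
    n = np.arange(n_sub)
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * n)                      # steering vector
        p[k] = 1.0 / np.real(a.conj() @ r_inv @ a)
    return p

# Toy packet: 16 slow-time samples of a single Doppler frequency plus noise
rng = np.random.default_rng(0)
x = np.exp(2j * np.pi * 0.2 * np.arange(16)) + 0.1 * rng.standard_normal(16)
freqs = np.linspace(-0.5, 0.5, 128, endpoint=False)
print(freqs[np.argmax(capon_spectrum(x, n_sub=8, freqs=freqs))])  # ~0.2
```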
Smile line assessment comparing quantitative measurement and visual estimation.
Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie
2011-02-01
Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
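Interexaminer reliability of the kind reported above can be computed with Cohen's kappa; a small sketch with hypothetical ratings on a 3-grade scale (the study's exact weighting scheme is not assumed here):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical smile-line grades from two raters (1 = low, 2 = average, 3 = high)
rater_a = [2, 3, 1, 2, 2, 3, 1, 2, 3, 2, 1, 2]
rater_b = [2, 3, 1, 2, 3, 3, 1, 2, 3, 2, 2, 2]

# Unweighted kappa for interexaminer reliability; pass weights="quadratic"
# if disagreements between adjacent grades should be penalized less.
print(cohen_kappa_score(rater_a, rater_b))
```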
Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Aarnink, Rene G.; de la Rosette, Jean J.; Chalana, Vikram; Wijkstra, Hessel; Haynor, David R.; Debruyne, Frans M. J.; Kim, Yongmin
1998-06-01
With the number of men seeking medical care for prostate diseases rising steadily, the need for a fast and accurate prostate boundary detection and volume estimation tool is increasingly felt by clinicians. Currently, these measurements are made manually, which results in long examination times. A possible solution is to improve efficiency by automating the boundary detection and volume estimation process with minimal involvement from human experts. In this paper, we present an algorithm based on SNAKES to detect the boundaries. Our approach is to selectively enhance the contrast along the edges using an algorithm called sticks and integrate it with a SNAKES model. This integrated algorithm requires an initial curve for each ultrasound image to initiate the boundary detection process. We have used different schemes to generate the curves with varying degrees of automation and evaluated their effects on the algorithm's performance. After the boundaries are identified, the prostate volume is calculated using planimetric volumetry. We have tested our algorithm on 6 different prostate volumes and compared the performance against the volumes manually measured by 3 experts. As user input increased, the algorithm performance improved as expected. The results demonstrate that, given an initial contour reasonably close to the prostate boundaries, the algorithm successfully delineates the prostate boundaries in an image, and the resulting volume measurements are in close agreement with those made by the human experts.
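Planimetric volumetry sums the delineated cross-sectional areas multiplied by the slice spacing; a minimal sketch with hypothetical slice areas (not data from the study):

```python
import numpy as np

def planimetric_volume(areas_mm2, slice_spacing_mm):
    """Prostate volume from per-slice boundary areas (planimetric volumetry).

    Each delineated cross-sectional area is multiplied by the slice spacing
    and the contributions are summed.
    """
    return float(np.sum(np.asarray(areas_mm2) * slice_spacing_mm))

# Hypothetical areas (mm^2) of the detected boundary on consecutive TRUS slices
areas = [120.0, 350.0, 610.0, 780.0, 820.0, 760.0, 590.0, 330.0, 110.0]
volume_mm3 = planimetric_volume(areas, slice_spacing_mm=5.0)
print(f"Estimated volume: {volume_mm3 / 1000.0:.1f} ml")
```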
Bradbury, Steven P; Russom, Christine L; Ankley, Gerald T; Schultz, T Wayne; Walker, John D
2003-08-01
The use of quantitative structure-activity relationships (QSARs) in assessing potential toxic effects of organic chemicals on aquatic organisms continues to evolve as computational efficiency and toxicological understanding advance. With the ever-increasing production of new chemicals, and the need to optimize resources to assess thousands of existing chemicals in commerce, regulatory agencies have turned to QSARs as essential tools to help prioritize tiered risk assessments when empirical data are not available to evaluate toxicological effects. Progress in designing scientifically credible QSARs is intimately associated with the development of empirically derived databases of well-defined and quantified toxicity endpoints, which are based on a strategic evaluation of diverse sets of chemical structures, modes of toxic action, and species. This review provides a brief overview of four databases created for the purpose of developing QSARs for estimating toxicity of chemicals to aquatic organisms. The evolution of QSARs based initially on general chemical classification schemes, to models founded on modes of toxic action that range from nonspecific partitioning into hydrophobic cellular membranes to receptor-mediated mechanisms is summarized. Finally, an overview of expert systems that integrate chemical-specific mode of action classification and associated QSAR selection for estimating potential toxicological effects of organic chemicals is presented.
Kimko, Holly; Berry, Seth; O'Kelly, Michael; Mehrotra, Nitin; Hutmacher, Matthew; Sethuraman, Venkat
2017-01-01
The application of modeling and simulation (M&S) methods to improve decision-making was discussed during the Trends & Innovations in Clinical Trial Statistics Conference held in Durham, North Carolina, USA on May 1-4, 2016. Uses of both pharmacometric and statistical M&S were presented during the conference, highlighting the diversity of the methods employed by pharmacometricians and statisticians to address a broad range of quantitative issues in drug development. Five presentations are summarized herein, which cover the development strategy of employing M&S to drive decision-making; European initiatives on best practice in M&S; case studies of pharmacokinetic/pharmacodynamics modeling in regulatory decisions; estimation of exposure-response relationships in the presence of confounding; and the utility of estimating the probability of a correct decision for dose selection when prior information is limited. While M&S has been widely used during the last few decades, it is expected to play an essential role as more quantitative assessments are employed in the decision-making process. By integrating M&S as a tool to compile the totality of evidence collected throughout the drug development program, more informed decisions will be made.
Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B
2006-08-01
Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
Molecular Quantification of Zooplankton Gut Content: The Case For qPCR
NASA Astrophysics Data System (ADS)
Frischer, M. E.; Walters, T. L.; Gibson, D. M.; Nejstgaard, J. C.; Troedsson, C.
2016-02-01
The ability to obtain in situ information about zooplankton feeding selectivity and rates is vital for understanding the mechanisms structuring marine ecosystems. However, directly estimating zooplankton feeding selection and rates without the biases associated with culturing conditions has been notoriously difficult. A potential approach for addressing this problem is to target prey-specific DNA as a marker for prey ingestion and selection. In this study we report the development of a differential length amplification quantitative PCR (dla-qPCR) assay targeting the 18S rRNA gene to validate the use of a DNA-based approach to quantify consumption of specific plankton prey by the pelagic tunicate (doliolid) Dolioletta gegenbauri. Compared to copepods and other marine animals, the digestion of prey genomic DNA inside the gut of doliolids is low. This minimizes potential underestimation and therefore allows prey DNA to be used as an effective indicator of prey consumption. We also present an initial application of a qPCR assay to estimate consumption of specific prey species on the southeastern continental shelf of the U.S., where doliolids bloom stochastically in response to upwelling events. Estimated feeding rates based on qPCR were in the same range as those estimated from clearance rates in laboratory feeding studies. In the field, consumption of specific prey, including the centric diatom Thalassiosira spp., was detected in the gut of wild-caught D. gegenbauri at levels consistent with their abundance in the water column at the time of collection. Thus, both experimental and field investigations support the hypothesis that a qPCR approach will be useful for quantitative investigation of the in situ diet of D. gegenbauri without the biases introduced by cultivation.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
... this location in 2008. No quantitative estimate of the size of this remaining population is available... observed in 1998. No quantitative estimates of the size of the extant populations are available. Howarth...
Zhao, Tong; Liu, Kai; Takei, Masahiro
2016-01-01
The inertial migration of neutrally buoyant spherical particles in high particle concentration (αpi > 3%) suspension flow in a square microchannel was investigated by means of the multi-electrodes sensing method which broke through the limitation of conventional optical measurement techniques in the high particle concentration suspensions due to interference from the large particle numbers. Based on the measured particle concentrations near the wall and at the corner of the square microchannel, particle cross-sectional migration ratios are calculated to quantitatively estimate the migration degree. As a result, particle migration to four stable equilibrium positions near the centre of each face of the square microchannel is found only in the cases of low initial particle concentration up to 5.0 v/v%, while the migration phenomenon becomes partial as the initial particle concentration achieves 10.0 v/v% and disappears in the cases of the initial particle concentration αpi ≥ 15%. In order to clarify the influential mechanism of particle-particle interaction on particle migration, an Eulerian-Lagrangian numerical model was proposed by employing the Lennard-Jones potential as the inter-particle potential, while the inertial lift coefficient is calculated by a pre-processed semi-analytical simulation. Moreover, based on the experimental and simulation results, a dimensionless number named migration index was proposed to evaluate the influence of the initial particle concentration on the particle migration phenomenon. The migration index less than 0.1 is found to denote obvious particle inertial migration, while a larger migration index denotes the absence of it. This index is helpful for estimation of the maximum initial particle concentration for the design of inertial microfluidic devices. PMID:27158288
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
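A sketch of approach (ii) above: estimating bias and an expanded uncertainty from the differences between a laboratory's proficiency results and the participant means. The numbers and the k = 2 coverage factor are illustrative assumptions, not the article's worked example:

```python
import numpy as np

# Hypothetical proficiency-test results for blood alcohol (g/100 ml):
# this laboratory's reported value and the participant mean for each test
lab_results      = np.array([0.081, 0.102, 0.148, 0.079, 0.121, 0.097, 0.153, 0.110])
participant_mean = np.array([0.080, 0.100, 0.150, 0.080, 0.120, 0.100, 0.150, 0.110])

differences = lab_results - participant_mean
bias_estimate = differences.mean()
sd_of_differences = differences.std(ddof=1)

# Expanded uncertainty with an assumed coverage factor k = 2 (~95 %)
expanded_uncertainty = 2.0 * sd_of_differences
print(f"bias ~ {bias_estimate:+.4f}, expanded uncertainty ~ {expanded_uncertainty:.4f} g/100 ml")
```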
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage on the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
Korennoy, F I; Gulenkin, V M; Gogin, A E; Vergne, T; Karaulov, A K
2017-12-01
In 1977, Ukraine experienced a local epidemic of African swine fever (ASF) in the Odessa region. A total of 20 settlements were affected during the course of the epidemic, including both large farms and backyard households. Thanks to timely interventions, virus circulation was successfully eradicated within 6 months, with no additional outbreaks. A detailed report of the outbreak investigation has been publicly available since 2014. The report contains quantitative data that allow the ASF spread dynamics during the epidemic to be studied. In our study, we used this historical epidemic to estimate the basic reproductive number of the ASF virus both within and between farms. The basic reproductive number (R0) represents the average number of secondary infections caused by one infectious unit during its infectious period in a susceptible population. Calculations were made under the assumption of exponential initial growth, by fitting an approximating curve to the initial segments of the epidemic curves. R0 within farms and between farms was estimated at 7.46 (95% confidence interval: 5.68-9.21) and 1.65 (1.42-1.88), respectively. The corresponding daily transmission rates were estimated at 1.07 (0.81-1.32) and 0.09 (0.07-0.10). These estimates, based on historical data, are consistent with those obtained from data generated by the recent epidemic currently affecting eastern Europe. Such results contribute to the published knowledge on ASF transmission dynamics under natural conditions and could be used to model and predict the spread of ASF in affected and non-affected regions and to evaluate the effectiveness of different control measures. © 2016 Blackwell Verlag GmbH.
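The fitting idea can be sketched as a log-linear fit to the initial segment of an epidemic curve. Converting the growth rate r to R0 requires an assumed infectious period and a model-dependent relation; the SIR-type approximation R0 ≈ 1 + rT used below is purely for illustration and is not necessarily the authors' formula, and all data are hypothetical:

```python
import numpy as np

# Hypothetical initial segment of an epidemic curve: day vs. cumulative affected units
days = np.array([0.0, 7.0, 14.0, 21.0, 28.0])
cases = np.array([1.0, 3.0, 7.0, 18.0, 45.0])

# Exponential growth rate r from a log-linear fit to the initial segment
r, log_c0 = np.polyfit(days, np.log(cases), 1)

# Assumed infectious period (days) and SIR-type approximation R0 ~ 1 + r*T
t_infectious = 7.0
r0 = 1.0 + r * t_infectious
beta = r0 / t_infectious          # corresponding daily transmission rate
print(f"r = {r:.3f} per day, R0 ~ {r0:.2f}, beta ~ {beta:.2f} per day")
```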
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
Quantitative Ultrasound: Transition from the Laboratory to the Clinic
NASA Astrophysics Data System (ADS)
Hall, Timothy
2014-03-01
There is a long history of development and testing of quantitative methods in medical ultrasound. From the initial attempts to scan breasts with ultrasound in the early 1950's, there was a simultaneous attempt to classify tissue as benign or malignant based on the appearance of the echo signal on an oscilloscope. Since that time, there has been substantial improvement in the ultrasound systems used, the models to describe wave propagation in random media, the methods of signal detection theory, and the combination of those models and methods into parameter estimation techniques. One particularly useful measure in ultrasonics is the acoustic differential scattering cross section per unit volume in the special case of the 180° (as occurs in pulse-echo ultrasound imaging) which is known as the backscatter coefficient. The backscatter coefficient, and parameters derived from it, can be used to objectively measure quantities that are used clinically to subjectively describe ultrasound images. For example, the ``echogenicity'' (relative ultrasound image brightness) of the renal cortex is commonly compared to that of the liver. Investigating the possibility of liver disease, it is assumed the renal cortex echogenicity is normal. Investigating the kidney, it is assumed the liver echogenicity is normal. Objective measures of backscatter remove these assumptions. There is a 30-year history of accurate estimates of acoustic backscatter coefficients with laboratory systems. Twenty years ago that ability was extended to clinical imaging systems with array transducers. Recent studies involving multiple laboratories and a variety of clinical imaging systems has demonstrated system-independent estimates of acoustic backscatter coefficients in well-characterized media (agreement within about 1.5dB over about a 1-decade frequency range). Advancements that made this possible, transition of this and similar capabilities into medical practice and the prospects for quantitative image-based biomarkers will be discussed. This work was supported, in part, by NIH grants R01CA140271 and R01HD072077.
Gilbert, Fabian; Böhm, Dirk; Eden, Lars; Schmalzl, Jonas; Meffert, Rainer H; Köstler, Herbert; Weng, Andreas M; Ziegler, Dirk
2016-08-22
The Goutallier Classification is a semi-quantitative classification system for determining the amount of fatty degeneration in rotator cuff muscles. Although initially proposed for axial computed tomography scans, it is currently applied to magnetic resonance imaging (MRI) scans. Its role in clinical use is controversial, as the reliability of the classification has been shown to be inconsistent. The purpose of this study was to compare the semi-quantitative MRI-based Goutallier Classification applied by 5 different raters to experimental MR spectroscopic quantitative fat measurement, in order to determine the correlation between this classification system and the true extent of fatty degeneration shown by spectroscopy. MRI scans of 42 patients with rotator cuff tears were examined by 5 shoulder surgeons and graded according to the MRI-based Goutallier Classification proposed by Fuchs et al. Additionally, the fat/water ratio was measured with MR spectroscopy using the experimental SPLASH technique. The semi-quantitative grading according to the Goutallier Classification was statistically correlated with the quantitatively measured fat/water ratio using Spearman's rank correlation. Statistical analysis of the data revealed only fair correlation between the Goutallier Classification system and the quantitative fat/water ratio, with R = 0.35 (p < 0.05). By dichotomizing the scale, the correlation was 0.72. The interobserver and intraobserver reliabilities were substantial, with R = 0.62 and R = 0.74 (p < 0.01). The correlation between the semi-quantitative MRI-based Goutallier Classification system and MR spectroscopic fat measurement is weak. As an adequate estimation of fatty degeneration based on standard MRI may not be possible, quantitative methods need to be considered in order to increase diagnostic safety and thus provide patients with ideal care in regard to the amount of fatty degeneration. Spectroscopic MR measurement may increase the accuracy of the Goutallier Classification and thus improve the prediction of clinical results after rotator cuff repair. However, these techniques are currently only available in an experimental setting.
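The reported agreement analysis is a Spearman rank correlation between ordinal grades and the spectroscopic fat/water ratio; a small sketch with hypothetical values (the dichotomization cutoffs below are assumptions for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: Goutallier grade (0-4) and spectroscopic fat/water ratio per muscle
goutallier = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 4, 1, 2])
fat_ratio  = np.array([0.05, 0.08, 0.15, 0.12, 0.25, 0.18, 0.35, 0.22, 0.55, 0.40, 0.10, 0.30])

rho, p = spearmanr(goutallier, fat_ratio)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Dichotomized comparison (grade <= 2 vs. > 2; fat ratio below/above an assumed cutoff)
rho_dich, _ = spearmanr(goutallier > 2, fat_ratio > 0.20)
print(f"dichotomized rho = {rho_dich:.2f}")
```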
A new biodegradation prediction model specific to petroleum hydrocarbons.
Howard, Philip; Meylan, William; Aronson, Dallas; Stiteler, William; Tunkel, Jay; Comber, Michael; Parkerton, Thomas F
2005-08-01
A new predictive model for determining quantitative primary biodegradation half-lives of individual petroleum hydrocarbons has been developed. This model uses a fragment-based approach similar to that of several other biodegradation models, such as those within the Biodegradation Probability Program (BIOWIN) estimation program. In the present study, a half-life in days is estimated using multiple linear regression against counts of 31 distinct molecular fragments. The model was developed using a data set consisting of 175 compounds with environmentally relevant experimental data that was divided into training and validation sets. The original fragments from the Ministry of International Trade and Industry BIOWIN model were used initially as structural descriptors and additional fragments were then added to better describe the ring systems found in petroleum hydrocarbons and to adjust for nonlinearity within the experimental data. The training and validation sets had r2 values of 0.91 and 0.81, respectively.
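A fragment-count regression of this kind can be sketched as follows; the fragment set, the use of a log10 transform of the half-life, and all numbers are illustrative assumptions rather than the published model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: rows are compounds, columns are counts of
# structural fragments (e.g. -CH3, aromatic ring, fused-ring correction, ...)
fragment_counts = np.array([
    [4, 0, 0],
    [2, 1, 0],
    [0, 2, 1],
    [1, 1, 0],
    [0, 3, 2],
    [3, 0, 1],
])
half_life_days = np.array([3.0, 9.0, 60.0, 12.0, 180.0, 25.0])

# Regress log10(half-life) on fragment counts (log transform assumed here for illustration)
model = LinearRegression().fit(fragment_counts, np.log10(half_life_days))
print("fragment coefficients:", model.coef_, "intercept:", model.intercept_)

# Predicted half-life (days) for a new compound with fragment counts [1, 2, 1]
print(10 ** model.predict(np.array([[1, 2, 1]]))[0])
```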
Lai, Yanqing; Saridakis, George; Blackburn, Robert
2015-08-01
This paper examines the relationships between firm size and employees' experience of work stress. We used a matched employer-employee dataset (Workplace Employment Relations Survey 2011) that comprises 7182 employees from 1210 private organizations in the United Kingdom. Initially, we find that employees in small and medium-sized enterprises experience a lower level of overall job stress than those in large enterprises, although the effect disappears when we control for individual and organizational characteristics in the model. We also find that quantitative work overload, job insecurity and poor promotion opportunities, work relationships and poor communication are strongly associated with job stress in small and medium-sized enterprises, whereas qualitative work overload, poor job autonomy and employee engagement are more closely related to job stress in larger enterprises. Hence, our estimates show that the association and magnitude of estimated effects differ significantly by enterprise size. Copyright © 2013 John Wiley & Sons, Ltd.
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
Permittivity and conductivity parameter estimations using full waveform inversion
NASA Astrophysics Data System (ADS)
Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.
2018-04-01
Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy for estimating quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses time-domain Full Waveform Inversion (FWI) of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on computing the full electromagnetic wave propagation. Also, commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
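Reading "dividing the change in treatment effect estimate between models by the degree of confounding amplification" as dividing by (amplification factor − 1), a minimal sketch of the calculation is shown below. The interpretation, the adjustment term, and all numbers are assumptions for illustration, not the author's validated procedure:

```python
def residual_confounding(effect_base, effect_amplified, amplification_factor,
                         outcome_association_adjustment=0.0):
    """Sketch of the confounding-amplification idea described above.

    effect_base           : treatment effect estimate from the base propensity model
    effect_amplified      : estimate after adding variable(s) that strongly predict exposure
    amplification_factor  : assumed factor by which residual confounding is amplified (> 1)
    outcome_association_adjustment : correction for any direct association of the
                            added variable(s) with the outcome (assumed known here)
    """
    change = effect_amplified - effect_base - outcome_association_adjustment
    return change / (amplification_factor - 1.0)

# Hypothetical numbers: effect estimate moves from 1.30 to 1.36 when the exposure-predicting
# variable is added, and confounding is assumed to be amplified by a factor of 1.25
print(residual_confounding(1.30, 1.36, amplification_factor=1.25))
```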
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
Burstyn, Igor; Boffetta, Paolo; Kauppinen, Timo; Heikkilä, Pirjo; Svane, Ole; Partanen, Timo; Stücker, Isabelle; Frentzel-Beyme, Rainer; Ahrens, Wolfgang; Merzenich, Hiltrud; Heederik, Dick; Hooiveld, Mariëtte; Langård, Sverre; Randem, Britt G; Järvholm, Bengt; Bergdahl, Ingvar; Shaham, Judith; Ribak, Joseph; Kromhout, Hans
2003-01-01
An exposure matrix (EM) for known and suspected carcinogens was required for a multicenter international cohort study of cancer risk and bitumen among asphalt workers. Production characteristics in companies enrolled in the study were ascertained through use of a company questionnaire (CQ). Exposures to coal tar, bitumen fume, organic vapor, polycyclic aromatic hydrocarbons, diesel fume, silica, and asbestos were assessed semi-quantitatively using information from CQs, expert judgment, and statistical models. Exposures of road paving workers to bitumen fume, organic vapor, and benzo(a)pyrene were estimated quantitatively by applying regression models, based on monitoring data, to exposure scenarios identified by the CQs. Exposure estimates were derived for 217 companies enrolled in the cohort, plus the Swedish asphalt paving industry in general. Most companies were engaged in road paving and asphalt mixing, but some also participated in general construction and roofing. Coal tar use was most common in Denmark and The Netherlands, but the practice is now obsolete. Quantitative estimates of exposure to bitumen fume, organic vapor, and benzo(a)pyrene for pavers, and semi-quantitative estimates of exposure to these agents among all subjects, were strongly correlated. Semi-quantitative estimates of exposure to bitumen fume and coal tar were only moderately correlated. The EM captured a non-monotonic historical decrease in exposures for all agents assessed except silica and diesel exhaust. We produced a data-driven EM using methodology that can be adapted for other multicenter studies. Copyright 2003 Wiley-Liss, Inc.
Cancer risks after radiation exposure in middle age.
Shuryak, Igor; Sachs, Rainer K; Brenner, David J
2010-11-03
Epidemiological data show that radiation exposure during childhood is associated with larger cancer risks compared with exposure at older ages. For exposures in adulthood, however, the relative risks of radiation-induced cancer in Japanese atomic bomb survivors generally do not decrease monotonically with increasing age of adult exposure. These observations are inconsistent with most standard models of radiation-induced cancer, which predict that relative risks decrease monotonically with increasing age at exposure, at all ages. We analyzed observed cancer risk patterns as a function of age at exposure in Japanese atomic bomb survivors by using a biologically based quantitative model of radiation carcinogenesis that incorporates both radiation induction of premalignant cells (initiation) and radiation-induced promotion of premalignant damage. This approach emphasizes the kinetics of radiation-induced initiation and promotion, and tracks the yields of premalignant cells before, during, shortly after, and long after radiation exposure. Radiation risks after exposure in younger individuals are dominated by initiation processes, whereas radiation risks after exposure at later ages are more influenced by promotion of preexisting premalignant cells. Thus, the cancer site-dependent balance between initiation and promotion determines the dependence of cancer risk on age at radiation exposure. For example, in terms of radiation induction of premalignant cells, a quantitative measure of the relative contribution of initiation vs promotion is 10-fold larger for breast cancer than for lung cancer. Reflecting this difference, radiation-induced breast cancer risks decrease with age at exposure at all ages, whereas radiation-induced lung cancer risks do not. For radiation exposure in middle age, most radiation-induced cancer risks do not, as often assumed, decrease with increasing age at exposure. This observation suggests that promotional processes in radiation carcinogenesis become increasingly important as the age at exposure increases. Radiation-induced cancer risks after exposure in middle age may be up to twice as high as previously estimated, which could have implications for occupational exposure and radiological imaging.
Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions
Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...
A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY
We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of such estimations, and review the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
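To make the thermodynamic step concrete, here is a minimal sketch of evaluating a transformed reaction Gibbs energy under physiological conditions from measured metabolite concentrations, using dG = dG0' + RT ln Q; the concentrations, the temperature, the 1 M standard state, and the neglect of activity coefficients are illustrative assumptions.

```python
import math

R = 8.314462618e-3   # gas constant, kJ mol^-1 K^-1
T = 310.15           # assumed physiological temperature, K

def reaction_gibbs_energy(delta_g0_prime, products_mM, reactants_mM):
    """Transformed reaction Gibbs energy dG = dG0' + R*T*ln(Q).

    Concentrations are supplied in mM and referred to a 1 M standard state;
    activity coefficients are ignored (a simplifying assumption).
    """
    q = math.prod(c * 1e-3 for c in products_mM) / math.prod(c * 1e-3 for c in reactants_mM)
    return delta_g0_prime + R * T * math.log(q)

# Hypothetical example: a reaction with dG0' = +5 kJ/mol still proceeds forward
# in vivo if the product concentration is kept low relative to the substrate.
print(reaction_gibbs_energy(5.0, products_mM=[0.05], reactants_mM=[2.0]))  # about -4.5 kJ/mol
```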
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2011-06-01
A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$ 44 000, US$ 136 000 and US$ 268 000, respectively. The maximum risk to population varies from 2.1 × 10⁻¹ yr⁻¹ for one or more lives lost to 6.0 × 10⁻² yr⁻¹ for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition, have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.
1959-10-31
The paper reports work done on the use of ¹³¹I, ⁶⁵Zn, ⁹⁰Sr, ⁹⁵Zr, and ¹⁴⁴Ce for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of components of resolved fatty acid mixtures. ¹³¹I is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between amount of fatty acid in spot and such parameters as radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids the method permits the estimation of composition of fatty acid mixtures obtained from biological material. (auth)
NASA Astrophysics Data System (ADS)
Nikonova, L. G.; Golovatskaya, E. A.; Terechshenko, N. N.
2018-03-01
The research presents quantitative estimates of the decomposition rate of plant residues at the initial stages of the decay of two plant species (Eriophorum vaginatum and Sphagnum fuscum) in a peat deposit of the oligotrophic bog in the southern taiga subzone of Western Siberia. We also studied the change in the content of total carbon and nitrogen in plant residues and the activity of microflora in the initial stages of decomposition. At the initial stage of the transformation of peat-forming plants, the mass loss of Sph. fuscum is 2.5 times lower than that of E. vaginatum. The most active mass losses, as well as a decrease in the total carbon content, are observed after four months of the experiment. The most active carbon removal is characteristic of E. vaginatum. During the decomposition of plant residues, the nitrogen content decreases, and the most intense nitrogen losses were characteristic of Sph. fuscum. The microorganisms assimilating organic and mineral nitrogen are more active in August, whereas the oligotrophic and cellulolytic microorganisms are more active in July.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for a widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. Hence the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as skin. The new estimator shows superior performance and also shows clearer image contrast.
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Result In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and plant organ tracking over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn
2016-03-01
One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The Enantiomeric Ratio (E) of enzymes acting as specific catalysts in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Methods I and II, for estimating E and the kinetic parameters Km and Vm of enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers is available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of estimates. The theoretical predictions were experimentally confirmed by using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
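A minimal sketch of this kind of estimation is given below: initial rates of enantiomeric mixtures at several total concentrations C and mole fractions x are fitted to the standard rate law for two substrates competing for a single active site, and E is then formed from the ratio of specificity constants. The rate law, parameter values, and data are illustrative assumptions, not the authors' Methods I or II.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixture_rate(X, VmA, KmA, VmB, KmB):
    """Initial rate for two enantiomers A and B competing for one active site.

    X = (C, x): total substrate concentration C and mole fraction x of A.
    """
    C, x = X
    a, b = x * C, (1.0 - x) * C
    return (VmA * a / KmA + VmB * b / KmB) / (1.0 + a / KmA + b / KmB)

# Hypothetical data set: several total concentrations at three mole fractions.
C = np.tile([0.5, 1.0, 2.0, 5.0, 10.0], 3)
x = np.repeat([0.25, 0.50, 0.75], 5)
true_params = (10.0, 1.0, 2.0, 4.0)                    # VmA, KmA, VmB, KmB (E = 20)
rng = np.random.default_rng(1)
v = mixture_rate((C, x), *true_params) * (1 + 0.02 * rng.normal(size=C.size))

# Initial guess deliberately asymmetric so the fit settles on the A/B labeling of the data.
popt, _ = curve_fit(mixture_rate, (C, x), v, p0=(8.0, 2.0, 3.0, 3.0))
VmA, KmA, VmB, KmB = popt
E = (VmA / KmA) / (VmB / KmB)                          # ratio of specificity constants
print(f"estimated E = {E:.1f}")                        # close to 20 for this synthetic system
```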
Xu, Y H; Dragan, Y P; Campbell, H A; Pitot, H C
1998-04-01
The most common organ site of neoplasms induced by carcinogenic chemicals in the rodent bioassay is the liver. The development of cancer in rodent liver is a multistage process involving sequentially the stages of initiation, promotion, and progression. During the stages of promotion and progression, numerous lesions termed altered hepatic foci (AHF) develop. STEREO was developed for the purpose of efficient and accurate quantitation of AHF and related lesions in experimental and test rodents. The system utilized is equipped with a microcomputer (IBM-compatible PC running Windows 95) and a Summagraphics MICROGRID or SummaSketch tablet digitizer. The program records information from digitization of single or serial sections obtained randomly from rat liver tissue. With this information and the methods of quantitative stereology, both the number and volume percentage fraction of AHF in liver are calculated in three dimensions. The recorded data files can be printed graphically or in the format of tabular numerical data. The results of stereologic calculations are stored on floppy disks and can be sorted into different categories and analyzed or displayed with the use of statistics and graphic functions built into the overall program. Results may also be exported into Microsoft Excel for use at a later time. Any IBM-compatible PC capable of utilizing Windows 95 and MS Office can be used with STEREO, which offers inexpensive, easily operated software to obtain three-dimensional information from sections of two dimensions for the identification and relative potency of initiators, promoters, and progressors, and for the establishment of information potentially useful in developing estimations of risk for human cancer.
NASA Astrophysics Data System (ADS)
Witzany, V.; Jefremov, P.
2018-06-01
Context. When a black hole is accreting well below the Eddington rate, a geometrically thick, radiatively inefficient state of the accretion disk is established. There is a limited number of closed-form physical solutions for geometrically thick (nonselfgravitating) toroidal equilibria of perfect fluids orbiting a spinning black hole, and these are predominantly used as initial conditions for simulations of accretion in the aforementioned mode. However, different initial configurations might lead to different results and thus observational predictions drawn from such simulations. Aims: We aim to expand the known equilibria by a number of closed multiparametric solutions with various possibilities of rotation curves and geometric shapes. Then, we ask whether choosing these as initial conditions influences the onset of accretion and the asymptotic state of the disk. Methods: We have investigated a set of examples from the derived solutions in detail; we analytically estimate the growth of the magneto-rotational instability (MRI) from their rotation curves and evolve the analytically obtained tori using the 2D magneto-hydrodynamical code HARM. Properties of the evolutions are then studied through the mass, energy, and angular-momentum accretion rates. Results: The rotation curve has a decisive role in the numerical onset of accretion in accordance with our analytical MRI estimates: in the first few orbital periods, the average accretion rate is linearly proportional to the initial MRI rate in the toroids. The final state obtained from any initial condition within the studied class after an evolution of ten or more orbital periods is mostly qualitatively identical and the quantitative properties vary within a single order of magnitude. The average values of the energy of the accreted fluid have an irregular dependency on initial data, and in some cases fluid with energies many times its rest mass is systematically accreted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, Benjamin S.; Zalavadia, Mital A.; Miller, Brian W.
Environmental sampling and sample analyses by the International Atomic Energy Agency’s (IAEA) Network of Analytical Laboratories (NWAL) are a critical technical tool used to detect facility misuse under a Comprehensive Safeguards Agreement and to verify the absence of undeclared nuclear material activities under an Additional Protocol. Currently all environmental swipe samples (ESS) are screened using gamma spectrometry and x-ray fluorescence to estimate the amount of U and/or Pu in the ESS, to guide further analysis, and to assist in the shipment of ESS to the NWAL. Quantitative Digital Autoradiography for Environmental Samples (QDARES) is being developed to complement existing techniques through the use of a portable, real-time, high-spatial-resolution camera called the Ionizing-radiation Quantum Imaging Detector (iQID). The iQID constructs a spatial map of radionuclides within a sample or surface in real-time as charged particles (betas) and photons (gamma/x-rays) are detected and localized on an event-by-event basis. Knowledge of the location and nature of radioactive hot spots on the ESS could provide information for subsequent laboratory analysis. As a nondestructive technique, QDARES does not compromise the ESS chain of custody or subsequent laboratory analysis. In this paper we will present the system design and construction, characterization measurements with calibration sources, and initial measurements of ESS.
Baker, Robert L; Leong, Wen Fung; An, Nan; Brock, Marcus T; Rubin, Matthew J; Welch, Stephen; Weinig, Cynthia
2018-02-01
We develop Bayesian function-valued trait models that mathematically isolate genetic mechanisms underlying leaf growth trajectories by factoring out genotype-specific differences in photosynthesis. Remote sensing data can be used instead of leaf-level physiological measurements. Characterizing the genetic basis of traits that vary during ontogeny and affect plant performance is a major goal in evolutionary biology and agronomy. Describing genetic programs that specifically regulate morphological traits can be complicated by genotypic differences in physiological traits. We describe the growth trajectories of leaves using novel Bayesian function-valued trait (FVT) modeling approaches in Brassica rapa recombinant inbred lines raised in heterogeneous field settings. While frequentist approaches estimate parameter values by treating each experimental replicate discretely, Bayesian models can utilize information in the global dataset, potentially leading to more robust trait estimation. We illustrate this principle by estimating growth asymptotes in the face of missing data and comparing heritabilities of growth trajectory parameters estimated by Bayesian and frequentist approaches. Using pseudo-Bayes factors, we compare the performance of an initial Bayesian logistic growth model and a model that incorporates carbon assimilation (Amax) as a cofactor, thus statistically accounting for genotypic differences in carbon resources. We further evaluate two remotely sensed spectroradiometric indices, photochemical reflectance (pri2) and MERIS Terrestrial Chlorophyll Index (mtci) as covariates in lieu of Amax, because these two indices were genetically correlated with Amax across years and treatments yet allow much higher throughput compared to direct leaf-level gas-exchange measurements. For leaf lengths in uncrowded settings, including Amax improves model fit over the initial model. The mtci and pri2 indices also outperform direct Amax measurements. Of particular importance for evolutionary biologists and plant breeders, hierarchical Bayesian models estimating FVT parameters improve heritabilities compared to frequentist approaches.
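For orientation, a minimal sketch of the logistic trajectory that serves as the function-valued trait is given below; in the paper's hierarchical Bayesian setting the parameters would be given genotype-level priors and Amax (or mtci/pri2) would enter as a covariate, whereas here the parameter values are purely illustrative.

```python
import numpy as np

def leaf_length(t, A, r, t0):
    """Logistic growth trajectory used as the function-valued trait:
    A = asymptotic leaf length, r = growth rate, t0 = inflection time."""
    return A / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical genotype-level parameters; in the hierarchical model these would
# be drawn from priors, with Amax (or a spectral index) as a covariate on A or r.
t_days = np.linspace(0, 30, 7)                        # days after leaf initiation
print(leaf_length(t_days, A=60.0, r=0.25, t0=12.0))   # leaf lengths in mm, illustrative only
```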
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media or plate-count technique on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) was established. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors but also their interactions to be estimated. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate-count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
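As a sketch of the final combination step, the snippet below merges independent relative uncertainty components by root-sum-of-squares, the standard practice for a combined relative uncertainty; the component values are hypothetical and not taken from the study.

```python
import math

def combined_relative_uncertainty(components_percent):
    """Combine independent relative uncertainty components (in %) by root-sum-of-squares."""
    return math.sqrt(sum(u ** 2 for u in components_percent))

# Hypothetical component uncertainties: microorganism type, product matrix,
# and reading/interpretation error (values are illustrative, not from the study).
print(f"{combined_relative_uncertainty([20.0, 15.0, 18.0]):.0f}%")   # about 31%, below the 35% bound
```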
Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli
2015-01-01
In the development of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This drives scientists in the field to develop more powerful analytical methods that yield more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for the quality control, routine analysis and dissolution testing of marketed tablets containing ZID and LAM.
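A minimal sketch of a PLS and PCR calibration of two co-formulated drugs from mixture spectra is shown below, using scikit-learn and synthetic spectra; the spectral data, concentration ranges, and component counts are illustrative assumptions, not the published calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: spectra (rows) of standard mixtures with known
# ZID and LAM concentrations; values are synthetic, for illustration only.
rng = np.random.default_rng(0)
wavelengths = 120
pure_zid, pure_lam = rng.random(wavelengths), rng.random(wavelengths)
conc = rng.uniform(5, 50, size=(40, 2))                       # columns: ZID, LAM (mg/L)
spectra = conc @ np.vstack([pure_zid, pure_lam]) + 0.01 * rng.normal(size=(40, wavelengths))

pls = PLSRegression(n_components=2).fit(spectra, conc)
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(spectra, conc)

unknown = np.atleast_2d(20.0 * pure_zid + 35.0 * pure_lam)    # spectrum of a dissolution sample
print("PLS estimate:", pls.predict(unknown).round(1))          # approximately [20, 35]
print("PCR estimate:", pcr.predict(unknown).round(1))
```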
Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun
2014-09-01
In this study, we have demonstrated feasibility of a semi-quantitative approach for the estimation of cellular SiO2 nanoparticles (NPs), which is based on the flow cytometry measurements of their normalized side scattering intensity. In order to improve our understanding on the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed for the HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a semi-quantitative cellular SiO2 NPs estimation method from their normalized side scattering and core diameters was proposed, which can be applied for the determination of cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2010-01-01
The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed-trace method, a recursive least-squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 [s] stationary intervals, with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also reflects the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
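The following sketch illustrates the estimation idea on synthetic signals: ankle torque is modeled as KP times the inclination angle plus KD times the angular velocity, and the gains are recovered by a recursive least-squares filter. A simple forgetting-factor RLS is used here in place of the paper's fixed-trace variant, and all signal and gain values are assumptions.

```python
import numpy as np

# Synthetic inclination signal and PD ankle torque with measurement noise.
rng = np.random.default_rng(0)
dt, n = 0.01, 1000
t = np.arange(n) * dt
theta = 0.1 * np.sin(2 * np.pi * 0.5 * t)             # inclination angle [rad]
dtheta = np.gradient(theta, dt)                        # angular velocity [rad/s]
Kp_true, Kd_true = 800.0, 250.0
torque = Kp_true * theta + Kd_true * dtheta + rng.normal(0, 1.0, n)

# Recursive least squares with a forgetting factor (stand-in for the fixed-trace method).
w = np.zeros(2)                                        # running estimate of [Kp, Kd]
P = np.eye(2) * 1e6                                    # inverse-covariance-like matrix
lam = 0.995                                            # forgetting factor
for k in range(n):
    phi = np.array([theta[k], dtheta[k]])              # regressor at time step k
    gain = P @ phi / (lam + phi @ P @ phi)
    w = w + gain * (torque[k] - phi @ w)
    P = (P - np.outer(gain, phi) @ P) / lam

print(f"estimated Kp = {w[0]:.0f}, Kd = {w[1]:.0f}")   # close to 800 and 250
```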
Gao, Ying; Goodnough, Candida L.; Erokwu, Bernadette O.; Farr, George W.; Darrah, Rebecca; Lu, Lan; Dell, Katherine M.; Yu, Xin; Flask, Chris A.
2014-01-01
Arterial Spin Labeling (ASL) is a valuable non-contrast perfusion MRI technique with numerous clinical applications. Many previous ASL MRI studies have utilized either Echo-Planar Imaging (EPI) or True Fast Imaging with Steady-State Free Precession (True FISP) readouts that are prone to off-resonance artifacts on high field MRI scanners. We have developed a rapid ASL-FISP MRI acquisition for high field preclinical MRI scanners providing perfusion-weighted images with little or no artifacts in less than 2 seconds. In this initial implementation, a FAIR (Flow-Sensitive Alternating Inversion Recovery) ASL preparation was combined with a rapid, centrically-encoded FISP readout. Validation studies on healthy C57/BL6 mice provided consistent estimation of in vivo mouse brain perfusion at 7 T and 9.4 T (249±38 ml/min/100g and 241±17 ml/min/100g, respectively). The utility of this method was further demonstrated in detecting significant perfusion deficits in a C57/BL6 mouse model of ischemic stroke. Reasonable kidney perfusion estimates were also obtained for a healthy C57/BL6 mouse exhibiting differential perfusion in the renal cortex and medulla. Overall, the ASL-FISP technique provides a rapid and quantitative in vivo assessment of tissue perfusion for high field MRI scanners with minimal image artifacts. PMID:24891124
Comb-push ultrasound shear elastography of breast masses: initial results show promise.
Denis, Max; Mehrmohammadi, Mohammad; Song, Pengfei; Meixner, Duane D; Fazzio, Robert T; Pruthi, Sandhya; Whaley, Dana H; Chen, Shigao; Fatemi, Mostafa; Alizad, Azra
2015-01-01
To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique, which utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) in comparison to benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass quantified by the Young's modulus is significantly higher in malignant masses. According to the receiver operating characteristic curve (ROC), the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses.
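For reference, the conversion from shear wave speed to Young's modulus that underlies stiffness values such as the 83 kPa cut-off typically assumes a linear, isotropic, incompressible medium, E = 3·rho·c²; the sketch below applies this standard relation with an assumed tissue density of 1000 kg/m³.

```python
def youngs_modulus_kpa(shear_wave_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus from shear wave speed assuming a linear, isotropic,
    incompressible medium: E = 3 * rho * c_s**2, reported in kPa."""
    return 3.0 * density_kg_m3 * shear_wave_speed_m_s ** 2 / 1000.0

# Mean speeds reported above: malignant 6 m/s, benign 3.65 m/s.
print(youngs_modulus_kpa(6.0))    # about 108 kPa
print(youngs_modulus_kpa(3.65))   # about 40 kPa -- the 83 kPa cut-off sits in between
```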
The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.
ERIC Educational Resources Information Center
Davis, Lawrence C.; Chou, Nancy C.
1998-01-01
Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)
Flood hazards studies in the Mississippi River basin using remote sensing
NASA Technical Reports Server (NTRS)
Rango, A.; Anderson, A. T.
1974-01-01
The Spring 1973 Mississippi River flood was investigated using remotely sensed data from ERTS-1. Both manual and automatic analyses of the data indicated that ERTS-1 is extremely useful as a regional tool for flood management. Quantitative estimates of area flooded were made in St. Charles County, Missouri and Arkansas. Flood hazard mapping was conducted in three study areas along the Mississippi River using pre-flood ERTS-1 imagery enlarged to 1:250,000 and 1:100,000 scale. Initial results indicate that ERTS-1 digital mapping of flood prone areas can be performed at 1:62,500 which is comparable to some conventional flood hazard map scales.
Dynamic deformable models for 3D MRI heart segmentation
NASA Astrophysics Data System (ADS)
Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.
2002-05-01
Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.
Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.
Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D
2002-12-20
A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR in combination with the general equation describing high substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions where initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) amplification to examine the effect of primer mismatches on amplification. A high degree of correlation was obtained, providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
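A minimal sketch of the modeling idea is given below: each cycle's increment follows a Michaelis-Menten rate law with a high-substrate inhibition term, so early cycles grow roughly exponentially while later increments shrink. The rate-law form and all parameter values are illustrative assumptions, not the fitted model from the paper.

```python
def cycle_rate(S, Vmax, Km, Ki):
    """Per-cycle initial rate with high-substrate inhibition:
    v = Vmax*S / (Km + S + S**2/Ki)."""
    return Vmax * S / (Km + S + S ** 2 / Ki)

# Hypothetical parameters: early cycles grow roughly exponentially (v ~ Vmax*S/Km),
# while the S**2/Ki term progressively suppresses the per-cycle increment and
# pushes the progress curve toward the plateau phase.
Vmax, Km, Ki = 2.0, 5.0, 20.0
S = [1e-3]                          # product amount, arbitrary units
for _ in range(40):
    S.append(S[-1] + cycle_rate(S[-1], Vmax, Km, Ki))

for cycle in (10, 20, 30, 40):
    print(f"cycle {cycle}: product = {S[cycle]:.3f}")
```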
Automated analysis of plethysmograms for functional studies of hemodynamics
NASA Astrophysics Data System (ADS)
Zatrudina, R. Sh.; Isupov, I. B.; Gribkov, V. Yu.
2018-04-01
The most promising method for the quantitative determination of cardiovascular tone indicators and of cerebral hemodynamics indicators is impedance plethysmography. The accurate determination of these indicators requires the correct identification of the characteristic points in the thoracic and cranial impedance plethysmograms, respectively. An algorithm for automatic analysis of these plethysmograms is presented. The algorithm is based on the strict temporal relationships between the phases of the cardiac cycle and the characteristic points of the plethysmogram. The proposed algorithm does not require estimation of initial data or selection of processing parameters. Use of the method on healthy subjects showed a very low detection error for the characteristic points.
Adkin, A; Brouwer, A; Downs, S H; Kelly, L
2016-01-01
The adoption of bovine tuberculosis (bTB) risk-based trading (RBT) schemes has the potential to reduce the risk of bTB spread. However, any scheme will have cost implications that need to be balanced against its likely success in reducing bTB. This paper describes the first stochastic quantitative model assessing the impact of the implementation of a cattle risk-based trading scheme to inform policy makers and contribute to cost-benefit analyses. A risk assessment for England and Wales was developed to estimate the number of infected cattle traded using historic movement data recorded between July 2010 and June 2011. Three scenarios were implemented: cattle traded with no RBT scheme in place, voluntary provision of the score and a compulsory, statutory scheme applying a bTB risk score to each farm. For each scenario, changes in trade were estimated due to provision of the risk score to potential purchasers. An estimated mean of 3981 bTB infected animals were sold to purchasers with no RBT scheme in place in one year, with 90% confidence the true value was between 2775 and 5288. This result depends on the estimated between-herd prevalence used in the risk assessment, which is uncertain. With the voluntary provision of the risk score by farmers, on average, 17% of movements were affected (the purchaser did not wish to buy once the risk score was available), with a reduction of 23% in infected animals being purchased initially. The compulsory provision of the risk score in a statutory scheme resulted in an estimated mean change to 26% of movements, with a reduction of 37% in infected animals being purchased initially, increasing to a 53% reduction in infected movements from higher risk sellers (score 4 and 5). The estimated mean reduction in infected animals being purchased could be improved to 45% given a 10% reduction in risky purchase behaviour by farmers, which may be achieved through education programmes, or to an estimated mean of 49% if a rule were implemented preventing farmers from purchasing animals of higher risk than their own herd. Given the voluntary trials of a trading scheme currently taking place, recommendations for future work include the monitoring of initial uptake and changes in the purchase patterns of farmers. Such data could be used to update the risk assessment to reduce uncertainty associated with model estimates. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimation of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. The traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly provide the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes the research findings on the quantitative estimation of the vegetation cover and management factor from remote sensing data and analyzes the advantages and disadvantages of various methods, aiming to provide a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scales.
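One widely cited remote sensing formulation, proposed by van der Knijff et al., estimates the C factor from NDVI with an exponential relation; the sketch below applies it with the commonly quoted constants alpha = 2 and beta = 1, which should be treated as assumptions that may need local calibration.

```python
import math

def c_factor_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    """Vegetation cover and management (C) factor estimated from NDVI using the
    exponential relation C = exp(-alpha * NDVI / (beta - NDVI)); alpha and beta
    are empirical constants."""
    return math.exp(-alpha * ndvi / (beta - ndvi))

# Denser vegetation (higher NDVI) maps to a smaller C factor, i.e. less erosion risk.
for ndvi in (0.2, 0.5, 0.8):
    print(f"NDVI = {ndvi:.1f} -> C = {c_factor_from_ndvi(ndvi):.3f}")
```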
NASA Technical Reports Server (NTRS)
Lo, Yunnhon; Johnson, Stephen B.; Breckenridge, Jonathan T.
2014-01-01
The theory of System Health Management (SHM) and of its operational subset Fault Management (FM) states that FM is implemented as a "meta" control loop, known as an FM Control Loop (FMCL). The FMCL detects that all or part of a system is now failed, or in the future will fail (that is, cannot be controlled within acceptable limits to achieve its objectives), and takes a control action (a response) to return the system to a controllable state. In terms of control theory, the effectiveness of each FMCL is estimated based on its ability to correctly estimate the system state, and on the speed of its response to the current or impending failure effects. This paper describes how this theory has been successfully applied on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) Program to quantitatively estimate the effectiveness of proposed abort triggers so as to select the most effective suite to protect the astronauts from catastrophic failure of the SLS. The premise behind this process is to be able to quantitatively provide the value versus risk trade-off for any given abort trigger, allowing decision makers to make more informed decisions. All current and planned crewed launch vehicles have some form of vehicle health management system integrated with an emergency launch abort system to ensure crew safety. While the design can vary, the underlying principle is the same: detect imminent catastrophic vehicle failure, initiate launch abort, and extract the crew to safety. Abort triggers are the detection mechanisms that identify that a catastrophic launch vehicle failure is occurring or is imminent and cause the initiation of a notification to the crew vehicle that the escape system must be activated. While ensuring that the abort triggers provide this function, designers must also ensure that the abort triggers do not signal that a catastrophic failure is imminent when in fact the launch vehicle can successfully achieve orbit. That is, the abort triggers must have low false negative rates to be sure that real crew-threatening failures are detected, and also low false positive rates to ensure that the crew does not abort from non-crew-threatening launch vehicle behaviors. The analysis process described in this paper is a compilation of over six years of lessons learned and refinements from experiences developing abort triggers for NASA's Constellation Program (Ares I Project) and the SLS Program, as well as the simultaneous development of SHM/FM theory. The paper will describe the abort analysis concepts and process, developed in conjunction with SLS Safety and Mission Assurance (S&MA) to define a common set of mission phase, failure scenario, and Loss of Mission Environment (LOME) combinations upon which the SLS Loss of Mission (LOM) Probabilistic Risk Assessment (PRA) models are built. This abort analysis also requires strong coordination with the Multi-Purpose Crew Vehicle (MPCV) and SLS Structures and Environments (STE) to formulate a series of abortability tables that encapsulate explosion dynamics over the ascent mission phase. The design and assessment of abort conditions and triggers to estimate their Loss of Crew (LOC) Benefits also requires in-depth integration with other groups, including Avionics, Guidance, Navigation and Control(GN&C), the Crew Office, Mission Operations, and Ground Systems. The outputs of this analysis are a critical input to SLS S&MA's LOC PRA models. 
The process described here may well be the first full quantitative application of SHM/FM theory to the selection of a sensor suite for any aerospace system.
Couderc, Jean-Philippe
2010-01-01
The sharing of scientific data reinforces open scientific inquiry; it encourages diversity of analysis and opinion while promoting new research and facilitating the education of next generations of scientists. In this article, we present an initiative for the development of a repository containing continuous electrocardiographic information and its associated clinical information. This information is shared with the worldwide scientific community in order to improve quantitative electrocardiology and cardiac safety. First, we present the objectives of the initiative and its mission. Then, we describe the resources available in this initiative following three components: data, expertise and tools. The data available in the Telemetric and Holter ECG Warehouse (THEW) include continuous ECG signals and associated clinical information. The initiative attracted various academic and private partners whose expertise covers a wide range of research arenas related to quantitative electrocardiography; their contribution to the THEW promotes cross-fertilization of scientific knowledge, resources, and ideas that will advance the field of quantitative electrocardiography. Finally, the tools of the THEW include software and servers to access and review the data available in the repository. To conclude, the THEW is an initiative developed to benefit the scientific community and to advance the field of quantitative electrocardiography and cardiac safety. It is a new repository designed to complement the existing ones such as Physionet, the AHA-BIH Arrhythmia Database, and the CSE database. The THEW hosts unique datasets from clinical trials and drug safety studies that, so far, were not available to the worldwide scientific community. PMID:20863512
Otazú, Ivone B; Tavares, Rita de Cassia B; Hassan, Rocío; Zalcberg, Ilana; Tabak, Daniel G; Seuánez, Héctor N
2002-02-01
Serial assays of qualitative (multiplex and nested) and quantitative PCR were carried out for detecting and estimating the level of BCR-ABL transcripts in 39 CML patients following bone marrow transplantation. Seven of these patients, who received donor lymphocyte infusions (DLIs) following relapse, were also monitored. Quantitative estimates of BCR-ABL transcripts were obtained by co-amplification with a competitor sequence. Estimates of ABL transcripts were used as an internal control, and the BCR-ABL/ABL ratio was thus estimated for evaluating the kinetics of residual clones. Twenty-four patients were followed shortly after BMT; two of these patients were in cytogenetic relapse coexisting with very high BCR-ABL levels while the other 22 were in clinical, haematologic and cytogenetic remission 2-42 months after BMT. In this latter group, seven patients showed a favourable clinical-haematological progression in association with molecular remission, while in 14 patients quantitative PCR assays indicated molecular relapse that was not associated with an early cytogenetic-haematologic relapse. BCR-ABL/ABL levels could not be correlated with presence of GVHD in 24 patients after BMT. In all seven patients treated with DLI, high levels of transcripts were detected at least 4 months before the appearance of clinical haematological relapse. Following DLI, five of these patients showed transcript levels decreasing by 2 to 5 logs between 4 and 12 months. Of eight other patients studied long after BMT, five showed molecular relapse up to 117 months post-BMT and only one showed cytogenetic relapse. Our findings indicated that quantitative estimates of BCR-ABL transcripts were valuable for monitoring minimal residual disease in each patient.
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
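To illustrate the pseudo-predator construction discussed above, the sketch below bootstrap-resamples prey signatures for each prey type, mixes the bootstrap means according to a known diet, and renormalizes; calibration coefficients and other QFASA details are omitted, and the prey library, diet, and sample sizes are synthetic assumptions.

```python
import numpy as np

def pseudo_predator(prey_sigs, diet, n_boot, rng):
    """Build one pseudo-predator fatty acid signature with a known diet.

    prey_sigs: dict mapping prey species -> (n_samples, n_fatty_acids) array
    diet:      dict mapping prey species -> diet proportion (sums to 1)
    n_boot:    bootstrap sample size drawn per prey species
    """
    mix = 0.0
    for species, proportion in diet.items():
        sigs = prey_sigs[species]
        idx = rng.integers(0, sigs.shape[0], size=n_boot)     # bootstrap resample
        mix = mix + proportion * sigs[idx].mean(axis=0)        # diet-weighted bootstrap mean
    return mix / mix.sum()                                     # renormalize to proportions

# Hypothetical prey library with 3 fatty acids and a 60/40 diet.
rng = np.random.default_rng(0)
prey = {"seal": rng.dirichlet([5, 3, 2], size=30),
        "cod": rng.dirichlet([2, 5, 3], size=30)}
print(pseudo_predator(prey, {"seal": 0.6, "cod": 0.4}, n_boot=10, rng=rng))
```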
NASA Astrophysics Data System (ADS)
Cabello, Violeta
2017-04-01
This communication will present the development of an innovative analytical framework for the analysis of the Water-Energy-Food-Climate Nexus, termed Quantitative Story Telling (QST). The methodology is currently under development within the H2020 project MAGIC - Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (www.magic-nexus.eu). The key innovation of QST is that it bridges qualitative and quantitative analytical tools into an iterative research process in which each step is built and validated in interaction with stakeholders. The qualitative analysis focusses on the identification of the narratives behind the development of relevant WEFC-Nexus policies and innovations. The quantitative engine is the Multi-Scale Analysis of Societal and Ecosystem Metabolism (MuSIASEM), a resource accounting toolkit capable of integrating multiple analytical dimensions at different scales through relational analysis. Although QST is better described as a story-driven rather than a data-driven approach, I will argue that improving models per se may not lead to an improved understanding of WEF-Nexus problems unless we are capable of generating more robust narratives to frame them. The communication will cover an introduction to the MAGIC project, the basic concepts of QST and a case study focussed on agricultural production in a semi-arid region in Southern Spain. Data requirements for this case study and the limitations in finding, accessing or estimating them will be presented alongside a reflection on the relation between analytical scales and data availability.
NASA Astrophysics Data System (ADS)
Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte
2007-01-01
We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation- Based Quality (IBQ) method in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is valuable especially, when the induced quality changes are multidimensional.
Clinical significance of quantitative analysis of facial nerve enhancement on MRI in Bell's palsy.
Song, Mee Hyun; Kim, Jinna; Jeon, Ju Hyun; Cho, Chang Il; Yoo, Eun Hye; Lee, Won-Sang; Lee, Ho-Ki
2008-11-01
Quantitative analysis of the facial nerve on the lesion side as well as the normal side, which allowed for more accurate measurement of facial nerve enhancement in patients with facial palsy, showed statistically significant correlation with the initial severity of facial nerve inflammation, although little prognostic significance was shown. This study investigated the clinical significance of quantitative measurement of facial nerve enhancement in patients with Bell's palsy by analyzing the enhancement pattern and correlating MRI findings with initial severity of facial palsy and clinical outcome. Facial nerve enhancement was measured quantitatively by using the region of interest on pre- and postcontrast T1-weighted images in 44 patients diagnosed with Bell's palsy. The signal intensity increase on the lesion side was first compared with that of the contralateral side and then correlated with the initial degree of facial palsy and prognosis. The lesion side showed significantly higher signal intensity increase compared with the normal side in all of the segments except for the mastoid segment. Signal intensity increase at the internal auditory canal and labyrinthine segments showed correlation with the initial degree of facial palsy but no significant difference was found between different prognostic groups.
Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-27
... indirectly through changes in regional climate; and (b) Quantitative research on the relationship of food...). Population Estimates The most quantitative estimate of the historic size of the African lion population... research conducted by Chardonnet et al., three subpopulations were described as consisting of 18 groups...
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and the POWER procedure of SAS software, and elaborated them with examples, which will help researchers implement the repetition principle.
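For illustration, a textbook normal-approximation sample-size formula for the paired design with quantitative data is sketched below; it is not necessarily the exact formula used in the article or in the SAS POWER procedure.

```python
from math import ceil
from scipy.stats import norm

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed to detect a mean difference `delta` with a two-sided paired test,
    normal approximation: n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta)**2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(((z_a + z_b) * sd_diff / delta) ** 2)

# e.g. detect a mean paired difference of 2 units when the SD of differences is 5
print(paired_sample_size(delta=2, sd_diff=5))  # about 50 pairs
```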
A new mean estimator using auxiliary variables for randomized response models
NASA Astrophysics Data System (ADS)
Ozgul, Nilgun; Cingi, Hulya
2013-10-01
Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income and tax evasion, in order to ensure interviewee anonymity and reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRM have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. The mean square error is derived and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRM.
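As a hedged illustration of how a quantitative randomized response model yields a mean estimate, the sketch below uses the simplest additive scrambling scheme (respondents report Z = Y + S, where S is a scrambling variable with known distribution); the auxiliary-variable estimator proposed in the paper is not reproduced, and all values are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical sensitive quantitative variable Y (never observed directly)
y = rng.gamma(shape=2.0, scale=10.0, size=n)
# Scrambling variable S with known mean (here 5.0) added by each respondent
s = rng.normal(loc=5.0, scale=2.0, size=n)

z = y + s                          # what the interviewer actually records
mu_hat = z.mean() - 5.0            # unbiased mean estimate: E[Z] - E[S]
se_hat = z.std(ddof=1) / np.sqrt(n)

print(f"true mean {y.mean():.2f}, RRM estimate {mu_hat:.2f} (SE {se_hat:.2f})")
```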
Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.
2013-09-01
This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For this latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for each precipitation type (stratiform and neither stratiform nor convective). As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V
2017-03-19
The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task since it includes multiple subjective parameters. Here we report the development of a quantitative non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting the acceleration of aging by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that the PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. The PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging modulatory factors. Together, these data introduce the PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
Hu, Zhe-Yi; Parker, Robert B.; Herring, Vanessa L.; Laizure, S. Casey
2012-01-01
Dabigatran etexilate (DABE) is an oral prodrug that is rapidly converted by esterases to dabigatran (DAB), a direct inhibitor of thrombin. To elucidate the esterase-mediated metabolic pathway of DABE, a high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based metabolite identification and semi-quantitative estimation approach was developed. To overcome the poor full-scan sensitivity of conventional triple quadrupole mass spectrometry, precursor-product ion pairs were predicted to search for the potential in vitro metabolites. The detected metabolites were confirmed by the product ion scan. A dilution method was introduced to evaluate the matrix effects of tentatively identified metabolites without chemical standards. Quantitative information on detected metabolites was obtained using ‘metabolite standards’ generated from incubation samples that contain a high concentration of metabolite, in combination with a correction factor for mass spectrometry response. Two in vitro metabolites of DABE (M1 and M2) were identified and quantified by the semi-quantitative estimation approach. It is noteworthy that CES1 converts DABE to M1 while CES2 mediates the conversion of DABE to M2; M1 (or M2) was further metabolized to DAB by CES2 (or CES1). The approach presented here provides a solution to a bioanalytical need for fast identification and semi-quantitative estimation of CES metabolites in preclinical samples. PMID:23239178
NASA Astrophysics Data System (ADS)
Robertson, K. M.; Milliken, R. E.; Li, S.
2016-10-01
Quantitative mineral abundances of lab derived clay-gypsum mixtures were estimated using a revised Hapke VIS-NIR and Shkuratov radiative transfer model. Montmorillonite-gypsum mixtures were used to test the effectiveness of the model in distinguishing between subtle differences in minor absorption features that are diagnostic of mineralogy in the presence of strong H2O absorptions that are not always diagnostic of distinct phases or mineral abundance. The optical constants (k-values) for both endmembers were determined from bi-directional reflectance spectra measured in RELAB as well as on an ASD FieldSpec3 in a controlled laboratory setting. Multiple size fractions were measured in order to derive a single k-value from optimization of the optical path length in the radiative transfer models. It is shown that with careful experimental conditions, optical constants can be accurately determined from powdered samples using a field spectrometer, consistent with previous studies. Variability in the montmorillonite hydration level increased the uncertainties in the derived k-values, but estimated modal abundances for the mixtures were still within 5% of the measured values. Results suggest that the Hapke model works well in distinguishing between hydrated phases that have overlapping H2O absorptions and it is able to detect gypsum and montmorillonite in these simple mixtures where they are present at levels of ∼10%. Care must be taken however to derive k-values from a sample with appropriate H2O content relative to the modeled spectra. These initial results are promising for the potential quantitative analysis of orbital remote sensing data of hydrated minerals, including more complex clay and sulfate assemblages such as mudstones examined by the Curiosity rover in Gale crater.
Abdul Rahman, Hanif; Abdul-Mumin, Khadizah; Naing, Lin
2017-03-01
Little evidence exists on exposure to psychosocial work stressors, work-related fatigue, and musculoskeletal disorders among nurses working in the South-East Asian region, and research on this subject is almost nonexistent in Brunei. The main aim of our study was to provide a comprehensive exploration and estimate of exposure to the study variables amongst emergency (ER) and critical care (CC) nurses in Brunei. The study also aimed to compare whether the experiences of ER nurses differ from those of CC nurses. This cross-sectional study was implemented in the ER and CC departments across Brunei public hospitals from February to April 2016 by using the Copenhagen Psychosocial Questionnaire II, the Occupational Fatigue Exhaustion Recovery scale, and the Cornell Musculoskeletal Discomfort Questionnaire. In total, 201 ER and CC nurses (82.0% response rate) participated in the study. The quantitative demands on CC nurses were significantly higher than those on ER nurses. Even so, ER nurses were 4.0 times more likely [95% confidence interval: (2.21, 7.35)] to experience threats of violence, and 2.8 times more likely [95% confidence interval: (1.50, 5.29)] to experience chronic fatigue. The results revealed that nurses experienced high quantitative demands, work pace, stress, and burnout. A high prevalence of chronic and persistent fatigue, threats of violence and bullying, and musculoskeletal pain at the neck, shoulder, upper and lower back, and foot region was also reported. This study has provided good estimates of the exposure rate to psychosocial work stressors, work-related fatigue, and musculoskeletal disorders among nurses in Brunei. It provides important initial insight for nursing management and policymakers to make informed decisions on current and future planning to provide nurses with a conducive work environment. Copyright © 2017. Published by Elsevier B.V.
Dinsdale, Graham; Moore, Tonia; O'Leary, Neil; Berks, Michael; Roberts, Christopher; Manning, Joanne; Allen, John; Anderson, Marina; Cutolo, Maurizio; Hesselstrand, Roger; Howell, Kevin; Pizzorni, Carmen; Smith, Vanessa; Sulli, Alberto; Wildt, Marie; Taylor, Christopher; Murray, Andrea; Herrick, Ariane L
2017-09-01
Nailfold capillaroscopic parameters hold increasing promise as outcome measures for clinical trials in systemic sclerosis (SSc). Their inclusion as outcomes would often naturally require capillaroscopy images to be captured at several time points during any one study. Our objective was to assess repeatability of image acquisition (which has been little studied), as well as of measurement. 41 patients (26 with SSc, 15 with primary Raynaud's phenomenon) and 10 healthy controls returned for repeat high-magnification (300×) videocapillaroscopy mosaic imaging of 10 digits one week after initial imaging (as part of a larger study of reliability). Images were assessed in a random order by an expert blinded observer and 4 outcome measures extracted: (1) overall image grade and then (where possible) distal vessel locations were marked, allowing (2) vessel density (across the whole nailfold) to be calculated, (3) apex width measurement and (4) giant vessel count. Intra-rater, intra-visit and intra-rater inter-visit (baseline vs. 1 week) reliability were examined in 475 and 392 images, respectively. A linear, mixed-effects model was used to estimate variance components, from which intra-class correlation coefficients (ICCs) were determined. Intra-visit and inter-visit reliability estimates (ICCs) were (respectively): overall image grade, 0.97 and 0.90; vessel density, 0.92 and 0.65; mean vessel width, 0.91 and 0.79; presence of giant capillary, 0.68 and 0.56. These estimates were conditional on each parameter being measurable. Within-operator image analysis and acquisition are reproducible. Quantitative nailfold capillaroscopy, at least with a single observer, provides reliable outcome measures for clinical studies including randomised controlled trials. Copyright © 2017 Elsevier Inc. All rights reserved.
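As a hedged illustration of how ICCs follow from variance components, the sketch below computes a one-way random-effects ICC from a small invented set of repeated measurements; the study's actual model includes additional terms (rater, visit), so this is only the basic idea.

```python
import numpy as np

def icc_oneway(scores):
    """ICC(1) from a (subjects x repeated measurements) array via one-way random-effects ANOVA."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical vessel-density measurements: 6 nailfolds, each assessed at 2 visits
data = [[7.1, 6.9], [5.4, 5.8], [8.0, 7.7], [6.2, 6.5], [9.1, 8.8], [4.9, 5.2]]
print(round(icc_oneway(data), 3))
```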
Multiparametric Quantitative Ultrasound Imaging in Assessment of Chronic Kidney Disease.
Gao, Jing; Perlman, Alan; Kalache, Safa; Berman, Nathaniel; Seshan, Surya; Salvatore, Steven; Smith, Lindsey; Wehrli, Natasha; Waldron, Levi; Kodali, Hanish; Chevalier, James
2017-11-01
To evaluate the value of multiparametric quantitative ultrasound imaging in assessing chronic kidney disease (CKD) using kidney biopsy pathologic findings as reference standards. We prospectively measured multiparametric quantitative ultrasound markers with grayscale, spectral Doppler, and acoustic radiation force impulse imaging in 25 patients with CKD before kidney biopsy and 10 healthy volunteers. Based on all pathologic (glomerulosclerosis, interstitial fibrosis/tubular atrophy, arteriosclerosis, and edema) scores, the patients with CKD were classified into mild (no grade 3 and <2 of grade 2) and moderate to severe (at least 2 of grade 2 or 1 of grade 3) CKD groups. Multiparametric quantitative ultrasound parameters included kidney length, cortical thickness, pixel intensity, parenchymal shear wave velocity, intrarenal artery peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index. We tested the difference in quantitative ultrasound parameters among mild CKD, moderate to severe CKD, and healthy controls using analysis of variance, analyzed correlations of quantitative ultrasound parameters with pathologic scores and the estimated glomerular filtration rate (GFR) using Pearson correlation coefficients, and examined the diagnostic performance of quantitative ultrasound parameters in determining moderate CKD and an estimated GFR of less than 60 mL/min/1.73 m² using receiver operating characteristic curve analysis. There were significant differences in cortical thickness, pixel intensity, PSV, and EDV among the 3 groups (all P < .01). Among quantitative ultrasound parameters, the top areas under the receiver operating characteristic curves for PSV and EDV were 0.88 and 0.97, respectively, for determining pathologic moderate to severe CKD, and 0.76 and 0.86 for an estimated GFR of less than 60 mL/min/1.73 m². Moderate to good correlations were found for PSV, EDV, and pixel intensity with pathologic scores and estimated GFR. The PSV, EDV, and pixel intensity are valuable in determining moderate to severe CKD. The value of shear wave velocity in assessing CKD needs further investigation. © 2017 by the American Institute of Ultrasound in Medicine.
Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.
2015-01-01
The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single-trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significantly large differences between detection and estimation of quantitative features among methods. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
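A minimal sketch of the two kinds of agreement statistics named in the abstract, Cohen's κ for categorical decisions and Bland-Altman limits of agreement for quantitative estimates, using invented single-trial data rather than the study's recordings.

```python
import numpy as np

def cohens_kappa(a, b):
    """Categorical agreement between two binary raters (1 = ERP present, 0 = absent)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                            # observed agreement
    pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)  # chance agreement
    return (po - pe) / (1 - pe)

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two quantitative estimates."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical single-trial decisions and latency estimates (ms): observer vs. algorithm
print(cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1], [1, 0, 0, 1, 0, 1, 1, 1]))
print(bland_altman([452, 430, 498, 476], [448, 441, 489, 470]))
```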
Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.
2008-08-01
A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
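As a hedged illustration, the sketch below fits a Kelvin-Voigt dispersion model to simulated shear-wave-speed data with nonlinear least squares; the abstract only says "a viscoelastic model", so the specific model, density and parameter values here are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density, kg/m^3

def voigt_speed(omega, mu, eta):
    """Kelvin-Voigt shear wave phase velocity; mu = shear modulus (Pa), eta = viscosity (Pa*s)."""
    m = np.sqrt(mu ** 2 + (omega * eta) ** 2)
    return np.sqrt(2.0 * (mu ** 2 + (omega * eta) ** 2) / (RHO * (mu + m)))

# Simulated dispersive shear-wave-speed data (vibration frequencies in Hz, speeds in m/s)
freq = np.array([100., 150., 200., 250., 300., 350., 400.])
omega = 2 * np.pi * freq
c_meas = voigt_speed(omega, 12e3, 6.0) + np.random.default_rng(0).normal(0, 0.05, freq.size)

(mu_hat, eta_hat), _ = curve_fit(voigt_speed, omega, c_meas, p0=(5e3, 1.0))
print(f"shear modulus ~ {mu_hat / 1e3:.1f} kPa, viscosity ~ {eta_hat:.1f} Pa*s")
```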
Bourret, A; Garant, D
2017-03-01
Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative estimation of the depth of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides a complete depth scan of the object by stimulating it with a suitable band of frequencies and then analyzing the resulting thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield only finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which further improves the depth resolution so that the finest subsurface features can be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
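A minimal sketch of the spectral-zooming idea using SciPy's chirp z-transform; the sampling rate, zoom band and synthetic thermal response below are placeholders rather than values from the paper.

```python
import numpy as np
from scipy.signal import czt

fs = 25.0                          # assumed thermal-camera frame rate (Hz)
t = np.arange(0, 400, 1 / fs)      # long capture so two close tones are resolvable
# Synthetic pixel-wise thermal response containing two closely spaced frequencies
x = np.sin(2 * np.pi * 0.050 * t) + 0.5 * np.sin(2 * np.pi * 0.056 * t)

f1, f2, m = 0.04, 0.07, 512        # zoom band (Hz) and number of output bins
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # ratio between successive z-plane points
a = np.exp(2j * np.pi * f1 / fs)                # starting point on the unit circle

spectrum = np.abs(czt(x, m=m, w=w, a=a))
freqs = f1 + np.arange(m) * (f2 - f1) / m
print(freqs[np.argmax(spectrum)])  # peak frequency inside the zoomed band
```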
Pedodiversity and Its Significance in the Context of Modern Soil Geography
NASA Astrophysics Data System (ADS)
Krasilnikov, P. V.; Gerasimova, M. I.; Golovanov, D. L.; Konyushkova, M. V.; Sidorova, V. A.; Sorokin, A. S.
2018-01-01
Methodological basics of the study and quantitative assessment of pedodiversity are discussed. It is shown that the application of various indices and models of pedodiversity can be feasible for solving three major issues in pedology: a comparative geographical analysis of different territories, a comparative historical analysis of soil development in the course of landscape evolution, and the analysis of relationships between biodiversity and pedodiversity. Analogous geographic concepts of geodiversity and landscape diversity are also discussed. Certain limitations in the use of quantitative estimates of pedodiversity related to their linkage to the particular soil classification systems and with the initial soil maps are considered. Problems of the interpretation of the results of pedodiversity assessments are emphasized. It is shown that scientific explanations of biodiversity cannot be adequately applied in soil studies. Promising directions of further studies of pedodiversity are outlined. They include the assessment of the functional diversity of soils on the basis of data on their properties, integration with geostatistical methods of evaluation of soil variability, and assessment of pedodiversity on different scales.
Smith, Thomas J.; Robblee, Michael B.; Wanless, Harold R.; Doyle, Thomas W.
1994-01-01
The track of Hurricane Andrew carried it across one of the most extensive mangrove forests in the New World. Although it is well known that hurricanes affect mangrove forests, surprisingly little quantitative information exists concerning hurricane impact on forest structure, succession, species composition, and dynamics of mangrove-dependent fauna or on rates of ecosystem recovery (see Craighead and Gilbert 1962, Roth 1992, Smith 1992, Smith and Duke 1987, Stoddart 1969). After Hurricane Andrew's passage across south Florida, we assessed the environmental damage to the natural resources of the Everglades and Biscayne National Parks. Quantitative data collected during subsequent field trips (October 1992 to July 1993) are also provided. We present measurements of initial tree mortality by species and size class, estimates of delayed (or continuing) tree mortality, and observations of geomorphological changes along the coast and in the forests that could influence the course of forest recovery. We discuss a potential interaction across two differing scales of disturbance within mangrove forest systems: hurricanes and lightning strikes.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model, which reduces the on-line system identification time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested based on the innovation process error estimated by the Kalman filter, and a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is used to re-estimate the parameters for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Quantitative Analysis of High-Quality Officer Selection by Commandants Career-Level Education Board
2017-03-01
due to Marines being evaluated before the end of their initial service commitment. Our research utilizes quantitative variables to analyze the...not provide detailed information why. B. LIMITATIONS The photograph analysis in this research is strictly limited to a quantitative analysis in...
Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data
McKenzie, D.; Peterson, D.L.; Agee, James K.
2000-01-01
Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.
A hydro-mechanical framework for early warning of rainfall-induced landslides (Invited)
NASA Astrophysics Data System (ADS)
Godt, J.; Lu, N.; Baum, R. L.
2013-12-01
Landslide early warning requires an estimate of the location, timing, and magnitude of initial movement, and the change in volume and momentum of material as it travels down a slope or channel. In many locations advance assessment of landslide location, volume, and momentum is possible, but prediction of landslide timing entails understanding the evolution of rainfall and soil-water conditions, and consequent effects on slope stability in real time. Existing schemes for landslide prediction generally rely on empirical relations between landslide occurrence and rainfall amount and duration, however, these relations do not account for temporally variable rainfall nor the variably saturated processes that control the hydro-mechanical response of hillside materials to rainfall. Although limited by the resolution and accuracy of rainfall forecasts and now-casts in complex terrain and by the inherent difficulty in adequately characterizing subsurface materials, physics-based models provide a general means to quantitatively link rainfall and landslide occurrence. To obtain quantitative estimates of landslide potential from physics-based models using observed or forecasted rainfall requires explicit consideration of the changes in effective stress that result from changes in soil moisture and pore-water pressures. The physics that control soil-water conditions are transient, nonlinear, hysteretic, and dependent on material composition and history. In order to examine the physical processes that control infiltration and effective stress in variably saturated materials, we present field and laboratory results describing intrinsic relations among soil water and mechanical properties of hillside materials. At the REV (representative elementary volume) scale, the interaction between pore fluids and solid grains can be effectively described by the relation between soil suction, soil water content, hydraulic conductivity, and suction stress. We show that these relations can be obtained independently from outflow, shear strength, and deformation tests for a wide range of earth materials. We then compare laboratory results with measurements of pore pressure and moisture content from landslide-prone settings and demonstrate that laboratory results obtained for hillside materials are representative of field conditions. These fundamental relations provide a basis to combine observed or forecasted rainfall with in-situ measurements of soil water conditions using hydro-mechanical models that simulate transient variably saturated flow and slope stability. We conclude that early warning using an approach in which in-situ observations are used to establish initial conditions for hydro-mechanical models is feasible in areas of high landslide risk where laboratory characterization of materials is practical and accurate rainfall information can be obtained. Analogous to weather and climate forecasting, such models could then be applied in an ensemble fashion to obtain quantitative estimates of landslide probability and error. Application to broader regions likely awaits breakthroughs in the development of remotely sensed proxies of soil properties and subsurface moisture conditions.
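The abstract does not commit to a particular stability model; as one hedged illustration of how a suction-stress term enters a quantitative stability estimate, the sketch below evaluates the classic infinite-slope factor of safety with the effective normal stress reduced by suction stress, using invented soil and slope parameters rather than values from the field sites discussed above.

```python
import numpy as np

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, sigma_s):
    """Infinite-slope factor of safety with a suction-stress term.

    c_eff   : effective cohesion (Pa)
    phi_deg : effective friction angle (deg)
    gamma   : soil unit weight (N/m^3)
    z       : vertical depth of the potential slip surface (m)
    beta_deg: slope angle (deg)
    sigma_s : suction stress (Pa); negative in unsaturated soil, so subtracting it
              increases the effective normal stress and hence the shear strength
    """
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    tau = gamma * z * np.sin(beta) * np.cos(beta)          # driving shear stress
    sigma_n_eff = gamma * z * np.cos(beta) ** 2 - sigma_s  # effective normal stress
    return (c_eff + sigma_n_eff * np.tan(phi)) / tau

# Hypothetical hillside: wetting during a storm drives sigma_s from -5 kPa toward 0
for sigma_s in (-5e3, -2e3, 0.0):
    fs = infinite_slope_fs(c_eff=2e3, phi_deg=33, gamma=18e3, z=1.5, beta_deg=38, sigma_s=sigma_s)
    print(f"suction stress {sigma_s / 1e3:5.1f} kPa -> FS = {fs:.2f}")
```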
Effectiveness of Reirradiation for Painful Bone Metastases: A Systematic Review and Meta-Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huisman, Merel, E-mail: m.huisman-7@umcutrecht.nl; Bosch, Maurice A.A.J. van den; Wijlemans, Joost W.
2012-09-01
Purpose: Reirradiation of painful bone metastases in nonresponders or patients with recurrent pain after initial response is performed in up to 42% of patients initially treated with radiotherapy. Literature on the effect of reirradiation for pain control in those patients is scarce. In this systematic review and meta-analysis, we quantify the effectiveness of reirradiation for achieving pain control in patients with painful bone metastases. Methods and Materials: A free text search was performed to identify eligible studies using the MEDLINE, EMBASE, and the Cochrane Collaboration library electronic databases. After study selection and quality assessment, a pooled estimate was calculated for overall pain response for reirradiation of metastatic bone pain. Results: Our literature search identified 707 titles, of which 10 articles were selected for systematic review and seven entered the meta-analysis. Overall study quality was mediocre. Of the 2,694 patients initially treated for metastatic bone pain, 527 (20%) patients underwent reirradiation. Overall, a pain response after reirradiation was achieved in 58% of patients (pooled overall response rate 0.58, 95% confidence interval = 0.49-0.67). There was a substantial between-study heterogeneity (I² = 63.3%, p = 0.01) because of clinical and methodological differences between studies. Conclusions: Reirradiation of painful bone metastases is effective in terms of pain relief for a small majority of patients; approximately 40% of patients do not benefit from reirradiation. Although the validity of results is limited, this meta-analysis provides a comprehensive overview and the most quantitative estimate of reirradiation effectiveness to date.
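As a hedged illustration of how such a pooled response rate and the I² heterogeneity statistic are commonly obtained, the sketch below applies a DerSimonian-Laird random-effects model to logit-transformed proportions; the study counts are invented and the authors' exact procedure may differ.

```python
import numpy as np

# Hypothetical per-study counts: responders / reirradiated patients
events = np.array([30, 55, 18, 40, 25, 60, 35])
totals = np.array([50, 90, 35, 70, 40, 110, 60])

p = events / totals
y = np.log(p / (1 - p))                  # logit response rates
v = 1 / events + 1 / (totals - events)   # approximate within-study variances

w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)   # Cochran's Q
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)            # DerSimonian-Laird between-study variance
i2 = max(0.0, (q - df) / q) * 100        # heterogeneity, %

w_re = 1 / (v + tau2)                    # random-effects weights
pooled = 1 / (1 + np.exp(-np.sum(w_re * y) / np.sum(w_re)))
print(f"pooled response rate {pooled:.2f}, I^2 = {i2:.0f}%")
```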
Heritable victimization and the benefits of agonistic relationships
Lea, Amanda J.; Blumstein, Daniel T.; Wey, Tina W.; Martin, Julien G. A.
2010-01-01
Here, we present estimates of heritability and selection on network traits in a single population, allowing us to address the evolutionary potential of social behavior and the poorly understood link between sociality and fitness. To evolve, sociality must have some heritable basis, yet the heritability of social relationships is largely unknown. Recent advances in both social network analyses and quantitative genetics allow us to quantify attributes of social relationships and estimate their heritability in free-living populations. Our analyses addressed a variety of measures (in-degree, out-degree, attractiveness, expansiveness, embeddedness, and betweenness), and we hypothesized that traits reflecting relationships controlled by an individual (i.e., those that the individual initiated or were directly involved in) would be more heritable than those based largely on the behavior of conspecifics. Identifying patterns of heritability and selection among related traits may provide insight into which types of relationships are important in animal societies. As expected, we found that variation in indirect measures was largely explained by nongenetic variation. Yet, surprisingly, traits capturing initiated interactions do not possess significant additive genetic variation, whereas measures of received interactions are heritable. Measures describing initiated aggression and position in an agonistic network are under selection (0.3 < |S| < 0.4), although advantageous trait values are not inherited by offspring. It appears that agonistic relationships positively influence fitness and seemingly costly or harmful ties may, in fact, be beneficial. Our study highlights the importance of studying agonistic as well as affiliative relationships to understand fully the connections between sociality and fitness. PMID:21115836
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that, among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for the noisy cases in terms of the quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for the noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates that separate R&C estimation with modeling of RM before CM estimation (Method 3) is the best option for accurate estimation of dual R&C motion in the clinical situation. © 2018 American Association of Physicists in Medicine.
Okuyucu, Kursat; Ozaydın, Sukru; Alagoz, Engin; Ozgur, Gokhan; Oysul, Fahrettin Guven; Ozmen, Ozlem; Tuncel, Murat; Ozturk, Mustafa; Arslan, Nuri
2016-01-01
Background Non-Hodgkin's lymphomas arising from tissues other than the primary lymphatic organs are termed primary extranodal lymphomas. Most studies have evaluated metabolic tumor parameters in particular organs or histopathological variants of this disease, generally with respect to treatment response. In this study we aimed to evaluate the prognostic value of metabolic tumor parameters derived from initial FDG-PET/CT in patients with a mix of primary extranodal lymphomas. Patients and methods There were 67 patients with primary extranodal lymphoma for whom FDG-PET/CT was requested for primary staging. Quantitative PET/CT parameters: maximum standardized uptake value (SUVmax), average standardized uptake value (SUVmean), metabolic tumor volume (MTV) and total lesion glycolysis (TLG) were used to estimate disease-free survival and overall survival. Results SUVmean, MTV and TLG were found statistically significant after multivariate analysis. SUVmean remained significant after ROC curve analysis. Sensitivity and specificity were 88% and 64%, respectively, when the cut-off value of SUVmean was chosen as 5.15. When primary presentation sites and histopathological variants were examined with respect to recurrence, there was no difference amongst the variants. The primary site of extranodal lymphoma, however, was statistically significant (p = 0.014): testis and central nervous system lymphomas had higher recurrence rates (62.5% and 73%, respectively). Conclusions High SUVmean, MTV and TLG values obtained from primary staging FDG-PET/CT are potential risk factors for both disease-free survival and overall survival in primary extranodal lymphoma. SUVmean is the most significant of these for estimating recurrence/metastasis. PMID:27904443
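The abstract does not state how the SUVmean cut-off of 5.15 was selected; Youden's index on the ROC curve is one common choice, sketched below with simulated values rather than the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Simulated SUVmean values for patients with (1) and without (0) recurrence/metastasis
suv = np.concatenate([rng.normal(6.5, 1.5, 30), rng.normal(4.2, 1.2, 37)])
recur = np.concatenate([np.ones(30, int), np.zeros(37, int)])

fpr, tpr, thr = roc_curve(recur, suv)
best = np.argmax(tpr - fpr)              # Youden's index J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(recur, suv):.2f}, cut-off = {thr[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```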
USDA-ARS?s Scientific Manuscript database
Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...
The large-time behavior of the scalar, genuinely nonlinear Lax-Friedrichs scheme
NASA Technical Reports Server (NTRS)
Tadmor, E.
1983-01-01
The Lax-Friedrichs scheme, approximating the scalar, genuinely nonlinear conservation law u_t + f_x(u) = 0, where f(u) is, say, strictly convex (f'' ≥ a_* > 0), is studied. The divided differences of the numerical solution at time t do not exceed 2(t · a_*)^{-1}. This one-sided Lipschitz boundedness is in complete agreement with the corresponding estimate one has in the differential case; in particular, it is independent of the initial amplitude, in sharp contrast to linear problems. It guarantees the entropy compactness of the scheme in this case, as well as providing quantitative insight into the large-time behavior of the numerical computation.
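Written out with assumed notation (v_j(t) the numerical solution on a mesh of width Δx, and a_* the lower bound on f''), the quoted bound is the one-sided Lipschitz estimate

$$\frac{v_{j+1}(t) - v_j(t)}{\Delta x} \;\le\; \frac{2}{t\, a_*}, \qquad f''(u) \ge a_* > 0,$$

the discrete analogue of Oleinik's one-sided Lipschitz condition satisfied by the exact entropy solution of a convex conservation law.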
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Saulsberry, Regor L.; Lucero, Ralph; Nichols, Charles T.; Wentzel, Daniel J.
2010-01-01
ASTM-based ILH methods were found to give a reproducible, quantitative estimate of the stress threshold at which significant accumulated damage began to occur. a) FR events are low energy (<2 V(exp 20 microsec) b) FR events occur close to the observed failure locus. c) FR events consist of more than 30% fiber breakage (>300 kHz) d) FR events show a consistent hierarchy of cooperative damage for composite tow, and for the COPV tested, regardless of applied load. Application of ILH or related stress profiles could lead to robust pass/fail acceptance criteria based on the FR. Initial application of FR and FFT analysis of AE data acquired on COPVs is promising.
Local motion-compensated method for high-quality 3D coronary artery reconstruction
Liu, Bo; Bai, Xiangzhi; Zhou, Fugen
2016-01-01
The 3D reconstruction of coronary artery from X-ray angiograms rotationally acquired on C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was firstly reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to state-of-the-art method. PMID:28018741
Results and Validation of MODIS Aerosol Retrievals Over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, Lorraine; Einaudi, Franco (Technical Monitor)
2001-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
Results and Validation of MODIS Aerosol Retrievals over Land and Ocean
NASA Technical Reports Server (NTRS)
Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)
2000-01-01
The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.
ERIC Educational Resources Information Center
Marrus, Natasha; Faughn, Carley; Shuman, Jeremy; Petersen, Steve E.; Constantino, John N.; Povinelli, Daniel J.; Pruett, John R., Jr.
2011-01-01
Objective: Comparative studies of social responsiveness, an ability that is impaired in autism spectrum disorders, can inform our understanding of both autism and the cognitive architecture of social behavior. Because there is no existing quantitative measure of social responsiveness in chimpanzees, we generated a quantitative, cross-species…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-19
... Review Draft. These two draft assessment documents describe the quantitative analyses the EPA is... NAAQS,\\3\\ the Agency is conducting quantitative assessments characterizing the: (1) Health risks... present the initial key results, observations, and related uncertainties associated with the quantitative...
Vijayasree, V; Bai, Hebsy; Mathew, Thomas Biju; George, Thomas; Xavier, George; Kumar, N Pratheesh; Visalkumar, S
2014-07-01
Dissipation and decontamination of the semisynthetic macrolide emamectin benzoate and the natural insecticide spinosad on cowpea pods were studied following field application at single and double doses of 11.0 and 22 g ai ha(-1) and 73 and 146 g ai ha(-1), respectively. Residues of these naturalytes were estimated using LC-MS/MS. The initial deposits of 0.073 and 0.153 mg kg(-1) of emamectin benzoate dissipated to below the quantitation level on the fifth and seventh day at the single and double dosage, respectively. For spinosad, the initial deposits of 0.94 and 1.90 mg kg(-1) reached below the quantitation level on the 7th and 15th day at the single and double dosage, respectively. The half-lives of emamectin benzoate and spinosad were 1.13-1.49 and 1.05-1.39 days, with calculated safe waiting periods of 2.99-6.12 and 1.09-3.25 days, respectively, for the single and double dosages. Processing of the harvestable pods with different decontamination techniques removed 33.82 to 100% of emamectin benzoate 2 h after application and 100% 3 days after spraying, while removal of spinosad was 42.05 to 87.46% 2 h after application and 38.05 to 68.08% 3 days after application.
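Half-lives of this kind usually come from a first-order dissipation fit; a minimal sketch with invented residue data (not the study's raw measurements) is given below.

```python
import numpy as np

# Hypothetical residue data (mg/kg) on cowpea pods at days after application
days = np.array([0, 1, 2, 3, 5])
residue = np.array([0.94, 0.58, 0.35, 0.21, 0.08])

# First-order dissipation: C(t) = C0 * exp(-k t), so ln C is linear in t
slope, intercept = np.polyfit(days, np.log(residue), 1)
k = -slope
half_life = np.log(2) / k
print(f"k = {k:.3f} per day, half-life = {half_life:.2f} days")
```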
Murphy, Samantha; Martin, Sally; Parton, Robert G.
2010-01-01
Lipid droplets (LDs) are dynamic cytoplasmic organelles containing neutral lipids and bounded by a phospholipid monolayer. Previous studies have suggested that LDs can undergo constitutive homotypic fusion, a process linked to the inhibitory effects of fatty acids on glucose transporter trafficking. Using strict quantitative criteria for LD fusion together with refined light microscopic methods and real-time analysis, we now show that LDs in diverse cell types show low constitutive fusogenic activity under normal growth conditions. To investigate the possible modulation of LD fusion, we screened for agents that can trigger fusion. A number of pharmacological agents caused homotypic fusion of lipid droplets in a variety of cell types. This provided a novel cell system to study rapid regulated fusion between homotypic phospholipid monolayers. LD fusion involved an initial step in which the two adjacent membranes became continuous (<10 s), followed by the slower merging (100 s) of the neutral lipid cores to produce a single spherical LD. These fusion events were accompanied by changes to the LD surface organization. Measurements of LDs undergoing homotypic fusion showed that fused LDs maintained their initial volume, with a corresponding decrease in surface area suggesting rapid removal of membrane from the fused LD. This study provides estimates for the level of constitutive LD fusion in cells and questions the role of LD fusion in vivo. In addition, it highlights the extent of LD restructuring which occurs when homotypic LD fusion is triggered in a variety of cell types. PMID:21203462
Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.
2017-01-01
The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem, and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can introduce errors into the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM that includes both magnitude- and susceptibility-based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both structure edges and tiny features to be recovered while noise and artifacts are reduced. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating the magnitude prior. In the S-step, the susceptibility map is fitted in the spatial domain using weighted constraints derived from the initial susceptibility map from the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed in multiple head positions, with the supine position most closely approximating the gold-standard COSMOS result. PMID:27019480
Rapid and accurate estimation of release conditions in the javelin throw.
Hubbard, M; Alaways, L W
1989-01-01
We have developed a system to measure initial conditions in the javelin throw rapidly enough to be used by the thrower for feedback in performance improvement. The system consists of three subsystems whose main tasks are: (A) acquisition of automatically digitized high speed (200 Hz) video x, y position data for the first 0.1-0.2 s of the javelin flight after release; (B) estimation of five javelin release conditions from the x, y position data; and (C) graphical presentation to the thrower of these release conditions and a simulation of the subsequent flight, together with optimal conditions and flight for the same release velocity. The estimation scheme relies on a simulation model and is at least an order of magnitude more accurate than previously reported measurements of javelin release conditions. The system provides, for the first time ever in any throwing event, the ability to critique nearly instantly, in a precise quantitative manner, the crucial factors in the throw which determine the range. This should be expected to lead to much greater control and consistency of throwing variables by athletes who use the system and could even lead to an evolution of new throwing techniques.
NASA Astrophysics Data System (ADS)
Omar, Mahmoud A.; Badr El-Din, Khalid M.; Salem, Hesham; Abdelmageed, Osama H.
2018-03-01
A simple, selective and sensitive kinetic spectrophotometric method was described for the estimation of four phenolic sympathomimetic drugs, namely terbutaline sulfate, fenoterol hydrobromide, isoxsuprine hydrochloride and etilefrine hydrochloride. The method is based on the oxidation of the phenolic drugs with Folin-Ciocalteu reagent in the presence of sodium carbonate. The rate of color development at 747-760 nm was measured spectrophotometrically. The experimental parameters controlling the color development were fully studied and optimized, and a reaction mechanism for the color development was proposed. Calibration graphs for both the initial-rate and fixed-time methods were constructed; linear correlations were found in the general concentration ranges of 3.65 × 10⁻⁶-2.19 × 10⁻⁵ mol L⁻¹ and 2-24.0 μg mL⁻¹, with correlation coefficients in the ranges 0.9992-0.9999 and 0.9991-0.9998, respectively. The limits of detection and quantitation for the initial-rate and fixed-time methods were found to be in the general concentration ranges 0.109-0.273, 0.363-0.910 and 0.210-0.483, 0.700-1.611 μg mL⁻¹, respectively. The developed method was validated according to ICH and USP 30-NF 25 guidelines. The suggested method was successfully applied to the estimation of these drugs in their commercial pharmaceutical formulations, and the recovery percentages obtained ranged from 97.63% ± 1.37 to 100.17% ± 0.95 and from 97.29% ± 0.74 to 100.14% ± 0.81 for the initial-rate and fixed-time methods, respectively. The data obtained from the analysis of dosage forms were compared with those obtained by reported methods. Statistical analysis of these results indicated no significant variation in the accuracy and precision of the proposed and reported methods.
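As a hedged illustration of the initial-rate method, the sketch below takes the slope of the early, approximately linear part of each simulated absorbance-time curve and regresses it against concentration to form a calibration line; all kinetic constants and concentrations are invented, not the paper's values.

```python
import numpy as np

def initial_rate(t, absorbance, n_points=5):
    """Slope of the early, approximately linear part of an absorbance-time curve."""
    return np.polyfit(t[:n_points], absorbance[:n_points], 1)[0]

t = np.arange(0, 10, 0.5)                                   # minutes
conc = np.array([4.0, 8.0, 16.0, 24.0])                     # hypothetical drug levels, ug/mL
runs = [0.012 * c * (1 - np.exp(-0.4 * t)) for c in conc]   # simulated colour development

rates = np.array([initial_rate(t, a) for a in runs])
slope, intercept = np.polyfit(conc, rates, 1)               # calibration: rate = slope*C + intercept
print(f"calibration slope {slope:.4f} AU/min per ug/mL, intercept {intercept:.4f}")
```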
Comb-Push Ultrasound Shear Elastography of Breast Masses: Initial Results Show Promise
Song, Pengfei; Fazzio, Robert T.; Pruthi, Sandhya; Whaley, Dana H.; Chen, Shigao; Fatemi, Mostafa
2015-01-01
Purpose or Objective To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. Materials and Methods CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique, which utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Results Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) in comparison to benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass quantified by the Young’s modulus is significantly higher in malignant masses. According to the receiver operating characteristic curve (ROC), the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). Conclusion CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses. PMID:25774978
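CUSE reports shear wave speed, and a stiffness cut-off in kPa maps back to speed through the usual elastography assumption of an incompressible, purely elastic, locally homogeneous medium of density about 1000 kg/m³, for which E = 3ρc²; the sketch below applies this to the group means quoted in the abstract.

```python
RHO = 1000.0          # assumed tissue density, kg/m^3
CUTOFF_KPA = 83.0     # cut-off reported in the abstract

def youngs_modulus_kpa(shear_speed_m_s):
    """E = 3 * rho * c^2 (incompressible, purely elastic medium), returned in kPa."""
    return 3.0 * RHO * shear_speed_m_s ** 2 / 1e3

for label, c in [("benign mean", 3.65), ("malignant mean", 6.0)]:
    e = youngs_modulus_kpa(c)
    call = "suspicious" if e > CUTOFF_KPA else "likely benign"
    print(f"{label}: c = {c} m/s -> E = {e:.0f} kPa -> {call}")
```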
76 FR 13018 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-09
... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
Measurement of lung expansion with computed tomography and comparison with quantitative histology.
Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C
1995-11-01
The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
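The "milliliters of gas per gram of tissue" quantity can be illustrated with the standard CT air/tissue partition sketched below; the exact calibration used in the study may differ, and the assumed tissue density is a placeholder:

```python
# Hedged sketch of the standard CT air/tissue partition (not necessarily the exact
# calibration used in the study): a voxel of attenuation HU (air = -1000 HU,
# tissue ~ 0 HU) is split into air and tissue fractions, giving mL gas per g tissue.
def gas_per_gram(hu, tissue_density_g_ml=1.065):
    """Millilitres of gas per gram of tissue for a voxel with the given HU."""
    hu = max(min(hu, 0.0), -1000.0)          # clamp to the air..tissue range
    air_frac = -hu / 1000.0                  # volume fraction of gas
    tissue_frac = 1.0 - air_frac             # volume fraction of tissue
    return air_frac / (tissue_frac * tissue_density_g_ml)

for hu in (-900, -850, -750, -500):
    print(f"HU = {hu:5d}  ->  {gas_per_gram(hu):5.2f} mL gas / g tissue")
```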
[A new method of processing quantitative PCR data].
Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun
2003-05-01
Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. Through extensive kinetic studies, PE company found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable, and on this basis developed a quantitative PCR technique used in the PE7700 and PE5700 instruments. However, the error of this technique is too large for many applications of biotechnology development and clinical research, and a better quantitative PCR method is needed. The mathematical model presented here draws on results from related fields and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity), the initial template number, and the other reaction conditions, and accurately reflects the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this relation, and the accumulated PCR product quantity can be obtained from the initial template number. When this model is used for quantitative PCR analysis, the result error depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to six digits and the template number is between 100 and 1,000,000, the quantitative accuracy exceeds 99%. Under the same conditions and with the same instrument, different analysis methods give markedly different errors; using the proposed quantitative analysis system to process the data yields results roughly 80 times more accurate than the CT method.
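For context, the conventional standard-curve Ct quantification that the proposed model aims to improve upon can be sketched as follows (all Ct and copy-number values are illustrative; this is not the authors' model):

```python
# Baseline illustration: conventional Ct quantification.  A standard curve
# Ct = a*log10(N0) + b is fitted to dilutions of known template number and then
# inverted for an unknown sample; the slope also gives the amplification efficiency.
import numpy as np

n0_std = np.array([1e2, 1e3, 1e4, 1e5, 1e6])       # known template copies
ct_std = np.array([30.1, 26.8, 23.4, 20.1, 16.7])   # illustrative Ct values

a, b = np.polyfit(np.log10(n0_std), ct_std, 1)
efficiency = 10 ** (-1.0 / a) - 1.0                 # amplification efficiency

ct_unknown = 24.9
n0_unknown = 10 ** ((ct_unknown - b) / a)
print(f"slope = {a:.2f}, efficiency = {efficiency:.2%}, "
      f"estimated initial copies = {n0_unknown:.3g}")
```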
Quantitative estimation of pesticide-likeness for agrochemical discovery.
Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel
2014-12-01
The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In formulating these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides and parameterized specifically for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The quantitative assessment established here can rank compounds whether or not they fail well-established pesticide-likeness rules, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery. Graphical Abstract: Quantitative models for pesticide-likeness were derived using the concept of desirability functions parameterized for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings.
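A schematic of the desirability-function idea is sketched below; the Gaussian shape and all target/tolerance values are placeholders, not the published QEH/QEI/QEF/QEP parameterizations:

```python
# Schematic sketch of a desirability-based likeness score: each property gets a
# desirability in (0, 1] and the overall score is the geometric mean.
import math

# Hypothetical (target, tolerance) pairs for: MW, logP, HBA, HBD, rotatable bonds,
# aromatic rings -- NOT the fitted parameters from the paper.
HYPOTHETICAL_PARAMS = {"MW": (300.0, 120.0), "logP": (2.5, 1.5), "HBA": (4.0, 2.5),
                       "HBD": (1.0, 1.5), "RotB": (5.0, 3.0), "AromRings": (2.0, 1.5)}

def desirability(value, target, tol):
    """Gaussian-shaped desirability: 1 at the target, decaying with distance."""
    return math.exp(-0.5 * ((value - target) / tol) ** 2)

def likeness_score(props):
    ds = [desirability(props[k], *HYPOTHETICAL_PARAMS[k]) for k in HYPOTHETICAL_PARAMS]
    return math.exp(sum(math.log(max(d, 1e-12)) for d in ds) / len(ds))

example = {"MW": 320.0, "logP": 3.1, "HBA": 5, "HBD": 1, "RotB": 6, "AromRings": 2}
print(f"pesticide-likeness score = {likeness_score(example):.3f}")
```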
Increasing the demand for childhood vaccination in developing countries: a systematic review
2009-01-01
Background Attempts to maintain or increase vaccination coverage almost all focus on supply side interventions: improving availability and delivery of vaccines. The effectiveness and cost-effectiveness of efforts to increase demand is uncertain. Methods We performed a systematic review of studies that provided quantitative estimates of the impact of demand side interventions on uptake of routine childhood vaccination. We retrieved studies published up to Sept 2008. Results The initial search retrieved 468 potentially eligible studies, including four systematic reviews and eight original studies of the impact of interventions to increase demand for vaccination. We identified only two randomised controlled trials. Interventions with an impact on vaccination uptake included knowledge translation (KT) (mass media, village resource rooms and community discussions) and non-KT initiatives (incentives, economic empowerment, household visits by extension workers). Most claimed to increase vaccine coverage by 20 to 30%. Estimates of the cost per vaccinated child varied considerably with several in the range of $10-20 per vaccinated child. Conclusion Most studies reviewed here represented a low level of evidence. Mass media campaigns may be effective, but the impact depends on access to media and may be costly if run at a local level. The persistence of positive effects has not been investigated. The economics of demand side interventions have not been adequately assessed, but available data suggest that some may be very cost-effective. PMID:19828063
Human papillomavirus vaccination in Auckland: reducing ethnic and socioeconomic inequities.
Poole, Tracey; Goodyear-Smith, Felicity; Petousis-Harris, Helen; Desmond, Natalie; Exeter, Daniel; Pointon, Leah; Jayasinha, Ranmalie
2012-12-17
The New Zealand HPV publicly funded immunisation programme commenced in September 2008. Delivery through a school-based programme was anticipated to result in higher coverage rates and reduced inequalities compared to vaccination delivered through other settings. The programme provided for ongoing vaccination of girls in year 8, with an initial catch-up programme through general practices for young women born after 1 January 1990 until the end of 2010. The aim was to assess the uptake of the funded HPV vaccine through school-based vaccination programmes in secondary schools and general practices in 2009, and the factors associated with coverage, by database matching. Retrospective quantitative analysis was conducted of secondary anonymised data from the School-Based Vaccination Service and National Immunisation Register databases for female students from secondary schools in the Auckland District Health Board catchment area. Data included student and school demographic and other variables. Binary logistic regression was used to estimate odds ratios and significance for univariables. Multivariable logistic regression estimated the strength of association between individual factors and initiation and completion, adjusted for all other factors. The programme achieved overall coverage of 71.5%, with Pacific girls highest at 88% and Maori at 78%. Girls of higher socioeconomic status were more likely to be vaccinated in general practice. The school-based vaccination service, targeted at ethnic sub-populations, provided equity for Maori and Pacific students, who achieved high levels of vaccination. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Betsofen, S. Ya.; Kolobov, Yu. R.; Volkova, E. F.; Bozhko, S. A.; Voskresenskaya, I. I.
2015-04-01
Quantitative methods have been developed to estimate the anisotropy of the strength properties and to determine the phase composition of Mg-Al alloys. The efficiency of the methods is confirmed for MA5 alloy subjected to severe plastic deformation. It is shown that the Taylor factors calculated for basal slip averaged over all orientations of a polycrystalline aggregate with allowance for texture can be used for a quantitative estimation of the contribution of the texture of semifinished magnesium alloy products to the anisotropy of their strength properties. A technique of determining the composition of a solid solution and the intermetallic phase Al12Mg17 content is developed using the measurement of the lattice parameters of the solid solution and the known dependence of these lattice parameters on the composition.
Quantitative Aging Pattern in Mouse Urine Vapor as Measured by Gas-Liquid Chromatography
NASA Technical Reports Server (NTRS)
Robinson, Arthur B.; Dirren, Henri; Sheets, Alan; Miquel, Jaime; Lundgren, Paul R.
1975-01-01
We have discovered a quantitative aging pattern in mouse urine vapor. The diagnostic power of the pattern has been found to be high. We hope that this pattern will eventually allow quantitative estimates of physiological age and some insight into the biochemistry of aging.
Genomic Quantitative Genetics to Study Evolution in the Wild.
Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin
2017-12-01
Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously only feasible in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of genome shared among individuals in natural populations of virtually any species, which could promise (more) accurate estimates of quantitative genetic parameters in virtually any species. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers a greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example, in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chardon, Jurgen; Swart, Arno
2016-07-01
In the consumer phase of a typical quantitative microbiological risk assessment (QMRA), mathematical equations identify data gaps. To acquire useful data we designed a food consumption and food handling survey (2,226 respondents) for QMRA applications that is especially aimed at obtaining quantitative data. For a broad spectrum of food products, the survey covered the following topics: processing status at retail, consumer storage, preparation, and consumption. Questions were designed to facilitate distribution fitting. In the statistical analysis, special attention was given to the selection of the most adequate distribution to describe the data. Bootstrap procedures were used to describe uncertainty. The final result was a coherent quantitative consumer phase food survey and parameter estimates for food handling and consumption practices in The Netherlands, including variation over individuals and uncertainty estimates.
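The "distribution fitting plus bootstrap" workflow described above might look like the following minimal sketch, here on synthetic storage-time data with a lognormal candidate distribution (both the data and the distributional choice are assumptions for illustration):

```python
# Minimal sketch, not the survey's actual analysis: fit a candidate distribution to
# (synthetic) consumer storage-time data and use a bootstrap to describe uncertainty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
storage_days = rng.lognormal(mean=1.0, sigma=0.6, size=400)   # stand-in survey data

shape, loc, scale = stats.lognorm.fit(storage_days, floc=0)
print(f"point estimate: sigma = {shape:.3f}, median = {scale:.2f} days")

boot_medians = []
for _ in range(1000):
    resample = rng.choice(storage_days, size=storage_days.size, replace=True)
    s, _, sc = stats.lognorm.fit(resample, floc=0)
    boot_medians.append(sc)
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"95% bootstrap interval for the median: {lo:.2f}-{hi:.2f} days")
```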
Application of fracture mechanics to failure in manatee rib bone.
Yan, Jiahau; Clifton, Kari B; Reep, Roger L; Mecholsky, John J
2006-06-01
The Florida manatee (Trichechus manatus latirostris) is listed as endangered by the U.S. Department of the Interior. Manatee ribs have different microstructure from the compact bone of other mammals. Biomechanical properties of the manatee ribs need to be better understood. Fracture toughness (Kc) has been shown to be a good index to assess the mechanical performance of bone. Quantitative fractography can be used in concert with fracture mechanics equations to identify fracture initiating defects/cracks and to calculate the fracture toughness of bone materials. Fractography is a standard technique for analyzing fracture behavior of brittle and quasi-brittle materials. Manatee ribs are highly mineralized and fracture in a manner similar to quasi-brittle materials. Therefore, quantitative fractography was applied to determine the fracture toughness of manatee ribs. Average fracture toughness values of small flexure specimens from six different sizes of manatees ranged from 1.3 to 2.6 MPa·m^(1/2). Scanning electron microscope (SEM) images show most of the fracture origins were at openings for blood vessels and interlayer spaces. Quantitative fractography and fracture mechanics can be combined to estimate the fracture toughness of the material in manatee rib bone. Fracture toughness of subadult and calf manatees appears to increase as the size of the manatee increases. Average fracture toughness of the manatee rib bone materials is less than the transverse fracture toughness of human and bovine tibia and femur.
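One common fractographic relation for quasi-brittle materials, Kc = Y·σ_f·√c with c the equivalent flaw size, can be sketched as below; the geometry factor Y and the stress and flaw dimensions are illustrative assumptions, not measurements from this study:

```python
# Hedged example of a fractographic fracture-mechanics relation commonly used for
# quasi-brittle materials: K_c = Y * sigma_f * sqrt(c), with c = sqrt(a*b) the
# equivalent flaw size at the fracture origin.  Y and the numbers are placeholders.
import math

def fracture_toughness(sigma_f_mpa, depth_a_m, width_b_m, Y=1.24):
    """K_c in MPa*m^0.5 from failure stress and the measured fracture-origin flaw."""
    c = math.sqrt(depth_a_m * width_b_m)      # equivalent semi-elliptical flaw size
    return Y * sigma_f_mpa * math.sqrt(c)

# e.g. a 60 MPa failure stress with a 300 um x 500 um flaw at a vascular opening
print(f"K_c ~ {fracture_toughness(60.0, 300e-6, 500e-6):.2f} MPa*m^0.5")
```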
NASA Astrophysics Data System (ADS)
Vulpiani, Gianfranco; Ripepe, Maurizio
2017-04-01
The detection and quantitative retrieval of ash plumes is of significant interest due to the environmental, climatic, and socioeconomic effects of ash fallout, which might cause hardship and damage in areas surrounding volcanoes and represents a serious hazard to aircraft. Real-time monitoring of such phenomena is crucial for initializing ash dispersion models. Ground-based and space-borne remote sensing observations provide essential information for scientific and operational applications. Satellite visible-infrared radiometric observations from geostationary platforms are usually exploited for long-range trajectory tracking and for measuring low-level eruptions. Their imagery is available every 10-30 min and suffers from a relatively poor spatial resolution. Moreover, the field of view of geostationary radiometric measurements may be blocked by water and ice clouds at higher levels and the observations' overall utility is reduced at night. Ground-based microwave weather radars may represent an important tool for detecting and, to a certain extent, mitigating the hazards presented by ash clouds. The possibility of monitoring in all weather conditions at a fairly high spatial resolution (less than a few hundred meters) and every few minutes after the eruption is the major advantage of using ground-based microwave radar systems. Ground-based weather radar systems can also provide data for estimating the ash volume, total mass, and height of eruption clouds. Previous methodological studies have investigated the possibility of using ground-based single- and dual-polarization radar systems for the remote sensing of volcanic ash clouds. In the present work, the methodology was revised to overcome some limitations related to the assumed microphysics. New scattering simulations based on the T-matrix solution technique were used to set up the parametric algorithms adopted to estimate the mass concentration and ash mean diameter. Furthermore, because quantitative estimation of the erupted materials in the proximity of the volcano's vent is crucial for initializing transportation models, a novel methodology for estimating a volcano eruption's mass discharge rate based on the combination of radar and a thermal camera was developed. We show how it is possible to calculate the mass flow using radar-derived ash concentration and particle diameter at the base of the eruption column together with the exit velocity estimated by the thermal camera. The proposed procedure was tested on four Etna eruption episodes that occurred in December 2015 as observed by the available network of C and X band radar systems. The results are congruent with other independent methodologies and observations. The agreement between the total erupted mass derived from the retrieved MDR and the plume concentration can be considered as a self-consistent methodological assessment. Interestingly, the analysis of the polarimetric radar observations allowed us to derive some features of the ash plume, including the size of the eruption column and the height of the gas thrust region.
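The combined radar/thermal-camera mass discharge rate estimate can be caricatured as concentration × exit velocity × cross-sectional area; the sketch below uses placeholder values and is a simplification of the paper's methodology, not a reproduction of it:

```python
# Simplified illustration: the mass discharge rate at the column base is
# approximated as concentration x exit velocity x cross-sectional area, with radar
# providing the ash concentration and the thermal camera the exit velocity.
# All numbers below are placeholders.
import math

def mass_discharge_rate(ash_conc_kg_m3, exit_velocity_m_s, column_radius_m):
    """MDR in kg/s from radar-derived concentration and camera-derived velocity."""
    area = math.pi * column_radius_m ** 2
    return ash_conc_kg_m3 * exit_velocity_m_s * area

mdr = mass_discharge_rate(ash_conc_kg_m3=5e-3, exit_velocity_m_s=120.0, column_radius_m=100.0)
print(f"MDR ~ {mdr:.3g} kg/s (~{mdr * 3600:.3g} kg per hour at this rate)")
```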
Challenge in Enhancing the Teaching and Learning of Variable Measurements in Quantitative Research
ERIC Educational Resources Information Center
Kee, Chang Peng; Osman, Kamisah; Ahmad, Fauziah
2013-01-01
Statistical analysis is one component that cannot be avoided in a quantitative research. Initial observations noted that students in higher education institution faced difficulty analysing quantitative data which were attributed to the confusions of various variable measurements. This paper aims to compare the outcomes of two approaches applied in…
ERIC Educational Resources Information Center
Feinstein, Leon
The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975-1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…
Quantitative analysis of SMEX'02 AIRSAR data for soil moisture inversion
NASA Technical Reports Server (NTRS)
Zyl, J. J. van; Njoku, E.; Jackson, T.
2003-01-01
This paper discusses in detail the characteristics of the AIRSAR data acquired, and provides an initial quantitative assessment of the accuracy of the radar inversion algorithms under these vegetated conditions.
Repeatability Assessment by ISO 11843-7 in Quantitative HPLC for Herbal Medicines.
Chen, Liangmian; Kotani, Akira; Hakamata, Hideki; Tsutsumi, Risa; Hayashi, Yuzuru; Wang, Zhimin; Kusu, Fumiyo
2015-01-01
We have proposed an assessment method to estimate the measurement relative standard deviation (RSD) of chromatographic peaks in quantitative HPLC for herbal medicines using the methodology of ISO 11843 Part 7 (ISO 11843-7:2012), which provides detection limits stochastically. In quantitative HPLC with UV detection (HPLC-UV) of Scutellaria Radix for the determination of baicalin, the measurement RSD of baicalin obtained stochastically by ISO 11843-7:2012 was within the 95% confidence interval of the RSD obtained statistically from repeated measurements (n = 6). Thus, our findings show that the approach is applicable for estimating the repeatability of HPLC-UV determination of baicalin without repeated measurements. In addition, the allowable limit of the "System repeatability" in "Liquid Chromatography" regulated in a pharmacopoeia can be obtained by the present assessment method. Moreover, the present assessment method was also successfully applied to estimate the measurement RSDs of quantitative three-channel liquid chromatography with electrochemical detection (LC-3ECD) of Chrysanthemi Flos for determining caffeoylquinic acids and flavonoids. With the present repeatability assessment method, reliable measurement RSDs were obtained stochastically, and the experimental time was markedly reduced.
Ortel, Terry W.; Spies, Ryan R.
2015-11-19
Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhandari, Deepak; Kertesz, Vilmos; Van Berkel, Gary J
RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 μL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least 2 spiked and 1 non-spiked sample extracts were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection for FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied for the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.
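The method of standard addition used for quantitation can be sketched as follows (responses and spike levels are illustrative, not the SRM 3280 measurements): the native concentration is read from the magnitude of the x-intercept of the spiked-response line.

```python
# Sketch of the method of standard addition: the extract is measured unspiked and
# with known added amounts of analyte; the fitted line's x-intercept magnitude
# gives the native concentration in the extract.  Values are illustrative.
import numpy as np

added_ng_ml = np.array([0.0, 50.0, 100.0])            # spiked analyte levels
response = np.array([1.20e4, 2.05e4, 2.93e4])         # ESI-MS/MS peak responses

slope, intercept = np.polyfit(added_ng_ml, response, 1)
native_conc = intercept / slope                        # |x-intercept| of the fit
print(f"estimated native concentration ~ {native_conc:.1f} ng/mL of extract")
```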
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is adopted as a quantitative metric for mapping landslide triggering hazard over a catchment. The most commonly applied approach to estimate such a return period consists in coupling a physically based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose the use of a Monte Carlo simulation approach in order to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. With this methodology a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of the transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links the initial conditions of a given event to the final response of the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. A set of additional simulations is then performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and that of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may in practice lead to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
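A toy version of the Monte Carlo workflow (synthetic storm arrivals fed to a simplified stability model, with the return period taken as the mean inter-arrival time of a factor of safety below one) is sketched below; it stands in for, and is much simpler than, the Neyman-Scott generator and TRIGRS v.2 used in the study, and every parameter is an illustrative assumption:

```python
# Schematic Monte Carlo sketch (not TRIGRS or the Neyman-Scott generator): storms
# arrive as a Poisson process, a toy stability model converts each storm to a
# factor of safety, and the return period of triggering is estimated as the total
# simulated time divided by the number of FS < 1 events.
import numpy as np

rng = np.random.default_rng(42)
years, storms_per_year = 1000, 40

def factor_of_safety(depth_mm, duration_h, antecedent_wetness):
    """Toy FS: wetter initial states and more intense storms lower stability."""
    intensity = depth_mm / duration_h
    pore_pressure_proxy = 0.02 * intensity + 0.9 * antecedent_wetness
    return 1.8 - pore_pressure_proxy          # FS < 1 means triggering

n_storms = rng.poisson(storms_per_year * years)
depth = rng.gamma(shape=1.2, scale=20.0, size=n_storms)        # mm per storm
duration = rng.gamma(shape=2.0, scale=6.0, size=n_storms)      # hours per storm
wetness = rng.beta(2.0, 5.0, size=n_storms)                    # antecedent state

fs = factor_of_safety(depth, duration, wetness)
n_triggers = int(np.sum(fs < 1.0))
print(f"{n_triggers} triggering storms in {years} years "
      f"-> return period ~ {years / max(n_triggers, 1):.0f} years")
```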
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
Does Homework Really Matter for College Students in Quantitatively-Based Courses?
ERIC Educational Resources Information Center
Young, Nichole; Dollman, Amanda; Angel, N. Faye
2016-01-01
This investigation was initiated by two students in an Advanced Computer Applications course. They sought to examine the influence of graded homework on final grades in quantitatively-based business courses. They were provided with data from three quantitatively-based core business courses over a period of five years for a total of 10 semesters of…
Code of Federal Regulations, 2013 CFR
2013-07-01
... organisms where higher doses or concentrations resulted in an adverse effect. Quantitative structure... probable or possible human carcinogen, when, because of major qualitative or quantitative limitations, the... quantitative risk assessment, but for which data are inadequate for Tier I criterion development due to a tumor...
Code of Federal Regulations, 2014 CFR
2014-07-01
... organisms where higher doses or concentrations resulted in an adverse effect. Quantitative structure... probable or possible human carcinogen, when, because of major qualitative or quantitative limitations, the... quantitative risk assessment, but for which data are inadequate for Tier I criterion development due to a tumor...
Code of Federal Regulations, 2012 CFR
2012-07-01
... organisms where higher doses or concentrations resulted in an adverse effect. Quantitative structure... probable or possible human carcinogen, when, because of major qualitative or quantitative limitations, the... quantitative risk assessment, but for which data are inadequate for Tier I criterion development due to a tumor...
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
Voronovskaja's theorem revisited
NASA Astrophysics Data System (ADS)
Tachev, Gancho T.
2008-07-01
We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators obtained by H. Gonska, P. Pitul and I. Rasa in 2006.
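For reference, the classical qualitative statement that such quantitative variants refine is Voronovskaja's theorem for the Bernstein operators B_n, which under the usual C²[0,1] assumption reads:

```latex
% Classical Voronovskaja theorem for the Bernstein operators B_n, f in C^2[0,1]
\lim_{n\to\infty} n\,\bigl(B_n(f;x)-f(x)\bigr) \;=\; \frac{x(1-x)}{2}\,f''(x),
\qquad x\in[0,1].
```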
iTRAQ-Based Quantitative Proteomic Analysis of the Initiation of Head Regeneration in Planarians.
Geng, Xiaofang; Wang, Gaiping; Qin, Yanli; Zang, Xiayan; Li, Pengfei; Geng, Zhi; Xue, Deming; Dong, Zimei; Ma, Kexue; Chen, Guangwen; Xu, Cunshuan
2015-01-01
The planarian Dugesia japonica has an amazing ability to regenerate a head from the anterior end of the amputated stump while maintaining the original anterior-posterior polarity. Planarians present an attractive system for molecular investigation of regeneration, and research has focused on clarifying the molecular mechanism of regeneration initiation at the transcriptional level; however, the initiation mechanism of planarian head regeneration (PHR) remains unclear at the protein level. Here, a global analysis of proteome dynamics during the early stage of PHR was performed using an isobaric tags for relative and absolute quantitation (iTRAQ)-based quantitative proteomics strategy, and our data are available via ProteomeXchange with identifier PXD002100. The results showed that 162 proteins were differentially expressed at 2 h and 6 h following amputation. Furthermore, analysis of the expression patterns and functional enrichment of the differentially expressed proteins showed that proteins involved in muscle contraction, oxidation-reduction and protein synthesis were up-regulated in the initiation of PHR. Moreover, ingenuity pathway analysis showed that predominant signaling pathways such as ILK, calcium, EIF2 and mTOR signaling, which are associated with cell migration, cell proliferation and protein synthesis, were likely to be involved in the initiation of PHR. The results demonstrated for the first time, at the global protein level, that muscle contraction and ILK signaling might play important roles in the initiation of PHR. The findings of this research provide a molecular basis for further unraveling the mechanism of head regeneration initiation in planarians.
National-level long-term eruption forecasts by expert elicitation
NASA Astrophysics Data System (ADS)
Bebbington, Mark S.; Stirling, Mark W.; Cronin, Shane; Wang, Ting; Jolly, Gill
2018-06-01
Volcanic hazard estimation is becoming increasingly quantitative, creating the potential for land-use decisions and engineering design to use volcanic information in an analogous manner to seismic codes. The initial requirement is to characterize the possible hazard sources, quantifying the likely timing, magnitude and location of the next eruption in each case. This is complicated by the extremely different driving processes at individual volcanoes, and incomplete and uneven records of past activity at various volcanoes. To address these issues, we carried out an expert elicitation approach to estimate future eruption potential for 12 volcanoes of interest in New Zealand. A total of 28 New Zealand experts provided estimates that were combined using Cooke's classical method to arrive at a hazard estimate. In 11 of the 12 cases, the elicited eruption duration increased with VEI, and was correlated with expected repose, differing little between volcanoes. Most of the andesitic volcanoes had very similar elicited distributions for the VEI of a future eruption, except that Taranaki was expected to produce a larger eruption, due to the current long repose. Elicited future vent locations for Tongariro and Okataina reflect strongly the most recent eruptions. In the poorly studied Bay of Islands volcanic field, the estimated vent location distribution was centred on the centroid of the previous vent locations, while in the Auckland field, it was focused on regions within the field without past eruptions. The elicited median dates for the next eruptions ranged from AD2022 (Whakaari/White Island) to AD4390 (Tuhua/Mayor Island).
Control of storage-protein synthesis during seed development in pea (Pisum sativum L.).
Gatehouse, J A; Evans, I M; Bown, D; Croy, R R; Boulter, D
1982-01-01
The tissue-specific syntheses of seed storage proteins in the cotyledons of developing pea (Pisum sativum L.) seeds have been demonstrated by estimates of their qualitative and quantitative accumulation by sodium dodecyl sulphate/polyacrylamide-gel electrophoresis and rocket immunoelectrophoresis respectively. Vicilin-fraction proteins initially accumulated faster than legumin, but whereas legumin was accumulated throughout development, different components of the vicilin fraction had their predominant periods of synthesis at different stages of development. The translation products in vitro of polysomes isolated from cotyledons at different stages of development reflected the synthesis in vivo of storage-protein polypeptides at corresponding times. The levels of storage-protein mRNA species during development were estimated by 'Northern' hybridization using cloned complementary-DNA probes. This technique showed that the levels of legumin and vicilin (47000-Mr precursors) mRNA species increased and decreased in agreement with estimated rates of synthesis of the respective polypeptides. The relative amounts of these messages, estimated by kinetic hybridization were also consistent. Legumin mRNA was present in leaf poly(A)+ RNA at less than one-thousandth of the level in cotyledon poly(A)+ (polyadenylated) RNA, demonstrating tissue-specific expression. Evidence is presented that storage-protein mRNA species are relatively long-lived, and it is suggested that storage-protein synthesis is regulated primarily at the transcriptional level. PMID:6897609
NASA Astrophysics Data System (ADS)
Yoon, Heechul; Lee, Hyuntaek; Jung, Haekyung; Lee, Mi-Young; Won, Hye-Sung
2015-03-01
The objective of this paper is to introduce a novel method for nuchal translucency (NT) boundary detection and thickness measurement; NT is one of the most significant markers in the early screening of chromosomal defects such as Down syndrome. To improve the reliability and reproducibility of NT measurements, several automated methods have been introduced. However, their performance degrades when the NT borders are tilted due to varying fetal movements. Therefore, we propose a principal-direction-estimation-based NT measurement method that provides reliable and consistent performance regardless of both fetal position and NT direction. First, the Radon transform and a cost function are used to estimate the principal direction of the NT borders. Then, along the estimated angle bin, i.e., the main direction of the NT, gradient-based features are employed to find initial NT lines, which serve as starting points for the active contour fitting that locates the true NT borders. Finally, the maximum thickness is measured from the distances between the upper and lower NT borders by searching along lines orthogonal to the main NT direction. To evaluate the performance, 89 in vivo fetal images were collected, and the ground-truth database was measured by clinical experts. Quantitative results using intraclass correlation coefficients and difference analysis verify that the proposed method can improve the reliability and reproducibility of maximum NT thickness measurement.
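The principal-direction step can be illustrated with a Radon-transform sketch on a synthetic tilted band (the paper additionally uses a cost function and clinical ultrasound data); taking the dominant angle as the projection angle with maximum sinogram variance is an assumption of this toy example:

```python
# Hedged sketch of principal-direction estimation via the Radon transform: a
# synthetic bright band tilted ~20 degrees from horizontal stands in for the NT
# region, and the dominant angle is read off the sinogram's variance profile.
import numpy as np
from skimage.transform import radon

size = 128
yy, xx = np.mgrid[0:size, 0:size]
tilt = np.deg2rad(20.0)
band = np.abs((yy - size / 2) * np.cos(tilt) - (xx - size / 2) * np.sin(tilt)) < 3
image = band.astype(float)

theta = np.arange(0.0, 180.0, 1.0)
sinogram = radon(image, theta=theta, circle=False)
principal_angle = theta[np.argmax(sinogram.var(axis=0))]
print(f"estimated principal direction ~ {principal_angle:.0f} degrees (skimage angle convention)")
```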
Quantifying the transmission potential of pandemic influenza
NASA Astrophysics Data System (ADS)
Chowell, Gerardo; Nishiura, Hiroshi
2008-03-01
This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data is readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation method which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering firm understanding of the definition of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions to suggest potential future methodological improvements.
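One of the reviewed estimators, relating the intrinsic growth rate to the generation-time distribution via R0 = 1/M(-r), reduces to a closed form for a gamma-distributed generation interval; the growth rate and generation-interval values below are illustrative only:

```python
# Hedged illustration of a growth-rate-based estimator: the basic reproduction
# number from the intrinsic exponential growth rate r and a gamma-distributed
# generation time with mean Tg and shape k, R0 = 1/M(-r) = (1 + r*Tg/k)**k.
def r0_from_growth_rate(r_per_day, mean_gen_time_days, shape_k):
    """R0 for a gamma generation interval with given mean and shape."""
    return (1.0 + r_per_day * mean_gen_time_days / shape_k) ** shape_k

r = 0.2            # per day, taken from the initial epidemic growth phase
for k in (1.0, 3.0, 10.0):
    print(f"Tg = 3 d, shape k = {k:4.1f}:  R0 = {r0_from_growth_rate(r, 3.0, k):.2f}")
```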
Eskelin, K; Suntio, T; Hyvärinen, S; Hafren, A; Mäkinen, K
2010-03-01
A quantitation method based on the sensitive detection of Renilla luciferase (Rluc) activity was developed and optimized for Potato virus A (PVA; family Potyviridae) gene expression. This system is based on infections initiated by Agrobacterium infiltration and subsequent detection of the translation of PVA::Rluc RNA, which is enhanced by viral replication, first within the cells infected initially and later by translation and replication within new cells after spread of the virus. Firefly luciferase (Fluc) was used as an internal control to normalize the Rluc activity. An approximately 10-fold difference in the Rluc/Fluc activity ratio between a movement-deficient and a replication-deficient mutant was observed starting from 48 h post Agrobacterium infiltration (h.p.i.). The Rluc activity derived from wild type (wt) PVA increased significantly between 48 and 72 h.p.i., and the Rluc/Fluc activity deviated clearly from that of the mutant viruses. Quantitation of the Rluc and Fluc mRNAs by semi-quantitative RT-PCR indicated that increases and decreases in the Renilla reniformis luciferase (rluc) mRNA levels coincided with changes in Rluc activity. However, a subtle increase in the mRNA level led to pronounced changes in Rluc activity. PVA CP accumulation was quantitated by enzyme-linked immunosorbent assay. The increase in Rluc activity correlated closely with virus accumulation. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Biological monitoring of Upper Three Runs Creek, Savannah River Plant, Aiken County, South Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, W.L.
1991-10-01
In anticipation of the fall 1988 startup of effluent discharges into Upper Three Runs Creek by the F/H Area Effluent Treatment Facility of the Savannah River Site, Aiken, SC, a two-and-one-half-year biological study was initiated in June 1987. Upper Three Runs Creek is an intensively studied fourth-order stream known for its high species richness. Designed to assess the potential impact of F/H Area effluent on the creek, the study includes qualitative and quantitative macroinvertebrate stream surveys at five sites, chronic toxicity testing of the effluent, water chemistry, and bioaccumulation analysis. This final report presents the results of both pre-operational and post-operational qualitative and quantitative (artificial substrate) macroinvertebrate studies. Six quantitative and three qualitative studies were conducted prior to the initial release of the F/H ETF effluent, and five quantitative and two qualitative studies were conducted post-operationally.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a nonlinear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
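A minimal sketch of the discrete multi-exponential fit is shown below as a bi-exponential non-linear least-squares problem; the hand-picked initial guess stands in for the total least-squares linear-prediction initialization used in the paper, and the echo times, T2 values and noise level are assumptions:

```python
# Minimal sketch of a discrete multi-exponential (here bi-exponential) fit to a
# synthetic T2 decay curve; initial values are chosen by hand for illustration.
import numpy as np
from scipy.optimize import curve_fit

te = np.arange(10.0, 330.0, 10.0)                    # echo times, ms

def biexp(t, a1, t2_1, a2, t2_2):
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

rng = np.random.default_rng(3)
signal = biexp(te, 0.25, 20.0, 0.75, 90.0) + rng.normal(0, 0.005, te.size)

p0 = (0.3, 15.0, 0.7, 80.0)                          # crude initial estimates
popt, _ = curve_fit(biexp, te, signal, p0=p0, maxfev=5000)
print("fitted (a1, T2_1, a2, T2_2):", np.round(popt, 2))
```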
Influence of multiple scattering on CloudSat measurements in snow: A model study
NASA Astrophysics Data System (ADS)
Matrosov, Sergey Y.; Battaglia, Alessandro
2009-06-01
The effects of multiple scattering on larger precipitating hydrometers have an influence on measurements of the spaceborne W-band (94 GHz) CloudSat radar. This study presents initial quantitative estimates of these effects in “dry” snow using radiative transfer calculations for appropriate snowfall models. It is shown that these effects become significant (i.e., greater than approximately 1 dB) when snowfall radar reflectivity factors are greater than about 10-15 dBZ. Reflectivity enhancement due to multiple scattering can reach 4-5 dB in heavier stratiform snowfalls. Multiple scattering effects counteract signal attenuation, so the observed CloudSat reflectivity factors in snowfall could be relatively close to the values that would be observed in the case of single scattering and the absence of attenuation.
An Eigenvalue Analysis of finite-difference approximations for hyperbolic IBVPs
NASA Technical Reports Server (NTRS)
Warming, Robert F.; Beam, Richard M.
1989-01-01
The eigenvalue spectrum associated with a linear finite-difference approximation plays a crucial role in the stability analysis and in the actual computational performance of the discrete approximation. The eigenvalue spectrum associated with the Lax-Wendroff scheme applied to a model hyperbolic equation was investigated. For an initial-boundary-value problem (IBVP) on a finite domain, the eigenvalue or normal mode analysis is analytically intractable. A study of auxiliary problems (Dirichlet and quarter-plane) leads to asymptotic estimates of the eigenvalue spectrum and to an identification of individual modes as either benign or unstable. The asymptotic analysis establishes an intuitive as well as quantitative connection between the algebraic tests in the theory of Gustafsson, Kreiss, and Sundstrom and Lax-Richtmyer L(sub 2) stability on a finite domain.
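A small numerical sketch of the kind of spectrum involved: the Lax-Wendroff update matrix for u_t + a u_x = 0 on a finite grid with zero Dirichlet data at both ends (one of the auxiliary problems mentioned above) can be built explicitly and its eigenvalues computed directly; the grid size and Courant numbers below are arbitrary choices, not values from the paper:

```python
# Eigenvalues of the Lax-Wendroff update matrix for the Dirichlet auxiliary
# problem; nu = a*dt/dx is the Courant number.
import numpy as np

def lax_wendroff_matrix(n, nu):
    """Interior update matrix of the Lax-Wendroff scheme on n grid points."""
    A = np.zeros((n, n))
    lower, diag, upper = nu / 2 + nu**2 / 2, 1 - nu**2, -nu / 2 + nu**2 / 2
    for j in range(n):
        A[j, j] = diag
        if j > 0:
            A[j, j - 1] = lower
        if j < n - 1:
            A[j, j + 1] = upper
    return A

for nu in (0.5, 0.9, 1.1):
    eigs = np.linalg.eigvals(lax_wendroff_matrix(200, nu))
    print(f"nu = {nu}: spectral radius = {np.abs(eigs).max():.4f}")
```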
NASA Technical Reports Server (NTRS)
1985-01-01
Qualitative analyses (and, to the extent possible, quantitative analyses) of the influence of terrain features on wind loading of the space shuttle while on the launch pad, or during early liftoff, are presented. Initially, the climatology and meteorology producing macroscale wind patterns and characteristics for the Vandenberg Air Force Base (VAFB) launch site are described. Limited field test data are also analyzed, and the nature and characteristics of flow disturbances due to the various terrain features, both natural and man-made, are then reviewed. Following this, the magnitude of these wind loads is estimated. Finally, effects of turbulence are discussed. The study concludes that the influence of complex terrain can create significant wind loading on the vehicle. Because of the limited information, it is not possible to quantify the magnitude of these loads.
Convergence and attractivity of memristor-based cellular neural networks with time delays.
Qin, Sitian; Wang, Jun; Xue, Xiaoping
2015-03-01
This paper presents theoretical results on the convergence and attractivity of memristor-based cellular neural networks (MCNNs) with time delays. Based on a realistic memristor model, an MCNN is modeled using a differential inclusion. The essential boundedness of its global solutions is proven. The state of MCNNs is further proven to be convergent to a critical-point set located in saturated region of the activation function, when the initial state locates in a saturated region. It is shown that the state convergence time period is finite and can be quantitatively estimated using given parameters. Furthermore, the positive invariance and attractivity of state in non-saturated regions are also proven. The simulation results of several numerical examples are provided to substantiate the results. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cost effectiveness of the Oregon quitline "free patch initiative".
Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John
2007-12-01
We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-intervention programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with dynamic-mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement but slower than the recursive technique. The parameters of the equivalent system are used to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided for quantitative estimation of material properties.
Improving the quantification of contrast enhanced ultrasound using a Bayesian approach
NASA Astrophysics Data System (ADS)
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-03-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity that can be useful in the quantification of different perfusion patterns. This can be particularly important in the early detection and staging of arthritis. In a recent study we have shown that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. Moreover, we have shown that the quantitative information gathered through a pixel-by-pixel analysis characterizes the perfusion more effectively. However, the SNR of the data and the nonlinearity of the model make the parameter estimation difficult. With the classical nonlinear least-squares (NLLS) approach the number of unreliable estimates (those with an asymptotic coefficient of variation greater than a user-defined threshold) is significant, affecting the overall description of the perfusion kinetics and of its heterogeneity. In this work we propose to solve the parameter estimation at the pixel level within a Bayesian framework using Variational Bayes (VB) and an automatic, data-driven prior initialization. When evaluating the pixels for which both VB and NLLS provided reliable estimates, we demonstrated that the parameter values provided by the two methods are well correlated (Pearson's correlation between 0.85 and 0.99). Moreover, the mean percentage of unreliable pixels drops drastically from 54% (NLLS) to 26% (VB) without increasing the computational time (0.05 s/pixel for NLLS and 0.07 s/pixel for VB). When the efficiency of the algorithms is considered as computational time per reliable estimate, VB outperforms NLLS (0.11 versus 0.25 seconds per reliable estimate, respectively).
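A gamma-variate fit to a single pixel's time-intensity curve, with the per-parameter asymptotic coefficient of variation used as the reliability criterion, can be sketched as below; this uses plain NLLS rather than the Variational Bayes scheme of the paper, and all values are synthetic:

```python
# Hedged sketch: fit a gamma-variate C(t) = A*(t-t0)^alpha * exp(-(t-t0)/beta) to a
# synthetic time-intensity curve and report the asymptotic CV of each parameter.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 1e-9, None)
    return A * dt ** alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)                                  # seconds
rng = np.random.default_rng(7)
curve = gamma_variate(t, 2.0, 5.0, 1.5, 6.0) + rng.normal(0, 0.5, t.size)

p0 = (1.0, 4.0, 1.0, 5.0)
popt, pcov = curve_fit(gamma_variate, t, curve, p0=p0, maxfev=10000)
cv = np.sqrt(np.diag(pcov)) / np.abs(popt)                   # asymptotic CV per parameter
print("fit (A, t0, alpha, beta):", np.round(popt, 2), " CV:", np.round(cv, 3))
```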
Green initiative impact on stock prices: A quantitative study of the clean energy industry
NASA Astrophysics Data System (ADS)
Jurisich, John M.
The purpose of this quantitative ex post facto research study was to explore the relationship between green initiative expense disclosures and stock prices of 46 NASDAQ listed Clean Edge Green Energy global companies from 2007 to 2010. The independent variables were sales and marketing, environmental, customer and supplier, community, and corporate governance practices that were correlated with the dependent variable in the study of stock prices. Expense disclosures were examined in an effort to measure the impact of green initiative programs and to expose the interrelationships between green initiative expense disclosures and fluctuations of stock prices. The data for the research was secondary data from existing annual reports. A statistically significant relationship was revealed between environmental practices and changes in stock prices. The study results also provided substantial evidence for leadership and managerial decision making to reduce or increase green initiative practices to maximize shareholder wealth of their respective organizations.
Evaluation of NU-WRF Rainfall Forecasts for IFloodS
NASA Technical Reports Server (NTRS)
Wu, Di; Peters-Lidard, Christa; Tao, Wei-Kuo; Petersen, Walter
2016-01-01
The Iowa Flood Studies (IFloodS) campaign was conducted in eastern Iowa as a pre-GPM-launch campaign from 1 May to 15 June 2013. During the campaign period, real-time forecasts were produced with the NASA-Unified Weather Research and Forecasting (NU-WRF) model to support the daily weather briefing. In this study, two sets of NU-WRF rainfall forecasts are evaluated against Stage IV and Multi-Radar Multi-Sensor (MRMS) Quantitative Precipitation Estimation (QPE), with the objective of understanding the impact of land surface initialization on the predicted precipitation. NU-WRF is also compared with the North American Mesoscale Forecast System (NAM) 12-kilometer forecast. In general, NU-WRF captured individual precipitation events well and reproduced a better rainfall spatial distribution than NAM. Further sensitivity tests show that high resolution has a positive impact on the rainfall forecast. The two sets of NU-WRF simulations produce very similar rainfall characteristics: land surface initialization does not show a significant impact on the short-term rainfall forecast, largely because of the soil conditions during the field campaign period.
NASA Technical Reports Server (NTRS)
Niki, Hiromi
1990-01-01
Tropospheric chemical transformations of alternative hydrofluorocarbons (HFC's) and hydrochlorofluorocarbons (HCFC's) are governed by hydroxyl-radical-initiated oxidation processes, which are likely to be analogous to those known for alkanes and chloroalkanes. A schematic diagram is used to illustrate plausible reaction mechanisms for their atmospheric degradation, where R, R', and R'' denote the F- and/or Cl-substituted alkyl groups derived from HFC's and HCFC's subsequent to the initial H atom abstraction by HO radicals. At present, virtually no kinetic data exist for the majority of these reactions, particularly for those involving RO. Potential degradation intermediates and final products include a large variety of fluorine- and/or chlorine-containing carbonyls, acids, peroxy acids, alcohols, hydrogen peroxides, nitrates and peroxy nitrates, as summarized in the attached table. Probable atmospheric lifetimes of these compounds were also estimated. For some carbonyl and nitrate products shown in this table, there seem to be no significant gas-phase removal mechanisms. Further chemical kinetics and photochemical data are needed to quantitatively assess the atmospheric fate of HFC's and HCFC's, and of the degradation products postulated in this report.
NASA Astrophysics Data System (ADS)
Tsanakas, John A.; Jaffre, Damien; Sicre, Mathieu; Elouamari, Rachid; Vossier, Alexis; de Salins, Jean-Edouard; Bechou, Laurent; Levrier, Bruno; Perona, Arnaud; Dollet, Alain
2014-09-01
This paper presents a preliminary study of a novel approach proposed for highly accelerated ageing and reliability optimization of high-concentrating photovoltaic (HCPV) cells and assemblies. The intended approach aims to overcome several limitations of current accelerated ageing tests (AAT) adopted to date, proposing the use of an alternative experimental set-up for performing faster and more realistic thermal cycles, under real sun, without the involvement of an environmental chamber. The study also includes specific characterization techniques, before and after each AAT sequence, which respectively provide the initial and final diagnosis of the condition of the tested sample. The acquired data from these diagnostic/characterization methods are then used as indices to determine, both quantitatively and qualitatively, the severity of degradation and thus the ageing level of each tested HCPV assembly or cell sample. The ultimate goal of such "initial diagnosis - AAT - final diagnosis" sequences is to provide the basis for future work on the reliability analysis of the main degradation mechanisms and confident prediction of failure propagation in HCPV cells, by means of acceleration factor (AF) and mean-time-to-failure (MTTF) estimations.
Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D
2012-01-01
Purpose To quantitatively evaluate kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations by DFAs produced remarkable kinetic parameter estimation deviations in head and neck tissues. In particular, the DFA of [2°, 7°] overestimated, while [7°, 12°] and [7°, 15°] underestimated, Ktrans and vp significantly (P<0.01). [2°, 15°] achieved the smallest, but still statistically significant, overestimation of Ktrans and vp in primary tumors, 32.1% and 16.2% respectively. kep fitting results by DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA could result in significant errors in kinetic parameter estimation, particularly Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in the head and neck. PMID:23289084
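As a rough illustration of why the choice of flip-angle pair matters, the sketch below implements the standard spoiled-gradient-echo (SPGR) dual-flip-angle T1 calculation and shows how an unmodelled flip-angle (B1) scaling biases different pairs differently; all tissue, sequence, and error values are hypothetical, and the clinical pipeline of the study is not reproduced.

```python
import numpy as np

def spgr_signal(t1, m0, tr, alpha_deg):
    """Spoiled gradient-echo signal used in variable-flip-angle T1 mapping."""
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def t1_dual_flip_angle(s1, s2, a1_deg, a2_deg, tr):
    """DFA T1 estimate from the linearised SPGR relation
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), solved from two flip angles."""
    a1, a2 = np.deg2rad(a1_deg), np.deg2rad(a2_deg)
    e1 = (s2 / np.sin(a2) - s1 / np.sin(a1)) / (s2 / np.tan(a2) - s1 / np.tan(a1))
    return -tr / np.log(e1)

# Hypothetical tissue and sequence settings; a +10% flip-angle (B1) error is applied
# when "acquiring" the signals but ignored in the fit, so each angle pair is biased
# differently -- the kind of pair-dependent T1 error that propagates into Ktrans/vp.
tr, t1_true, m0, b1 = 5.0, 900.0, 1000.0, 1.10
for a1, a2 in [(2, 7), (2, 15), (7, 12), (7, 15)]:
    s1 = spgr_signal(t1_true, m0, tr, b1 * a1)
    s2 = spgr_signal(t1_true, m0, tr, b1 * a2)
    print((a1, a2), round(t1_dual_flip_angle(s1, s2, a1, a2, tr), 1))
```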
Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong
2018-01-01
The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of internal reference line and the optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from analytical lines by a programmable procedure through easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement compared with the classical CF-LIBS method and the promising potential of in situ and real-time application.
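A minimal sketch of the Boltzmann-plot temperature step that underlies the method; the PSO refinement and self-absorption correction described above are not reproduced, and the line data are synthetic and purely illustrative.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g_k, a_ki, e_k_ev):
    """Plasma temperature from several emission lines of one species, assuming an
    optically thin plasma in LTE: ln(I*lambda/(g*A)) = -E_k/(k_B*T) + const, so the
    slope of a straight-line fit over upper-level energies E_k gives T."""
    y = np.log(intensity * wavelength_nm / (g_k * a_ki))
    slope, _ = np.polyfit(e_k_ev, y, 1)
    return -1.0 / (K_B_EV * slope)

# Synthetic line set generated at a known temperature (all values illustrative)
lam = np.array([371.99, 385.99, 404.58, 438.35])  # nm
g   = np.array([11.0, 9.0, 9.0, 11.0])            # upper-level statistical weights
A   = np.array([1.6e7, 9.7e6, 8.6e6, 5.0e7])      # transition probabilities, s^-1
Ek  = np.array([3.33, 3.21, 4.55, 4.31])          # upper-level energies, eV
T_true = 9000.0                                    # K
I = g * A / lam * np.exp(-Ek / (K_B_EV * T_true))
print(round(boltzmann_plot_temperature(I, lam, g, A, Ek)))  # recovers ~9000 K
```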
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
ERIC Educational Resources Information Center
Whaton, George R.; And Others
As the first step in a program to develop quantitative techniques for prescribing the design and use of training systems, the present study attempted: to compile an initial set of quantitative indices, to determine whether these indices could be used to describe a sample of trainee tasks and differentiate among them, to develop a predictive…
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
7 CFR 3430.309 - Priority areas.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agriculture and Food Research Initiative § 3430.309 Priority areas. NIFA will award competitive grants in the...) Conventional breeding, including cultivar and breed development, selection theory, applied quantitative... development, selection theory, applied quantitative genetics, breeding for improved food quality, breeding for...
7 CFR 3430.309 - Priority areas.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Agriculture and Food Research Initiative § 3430.309 Priority areas. NIFA will award competitive grants in the...) Conventional breeding, including cultivar and breed development, selection theory, applied quantitative... development, selection theory, applied quantitative genetics, breeding for improved food quality, breeding for...
7 CFR 3430.309 - Priority areas.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Agriculture and Food Research Initiative § 3430.309 Priority areas. NIFA will award competitive grants in the...) Conventional breeding, including cultivar and breed development, selection theory, applied quantitative... development, selection theory, applied quantitative genetics, breeding for improved food quality, breeding for...
7 CFR 3430.309 - Priority areas.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Agriculture and Food Research Initiative § 3430.309 Priority areas. NIFA will award competitive grants in the...) Conventional breeding, including cultivar and breed development, selection theory, applied quantitative... development, selection theory, applied quantitative genetics, breeding for improved food quality, breeding for...
Quantitative estimation of film forming polymer-plasticizer interactions by the Lorentz-Lorenz Law.
Dredán, J; Zelkó, R; Dávid, A Z; Antal, I
2006-03-09
Molar refraction, like the refractive index, has many uses. Beyond confirming the identity and purity of a compound and aiding the determination of molecular structure and molecular weight, molar refraction is also used in other estimation schemes, for example for critical properties, surface tension, solubility parameter, molecular polarizability, and dipole moment. In the present study, molar refraction values of polymer dispersions were determined for the quantitative estimation of film-forming polymer-plasticizer interactions. Information on the extent of interaction between the polymer and the plasticizer can be obtained from the molar refraction values of film-forming polymer dispersions containing plasticizer.
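The Lorentz-Lorenz relation itself is compact; a minimal sketch follows, with water used only as a check value. For a dispersion one would compare the measured refraction with an additive ideal-mixing value, deviations suggesting polymer-plasticizer interaction (that comparison step is an assumption here, not the article's protocol).

```python
def molar_refraction(n, molar_mass_g_mol, density_g_cm3):
    """Lorentz-Lorenz molar refraction R_m = (n^2 - 1)/(n^2 + 2) * M / rho, in cm^3/mol."""
    return (n**2 - 1.0) / (n**2 + 2.0) * molar_mass_g_mol / density_g_cm3

# Check value: water at ~20 degrees C (n ~ 1.333, M = 18.015 g/mol, rho ~ 0.998 g/cm^3)
print(round(molar_refraction(1.333, 18.015, 0.998), 2))  # ~3.7 cm^3/mol
```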
Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris
2015-01-01
The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
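For readers unfamiliar with the two SDT parameters, here is a single-subject sketch computing sensitivity d' and criterion c from hypothetical response counts; the article itself estimates these within multilevel models, which this sketch does not reproduce.

```python
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and response criterion c.
    A log-linear correction (add 0.5 to each cell) avoids infinite z-scores at 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(fa)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(fa))
    return d_prime, criterion

# Hypothetical counts: "painful" responses to clearly painful vs. non-painful stimuli
print(sdt_parameters(hits=38, misses=2, false_alarms=10, correct_rejections=30))
```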
Propolis Modifies Collagen Types I and III Accumulation in the Matrix of Burnt Tissue.
Olczyk, Pawel; Wisowski, Grzegorz; Komosinska-Vassev, Katarzyna; Stojko, Jerzy; Klimek, Katarzyna; Olczyk, Monika; Kozma, Ewa M
2013-01-01
Wound healing represents an interactive process which requires highly organized activity of various cells synthesizing cytokines, growth factors, and collagen. Collagen types I and III, serving as structural and regulatory molecules, play pivotal roles during wound healing. The aim of this study was to compare the therapeutic efficacy of propolis and silver sulfadiazine through quantitative and qualitative assessment of collagen types I and III accumulation in the matrix of burnt tissues. Burn wounds were inflicted on pigs, chosen for the evaluation of wound repair because of the many similarities between pig and human skin. Isolated collagen types I and III were estimated by the surface plasmon resonance method with subsequent collagen quantification using electrophoretic and densitometric analyses. Propolis burn treatment led to enhanced expression of collagens and their components, especially during the initial stage of the study. Less pronounced changes were observed after silver sulfadiazine (AgSD) application. AgSD and, with a smaller intensity, propolis stimulated accumulation of collagen degradation products. The assessed therapeutic efficacy of propolis, based on quantitative and qualitative analyses of collagen types I and III expression and degradation in the wound matrix, may indicate that this apitherapeutic agent can generate a favorable biochemical environment supporting reepithelization.
Measurement and Reliability of Response Inhibition
Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.
2012-01-01
Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
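One common SSRT calculation, the integration method, can be sketched as follows; the RTs, stop-signal delays, and response indicators below are hypothetical, and the study compares several such calculation variants rather than prescribing this one.

```python
import numpy as np

def ssrt_integration(go_rts_ms, ssd_ms, responded_on_stop):
    """Stop-signal reaction time via the integration method: the nth quantile of the
    go-RT distribution (n = probability of responding on stop trials) minus the mean
    stop-signal delay (SSD)."""
    go_rts = np.sort(np.asarray(go_rts_ms, dtype=float))
    p_respond = np.mean(responded_on_stop)
    nth_rt = np.quantile(go_rts, p_respond)
    return nth_rt - np.mean(ssd_ms)

# Hypothetical session: 100 go RTs and 40 stop trials with staircased SSDs
rng = np.random.default_rng(0)
go = rng.normal(450, 60, 100)          # ms
ssd = rng.normal(200, 30, 40)          # ms
resp = rng.random(40) < 0.5            # ~50% response rate, as the staircase targets
print(round(ssrt_integration(go, ssd, resp), 1))
```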
NASA Technical Reports Server (NTRS)
Consolmagno, G. J.; Drake, M. J.
1977-01-01
Quantitative modeling of the evolution of rare earth element (REE) abundances in the eucrites, which are plagioclase-pigeonite basalt achondrites, indicates that the main group of eucrites (e.g., Juvinas) might have been produced by approximately 10% equilibrium partial melting of a single type of source region with initial REE abundances that were chondritic in both relative and absolute terms. Since the age of the eucrites is about equal to that of the solar system, extensive chemical differentiation of the eucrite parent body prior to the formation of the eucrites seems unlikely. If homogeneous accretion is assumed, the bulk composition of the eucrite parent body can be estimated; two estimates are provided, representing different hypotheses as to the ratio of metal to olivine in the parent body. Since a large number of differentiated olivine meteorites, which would represent material from the interior of the parent body, have not been detected, the eucrite parent body is thought to be intact. It is suggested that the asteroid 4 Vesta is the eucrite parent body.
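The roughly tenfold REE enrichment implied by about 10% melting follows from the standard equilibrium (batch) melting relation; a minimal sketch with an illustrative chondritic La abundance and an assumed bulk partition coefficient (both illustrative values, not taken from the paper):

```python
def batch_melting(c0, bulk_d, melt_fraction):
    """Equilibrium (batch) partial-melting model: liquid concentration
    C_L = C_0 / (D + F*(1 - D)), with bulk partition coefficient D and melt fraction F."""
    return c0 / (bulk_d + melt_fraction * (1.0 - bulk_d))

# Illustrative chondritic La (ppm) with a strongly incompatible bulk D at ~10% melting
print(round(batch_melting(c0=0.237, bulk_d=0.01, melt_fraction=0.10), 2))  # ~2.2 ppm, ~9x enriched
```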
Monitoring nekton as a bioindicator in shallow estuarine habitats
Raposa, K.B.; Roman, C.T.; Heltshe, J.F.
2003-01-01
Long-term monitoring of estuarine nekton has many practical and ecological benefits but efforts are hampered by a lack of standardized sampling procedures. This study provides a rationale for monitoring nekton in shallow (< 1 m), temperate, estuarine habitats and addresses some important issues that arise when developing monitoring protocols. Sampling in seagrass and salt marsh habitats is emphasized due to the susceptibility of each habitat to anthropogenic stress and to the abundant and rich nekton assemblages that each habitat supports. Extensive sampling with quantitative enclosure traps that estimate nekton density is suggested. These gears have a high capture efficiency in most habitats and are small enough (e.g., 1 m²) to permit sampling in specific microhabitats. Other aspects of nekton monitoring are discussed, including spatial and temporal sampling considerations, station selection, sample size estimation, and data collection and analysis. Developing and initiating long-term nekton monitoring programs will help evaluate natural and human-induced changes in estuarine nekton over time and advance our understanding of the interactions between nekton and the dynamic estuarine environment.
Gil de Prado, Elena; Rivas, Eva-María; de Silóniz, María-Isabel; Diezma, Belén; Barreiro, Pilar; Peinado, José M
2014-11-01
The colony shape of four yeast species growing on agar medium was measured for 116 days by image analysis. Initially, all the colonies are circular, with regular edges. The loss of circularity can be quantitatively estimated by the eccentricity index, Ei, calculated as the ratio between their orthogonal vertical and horizontal diameters. Ei can increase from 1 (complete circularity) to a maximum of 1.17-1.30, depending on the species. One colony inhibits its neighbour only when it has reached a threshold area. Then, Ei of the inhibited colony increases proportionally to the area of the inhibitory colony. The initial distance between colonies affects those threshold values but not the proportionality, Ei/area; this inhibition affects the shape but not the total surface of the colony. The appearance of irregularities in the edges is associated, in all the species, not with age but with nutrient exhaustion. The edge irregularity can be quantified by the Fourier index, Fi, calculated by the minimum number of Fourier coefficients that are needed to describe the colony contour with 99% fitness. An ad hoc function has been developed in Matlab v. 7.0 to automate the computation of the Fourier coefficients. In young colonies, Fi has a value between 2 (circumference) and 3 (ellipse). These values are maintained in mature colonies of Debaryomyces, but can reach values up to 14 in Saccharomyces. All the species studied showed the inhibition of growth in facing colony edges, but only three species showed edge irregularities associated with substrate exhaustion. Copyright © 2014 John Wiley & Sons, Ltd.
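A minimal sketch of the two shape indices defined above; the exact 99%-fitness criterion of the original Matlab function is not specified here, so the spectral-energy reading below (plus one term for the centroid) is an assumption, chosen so that a circle gives Fi = 2 and an eccentric ellipse Fi = 3, consistent with the values in the abstract.

```python
import numpy as np

def eccentricity_index(contour_xy):
    """Ei: ratio of the orthogonal vertical and horizontal extents of the colony
    outline, taken as max/min so a perfect circle gives exactly 1."""
    spans = contour_xy.max(axis=0) - contour_xy.min(axis=0)
    return spans.max() / spans.min()

def fourier_index(contour_xy, fitness=0.99):
    """Fi: one centroid term plus the smallest number of harmonics whose spectral
    energy (about the centroid) reaches the requested fitness fraction."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    power = np.abs(np.fft.fft(z - z.mean())) ** 2
    cum = np.cumsum(np.sort(power)[::-1]) / power.sum()
    return 1 + int(np.searchsorted(cum, fitness) + 1)

# Illustrative elliptical "colony" contour
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.column_stack([120 + 100 * np.cos(t), 90 + 80 * np.sin(t)])
print(round(eccentricity_index(ellipse), 2))   # -> 1.25
print(fourier_index(ellipse))                  # -> 3
```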
NASA Astrophysics Data System (ADS)
Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura
2015-04-01
Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets to feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface points matching. With the resulting mapping, a DVF defining on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The calculated DVF via Demons combined with the initial one serve as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed by nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of applicator on DIR, and accurately register HDR CT images as well as deform and add interfractional HDR doses.
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
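As a minimal illustration of the mass balance behind such models, here is the determined two-source, single-isotope case; the underdetermined many-source case described above is what motivates combining sources, and the δ13C values below are hypothetical.

```python
def two_source_mixing(delta_mix, delta_source1, delta_source2):
    """Two-source, single-isotope mass balance:
    f1*d1 + (1 - f1)*d2 = d_mix  =>  f1 = (d_mix - d2) / (d1 - d2)."""
    f1 = (delta_mix - delta_source2) / (delta_source1 - delta_source2)
    return f1, 1.0 - f1

# Hypothetical d13C values (per mil) for a mixture and two sources
print(two_source_mixing(-22.0, -27.0, -13.0))  # ~ (0.64, 0.36)
```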
Robertson, Angela M.; Lozada, Remedios; Pollini, Robin A.; Rangel, Gudelia; Ojeda, Victoria D.
2012-01-01
Preventing the onset of injection drug use is important in controlling the spread of HIV and other blood borne infections. Undocumented migrants in the United States face social, economic, and legal stressors that may contribute to substance abuse. Little is known about undocumented migrants’ drug abuse trajectories including injection initiation. To examine the correlates and contexts of U.S. injection initiation among undocumented migrants, we administered quantitative surveys (n=309) and qualitative interviews (n=23) on migration and drug abuse experiences to deported male injection drug users (IDUs) in Tijuana, Mexico. U.S. injection initiation was independently associated with ever using drugs in Mexico pre-migration, younger age at first U.S. migration, and U.S. incarceration. Participants’ qualitative interviews contextualized quantitative findings and demonstrated the significance of social contexts surrounding U.S. injection initiation experiences. HIV prevention programs may prevent/delay U.S. injection initiation by addressing socio-economic and migration-related stressors experienced by undocumented migrants. PMID:22246511
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Smolentseva, T. E.; Belokurov, S. V.; Lankin, O. V.
2018-03-01
In this work, the process by which trainee characteristics form and subsequently change is analyzed. The characteristics of trainees were obtained from testing on each section of material in the chosen discipline, and the test results were used as input to a dynamic system. An area of control actions consisting of elements of the dynamic system is formed. The limit of deterministic predictability of element trajectories in dynamical systems based on local or global attractors is identified. The dimension of the phase space of the dynamic system is determined, which allows the parameters of the initial system to be estimated. On the basis of time series of observations, it is possible to determine a predictability interval for all parameters, which makes it possible to describe the behavior of the system discretely in time. The measure of predictability is then the sum of the positive Lyapunov exponents, which provide a quantitative measure for all elements of the system. The components of an algorithm for determining the correlation dimension of the attractor from known initial experimental values of the variables are presented. The resulting algorithm makes it possible to study experimentally the dynamics of changes in the trainee's parameters under initial uncertainty.
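A minimal sketch of the correlation-dimension estimate mentioned above, in the Grassberger-Procaccia style with a delay embedding of a hypothetical scalar series; the article's specific algorithm and data are not reproduced.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Grassberger-Procaccia estimate: slope of log C(r) versus log r, where C(r)
    is the fraction of point pairs closer than r in the reconstructed phase space."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pair_d = d[np.triu_indices(n, k=1)]
    c = np.array([np.mean(pair_d < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

def embed(series, dim=3, lag=1):
    """Time-delay embedding of a scalar observation series."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag : i * lag + n] for i in range(dim)])

# Hypothetical scalar observation series (a noisy oscillation, purely illustrative)
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.standard_normal(600)
pts = embed(x, dim=3, lag=5)
print(round(correlation_dimension(pts, np.logspace(-1, 0, 8)), 2))
```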
Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian
2016-02-01
Single quantitative platforms such as label-based or label-free quantitation (LFQ) present compromises in accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins or the quantifiable coverage of tissue proteomes, we have developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft with a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed model statistical analysis was used to determine global differential protein expression by combining complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With the minimum number of replicates required for obtaining statistically significant ratios, QuantFusion uses these distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combined quantifiable peptide data from both quantitative schemes increased the overall number of peptide level measurements and protein level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65%, representing differentially expressed proteins between the BC subtypes. This improvement in quantifiable coverage, in turn, not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of the quantitative estimates by 181%, so that some proteins expressed in a subtype-specific manner were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels makes QuantFusion unique in obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Quantitative Microbial Risk Assessment for Escherichia coli O157:H7 in Fresh-Cut Lettuce.
Pang, Hao; Lambertini, Elisabetta; Buchanan, Robert L; Schaffner, Donald W; Pradhan, Abani K
2017-02-01
Leafy green vegetables, including lettuce, are recognized as potential vehicles for foodborne pathogens such as Escherichia coli O157:H7. Fresh-cut lettuce is potentially at high risk of causing foodborne illnesses, as it is generally consumed without cooking. Quantitative microbial risk assessments (QMRAs) are gaining more attention as an effective tool to assess and control potential risks associated with foodborne pathogens. This study developed a QMRA model for E. coli O157:H7 in fresh-cut lettuce and evaluated the effects of different potential intervention strategies on the reduction of public health risks. The fresh-cut lettuce production and supply chain was modeled from field production, with both irrigation water and soil as initial contamination sources, to consumption at home. The baseline model (with no interventions) predicted a mean probability of 1 illness per 10 million servings and a mean of 2,160 illness cases per year in the United States. All intervention strategies evaluated (chlorine, ultrasound and organic acid, irradiation, bacteriophage, and consumer washing) significantly reduced the estimated mean number of illness cases when compared with the baseline model prediction (from 11.4- to 17.9-fold reduction). Sensitivity analyses indicated that retail and home storage temperature were the most important factors affecting the predicted number of illness cases. The developed QMRA model provided a framework for estimating risk associated with consumption of E. coli O157:H7-contaminated fresh-cut lettuce and can guide the evaluation and development of intervention strategies aimed at reducing such risk.
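A minimal Monte Carlo sketch of the kind of risk calculation a QMRA chains together, with exposure distributions feeding a Beta-Poisson dose-response; all distributions and parameter values below are illustrative assumptions, not those of the article's model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                   # Monte Carlo iterations

# Illustrative (not the article's) input distributions
log_conc = rng.normal(-3.0, 1.0, N)           # log10 CFU/g on lettuce at consumption
serving_g = rng.triangular(50, 85, 120, N)    # serving size, g
log_reduction = rng.uniform(1.0, 2.5, N)      # e.g. a wash intervention, log10 units

dose = 10 ** (log_conc - log_reduction) * serving_g   # ingested CFU per serving

# Beta-Poisson dose-response with illustrative parameters (alpha, beta)
alpha, beta = 0.16, 1.5e3
p_ill = 1.0 - (1.0 + dose / beta) ** (-alpha)

print(f"mean risk per serving: {p_ill.mean():.2e}")
print(f"95th percentile:       {np.quantile(p_ill, 0.95):.2e}")
```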
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-30
... study examines quantitative and qualitative information obtained from community-based initiatives... future research initiatives targeting childhood obesity. Frequency of Response: One time. Affected Public...
Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina
2016-09-01
Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. (©) RSNA, 2016 Online supplemental material is available for this article.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
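A minimal sketch of how elicited per-step failure rates can be rolled up, assuming independent steps combined OR-wise; the steps, probabilities, and workaround times are hypothetical and only illustrate the general idea, not the article's fault-tree structure.

```python
from math import prod

# Hypothetical per-step failure probabilities elicited from experienced users,
# and the average workaround time (minutes) when that step fails.
steps = {
    "select patient":          (0.02, 0.5),
    "scan medication":         (0.10, 1.0),
    "verify order match":      (0.05, 2.0),
    "document administration": (0.08, 1.5),
}

p_any_step_fails = 1 - prod(1 - p for p, _ in steps.values())
expected_delay = sum(p * t for p, t in steps.values())

print(f"P(at least one step needs a workaround) = {p_any_step_fails:.2f}")
print(f"Expected extra handling time per pass   = {expected_delay:.2f} min")
```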
Yap, John Stephen; Fan, Jianqing; Wu, Rongling
2009-12-01
Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L(2) penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
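A simplified sketch of the modified-Cholesky idea referenced above: regress each time point on its predecessors with an L2 (ridge) penalty, then rebuild the covariance from the resulting coefficients and residual variances. The mixture-likelihood framework and penalty selection of the article are not reproduced, and the AR(1)-like data are illustrative.

```python
import numpy as np

def regularized_covariance(Y, lam=1.0):
    """Covariance estimate via the modified Cholesky decomposition with an L2
    penalty on the autoregressive coefficients (simplified sketch).

    Y : (n_subjects, n_times) matrix of centered longitudinal measurements.
    """
    n, T = Y.shape
    Tmat = np.eye(T)                  # unit lower-triangular matrix holding -phi's
    d = np.empty(T)
    d[0] = Y[:, 0].var()
    for t in range(1, T):
        X, y = Y[:, :t], Y[:, t]
        phi = np.linalg.solve(X.T @ X + lam * np.eye(t), X.T @ y)   # ridge fit
        Tmat[t, :t] = -phi
        d[t] = (y - X @ phi).var()
    Tinv = np.linalg.inv(Tmat)
    return Tinv @ np.diag(d) @ Tinv.T      # Sigma = T^{-1} D T^{-T}

# Quick check on simulated AR(1)-like trait trajectories
rng = np.random.default_rng(3)
n, T, rho = 200, 8, 0.7
noise = rng.standard_normal((n, T))
Y = np.zeros((n, T))
Y[:, 0] = noise[:, 0]
for t in range(1, T):
    Y[:, t] = rho * Y[:, t - 1] + noise[:, t]
Y -= Y.mean(axis=0)
print(np.round(regularized_covariance(Y, lam=0.5)[:3, :3], 2))
```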
Elschot, Mattijs; Vermolen, Bart J.; Lam, Marnix G. E. H.; de Keizer, Bart; van den Bosch, Maurice A. A. J.; de Jong, Hugo W. A. M.
2013-01-01
Background After yttrium-90 (90Y) microsphere radioembolization (RE), evaluation of extrahepatic activity and liver dosimetry is typically performed on 90Y Bremsstrahlung SPECT images. Since these images demonstrate a low quantitative accuracy, 90Y PET has been suggested as an alternative. The aim of this study is to quantitatively compare SPECT and state-of-the-art PET on the ability to detect small accumulations of 90Y and on the accuracy of liver dosimetry. Methodology/Principal Findings SPECT/CT and PET/CT phantom data were acquired using several acquisition and reconstruction protocols, including resolution recovery and Time-Of-Flight (TOF) PET. Image contrast and noise were compared using a torso-shaped phantom containing six hot spheres of various sizes. The ability to detect extra- and intrahepatic accumulations of activity was tested by quantitative evaluation of the visibility and unique detectability of the phantom hot spheres. Image-based dose estimates of the phantom were compared to the true dose. For clinical illustration, the SPECT and PET-based estimated liver dose distributions of five RE patients were compared. At equal noise level, PET showed higher contrast recovery coefficients than SPECT. The highest contrast recovery coefficients were obtained with TOF PET reconstruction including resolution recovery. All six spheres were consistently visible on SPECT and PET images, but PET was able to uniquely detect smaller spheres than SPECT. TOF PET-based estimates of the dose in the phantom spheres were more accurate than SPECT-based dose estimates, with underestimations ranging from 45% (10-mm sphere) to 11% (37-mm sphere) for PET, and 75% to 58% for SPECT, respectively. The differences between TOF PET and SPECT dose-estimates were supported by the patient data. Conclusions/Significance In this study we quantitatively demonstrated that the image quality of state-of-the-art PET is superior over Bremsstrahlung SPECT for the assessment of the 90Y microsphere distribution after radioembolization. PMID:23405207
Estimating soil erosion in Natura 2000 areas located on three semi-arid Mediterranean Islands.
Zaimes, George N; Emmanouloudis, Dimitris; Iakovoglou, Valasia
2012-03-01
A major initiative in Europe is the protection of its biodiversity. To accomplish this, specific areas from all countries of the European Union are protected by the establishment of the "Natura 2000" network. One of the major threats to these areas, and to ecosystems in general, is soil erosion. The objective of this study was to quantitatively estimate surface soil losses for three of these protected areas located on semi-arid islands of the Mediterranean. One Natura 2000 area was selected from each of the following islands: Sicily in Italy, Cyprus and Rhodes in Greece. To estimate soil losses, Gerlach troughs were used. These troughs were established on slopes that ranged from 35% to 40% in four different vegetation types: i) Quercus ilex and Quercus rotundifolia forests, ii) Pinus brutia forests, iii) "Phrygana" shrublands and iv) vineyards. The shrublands had the highest soil losses (270 kg ha−1 yr−1), 5-13 times more than the other three vegetation types. Soil losses in these shrublands should be considered a major concern. However, the other vegetation types also had high soil losses (21-50 kg ha−1 yr−1). In conclusion, to enhance and conserve the biodiversity of these Natura 2000 areas, protective management measures should be considered to decrease soil losses.
Mass properties survey of solar array technologies
NASA Technical Reports Server (NTRS)
Kraus, Robert
1991-01-01
An overview of the technologies, electrical performance, and mass characteristics of many of the presently available and the more advanced developmental space solar array technologies is presented. Qualitative trends and quantitative mass estimates as total array output power is increased from 1 kW to 5 kW at End of Life (EOL) from a single wing are shown. The array technologies are part of a database supporting an ongoing solar power subsystem model development for top level subsystem and technology analyses. The model is used to estimate the overall electrical and thermal performance of the complete subsystem, and then calculate the mass and volume of the array, batteries, power management, and thermal control elements as an initial sizing. The array types considered here include planar rigid panel designs, flexible and rigid fold-out planar arrays, and two concentrator designs, one with one critical axis and the other with two critical axes. Solar cell technologies of Si, GaAs, and InP were included in the analyses. Comparisons were made at the array level; hinges, booms, harnesses, support structures, power transfer, and launch retention mountings were included. It is important to note that the results presented are approximations, and in some cases revised or modified performance and mass estimates of specific designs.
The NIST Quantitative Infrared Database
Chu, P. M.; Guenther, F. R.; Rhoderick, G. C.; Lafferty, W. J.
1999-01-01
With the recent developments in Fourier transform infrared (FTIR) spectrometers it is becoming more feasible to place these instruments in field environments. As a result, there has been an enormous increase in the use of FTIR techniques for a variety of qualitative and quantitative chemical measurements. These methods offer the possibility of fully automated real-time quantitation of many analytes; therefore FTIR has great potential as an analytical tool. Recently, the U.S. Environmental Protection Agency (U.S. EPA) has developed protocol methods for emissions monitoring using both extractive and open-path FTIR measurements. Depending upon the analyte, the experimental conditions and the analyte matrix, approximately 100 of the hazardous air pollutants (HAPs) listed in the 1990 U.S. EPA Clean Air Act amendment (CAAA) can be measured. The National Institute of Standards and Technology (NIST) has initiated a program to provide quality-assured infrared absorption coefficient data based on NIST-prepared primary gas standards. Currently, absorption coefficient data have been acquired for approximately 20 of the HAPs. For each compound, the absorption coefficient spectrum was calculated using nine transmittance spectra at 0.12 cm−1 resolution and the Beer’s law relationship. The uncertainties in the absorption coefficient data were estimated from the linear regressions of the transmittance data and considerations of other error sources such as the nonlinear detector response. For absorption coefficient values greater than 1 × 10−4 (μmol/mol)−1 m−1 the average relative expanded uncertainty is 2.2%. This quantitative infrared database is currently an ongoing project at NIST. Additional spectra will be added to the database as they are acquired. Our current plans include continued data acquisition of the compounds listed in the CAAA, as well as the compounds that contribute to global warming and ozone depletion.
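A minimal sketch of the Beer's-law regression that yields a per-wavenumber absorption coefficient and its uncertainty from a set of transmittance spectra; the base-10 absorbance convention, the through-origin fit, and the synthetic standards below are assumptions for illustration, not NIST's actual processing.

```python
import numpy as np

def absorption_coefficient(conc_pathlength, transmittance):
    """Per-wavenumber absorption coefficient and its standard uncertainty from a
    Beer's-law fit of absorbance against concentration x pathlength.

    conc_pathlength : (m,) products C*L for the m standards
    transmittance   : (m, k) transmittance spectra, one row per standard
    """
    A = -np.log10(transmittance)               # absorbance (base-10 convention assumed)
    x = conc_pathlength[:, None]
    alpha = (x * A).sum(axis=0) / (x**2).sum(axis=0)   # slope through the origin
    resid = A - x * alpha
    dof = len(conc_pathlength) - 1
    u_alpha = np.sqrt((resid**2).sum(axis=0) / dof / (x**2).sum())
    return alpha, u_alpha

# Hypothetical example: 9 standards, 3 wavenumber channels
cl = np.linspace(10, 90, 9)                    # (micromol/mol) * m
true_alpha = np.array([2e-4, 8e-4, 5e-5])      # (micromol/mol)^-1 m^-1
noise = 1 + 0.01 * np.random.default_rng(0).standard_normal((9, 3))
T = 10 ** (-np.outer(cl, true_alpha)) * noise
print(absorption_coefficient(cl, T)[0])        # recovers ~true_alpha
```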
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
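A minimal sketch of how a MUNE is formed from the quantities DQEMG provides, dividing the maximal M wave by the mean surface-detected MUP size; the particular size measure and the numbers below are assumptions for illustration only.

```python
import numpy as np

def mune(max_m_wave_amplitude_mv, surface_mup_amplitudes_mv):
    """Motor unit number estimate: maximal M-wave size divided by the mean size of
    the surface-detected motor unit potentials (S-MUPs) in the sample."""
    return max_m_wave_amplitude_mv / np.mean(surface_mup_amplitudes_mv)

# Hypothetical tibialis anterior data: 6.2 mV M wave and 20 sampled S-MUPs (mV)
smups = np.array([0.031, 0.044, 0.025, 0.060, 0.038, 0.029, 0.051, 0.035,
                  0.042, 0.027, 0.033, 0.048, 0.039, 0.030, 0.055, 0.026,
                  0.041, 0.036, 0.045, 0.032])
print(round(mune(6.2, smups)))   # on the order of 160 motor units
```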
Quantitative acoustic emission monitoring of fatigue cracks in fracture critical steel bridges.
DOT National Transportation Integrated Search
2014-01-01
The objective of this research is to evaluate the feasibility to employ quantitative acoustic : emission (AE) techniques for monitoring of fatigue crack initiation and propagation in steel : bridge members. Three A36 compact tension steel specimens w...
Forest restoration: a global dataset for biodiversity and vegetation structure.
Crouzeilles, Renato; Ferreira, Mariana S; Curran, Michael
2016-08-01
Restoration initiatives are becoming increasingly applied around the world. Billions of dollars have been spent on ecological restoration research and initiatives, but restoration outcomes differ widely among these initiatives in part due to variable socioeconomic and ecological contexts. Here, we present the most comprehensive dataset gathered to date on forest restoration. It encompasses 269 primary studies across 221 study landscapes in 53 countries and contains 4,645 quantitative comparisons between reference ecosystems (e.g., old-growth forest) and degraded or restored ecosystems for five taxonomic groups (mammals, birds, invertebrates, herpetofauna, and plants) and five measures of vegetation structure reflecting different ecological processes (cover, density, height, biomass, and litter). We selected studies that (1) were conducted in forest ecosystems; (2) had multiple replicate sampling sites to measure indicators of biodiversity and/or vegetation structure in reference and restored and/or degraded ecosystems; and (3) used less-disturbed forests as a reference to the ecosystem under study. We recorded (1) latitude and longitude; (2) study year; (3) country; (4) biogeographic realm; (5) past disturbance type; (6) current disturbance type; (7) forest conversion class; (8) restoration activity; (9) time that a system has been disturbed; (10) time elapsed since restoration started; (11) ecological metric used to assess biodiversity; and (12) quantitative value of the ecological metric of biodiversity and/or vegetation structure for reference and restored and/or degraded ecosystems. These were the most common data available in the selected studies. We also estimated forest cover and configuration in each study landscape using a recently developed 1 km consensus land cover dataset. We measured forest configuration as the (1) mean size of all forest patches; (2) size of the largest forest patch; and (3) edge:area ratio of forest patches. Global analyses of the factors influencing ecological restoration success at both the local and landscape scale are urgently needed to guide restoration initiatives and to further develop restoration knowledge in a topic area of much contemporary interest. © 2016 by the Ecological Society of America.
[Quantitative determination of 7-phenoxyacetamidodesacetoxycephalosporanic acid].
Dzegilenko, N B; Riabova, N M; Zinchenko, E Ia; Korchagin, V B
1976-11-01
7-Phenoxyacetamidodesacetoxycephalosporanic acid, an intermediate product in the synthesis of cephalexin, was prepared by oxidation of phenoxymethylpenicillin into the respective sulphoxide and transformation of the latter. The UV spectra of the reaction products were studied. A quantitative method is proposed for determination of 7-phenoxyacetamidodesacetoxycephalosporanic acid in the finished products, based on estimation of the coefficient of specific extinction of the ethanol solutions at a wavelength of 268 nm in the UV region, in combination with semiquantitative estimation of the admixtures by thin-layer chromatography.
NASA Astrophysics Data System (ADS)
Creagh, Dudley; Cameron, Alyce
2017-08-01
When skeletonized remains are found, it becomes a police task to identify the body and establish the cause of death. It assists investigators if the Post-Mortem Interval (PMI) can be established. Hitherto no reliable qualitative method of estimating the PMI has been found, and a quantitative method has yet to be developed. This paper shows that IR spectroscopy and Raman microscopy have the potential to form the basis of a quantitative method.
Transcript copy number estimation using a mouse whole-genome oligonucleotide microarray
Carter, Mark G; Sharov, Alexei A; VanBuren, Vincent; Dudekula, Dawood B; Carmack, Condie E; Nelson, Charlie; Ko, Minoru SH
2005-01-01
The ability to quantitatively measure the expression of all genes in a given tissue or cell with a single assay is an exciting promise of gene-expression profiling technology. An in situ-synthesized 60-mer oligonucleotide microarray designed to detect transcripts from all mouse genes was validated, as well as a set of exogenous RNA controls derived from the yeast genome (made freely available without restriction), which allow quantitative estimation of absolute endogenous transcript abundance. PMID:15998450
Ley, P
1985-04-01
Patients frequently fail to understand what they are told. Further, they frequently forget the information given to them. These factors have effects on patients' satisfaction with the consultation. All three of these factors--understanding, memory and satisfaction--have effects on the probability that a patient will comply with advice. The levels of failure to understand and remember and levels of dissatisfaction are described. Quantitative estimates of the effects of these factors on non-compliance are presented.
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
Vascular responses to radiotherapy and androgen-deprivation therapy in experimental prostate cancer
2012-01-01
Background Radiotherapy (RT) and androgen-deprivation therapy (ADT) are standard treatments for advanced prostate cancer (PC). Tumor vascularization is recognized as an important physiological feature likely to impact on both RT and ADT response, and this study therefore aimed to characterize the vascular responses to RT and ADT in experimental PC. Methods Using mice implanted with CWR22 PC xenografts, vascular responses to RT and ADT by castration were visualized in vivo by DCE MRI, before contrast-enhancement curves were analyzed both semi-quantitatively and by pharmacokinetic modeling. Extracted image parameters were correlated to the results from ex vivo quantitative fluorescent immunohistochemical analysis (qIHC) of tumor vascularization (9F1), perfusion (Hoechst 33342), and hypoxia (pimonidazole), performed on tissue sections made from tumors excised directly after DCE MRI. Results Compared to untreated (Ctrl) tumors, an improved and highly functional vascularization was detected in androgen-deprived (AD) tumors, reflected by increases in DCE MRI parameters and by increased number of vessels (VN), vessel density (VD), and vessel area fraction (VF) from qIHC. Although total hypoxic fractions (HF) did not change, estimated acute hypoxia scores (AHS) – the proportion of hypoxia staining within 50 μm from perfusion staining – were increased in AD tumors compared to Ctrl tumors. Five to six months after ADT, renewed castration-resistant (CR) tumor growth appeared with an even further enhanced tumor vascularization. Compared to the large vascular changes induced by ADT, RT induced minor vascular changes. Correlating DCE MRI and qIHC parameters revealed that the semi-quantitative parameter area under the curve (AUC) from initial time-points strongly correlated with VD and VF, whereas estimation of vessel size (VS) by DCE MRI required pharmacokinetic modeling. HF was not correlated to any DCE MRI parameter; however, AHS may be estimated after pharmacokinetic modeling. Interestingly, such modeling also detected tumor necrosis very strongly. Conclusions DCE MRI reliably allows non-invasive assessment of tumors’ vascular function. The findings of increased tumor vascularization after ADT encourage further studies into whether these changes are beneficial for combined RT, or if treatment with anti-angiogenic therapy may be a strategy to improve the therapeutic efficacy of ADT in advanced PC. PMID:22621752
A subagging regression method for estimating the qualitative and quantitative state of groundwater
NASA Astrophysics Data System (ADS)
Jeong, J.; Park, E.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
A subagging regression (SBR) method for the analysis of groundwater data, pertaining to the estimation of trends and the associated uncertainty, is proposed. The SBR method is validated against synthetic data in comparison with other conventional robust and non-robust methods. The results verify that the estimation accuracy of the SBR method is consistent and superior to that of the other methods, and that the uncertainties are reasonably estimated, whereas the other methods offer no uncertainty analysis option. For further validation, real quantitative and qualitative data are analyzed comparatively with Gaussian process regression (GPR). In all cases, the trend and the associated uncertainties are reasonably estimated by SBR, whereas GPR has limitations in representing the variability of non-Gaussian, skewed data. From these implementations, it is concluded that the SBR method has potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data.
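To make the subagging idea above concrete, the following minimal Python sketch fits a simple linear trend to many random subsamples of a synthetic groundwater time series and aggregates the fits, using the spread across subsample fits as an uncertainty band. The data, subsample fraction, and linear model are illustrative assumptions, not the SBR implementation used in the study.

```python
# Minimal subagging (subsample aggregating) regression sketch.
import numpy as np

rng = np.random.default_rng(0)

def subagging_trend(t, y, n_estimators=200, subsample_frac=0.5):
    """Fit y ~ a + b*t on random subsamples; aggregate the fitted lines."""
    n = len(t)
    m = max(2, int(subsample_frac * n))
    coefs = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=m, replace=False)   # subsample without replacement
        b, a = np.polyfit(t[idx], y[idx], deg=1)     # slope, intercept
        coefs.append((a, b))
    coefs = np.array(coefs)
    trend = coefs[:, 0].mean() + coefs[:, 1].mean() * t        # aggregated trend line
    band = np.percentile(coefs[:, 0][:, None] + coefs[:, 1][:, None] * t,
                         [2.5, 97.5], axis=0)                  # spread across subsample fits
    return trend, band

# Synthetic monthly water-level data with a slow decline and skewed noise
t = np.arange(120.0)
y = 10.0 - 0.01 * t + rng.gamma(2.0, 0.2, size=t.size)
trend, (lo, hi) = subagging_trend(t, y)
print(trend[:3], lo[:3], hi[:3])
```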
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
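For readers unfamiliar with the pharmacokinetic model mentioned above, the standard Tofts model expresses the tissue contrast-agent concentration as a convolution of the plasma input with an exponential kernel governed by Ktrans and ve. The sketch below is a generic numerical illustration with an invented arterial input function and parameter values; it is not the analysis pipeline used in this study.

```python
# Standard Tofts model sketch:
# Ct(t) = Ktrans * integral_0^t Cp(u) * exp(-(Ktrans/ve) * (t - u)) du
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Tissue contrast-agent concentration for a given plasma input cp(t)."""
    dt = t[1] - t[0]
    kep = ktrans / ve                          # efflux rate constant (1/min)
    kernel = np.exp(-kep * t)
    # discrete convolution approximating the integral
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.linspace(0.0, 5.0, 300)                 # minutes
cp = 5.0 * t * np.exp(-2.0 * t)                # toy arterial input function (mM)
ct = tofts_concentration(t, cp, ktrans=0.25, ve=0.35)
print(ct.max())
```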
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2009-01-01
A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, and generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process, beginning with iteratively performing a phase retrieval on the intensity data, using the updated estimate of the phase of the optical system in place of the initial estimate and the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
The physics of functional magnetic resonance imaging (fMRI)
NASA Astrophysics Data System (ADS)
Buxton, Richard B.
2013-09-01
Functional magnetic resonance imaging (fMRI) is a methodology for detecting dynamic patterns of activity in the working human brain. Although the initial discoveries that led to fMRI are only about 20 years old, this new field has revolutionized the study of brain function. The ability to detect changes in brain activity has a biophysical basis in the magnetic properties of deoxyhemoglobin, and a physiological basis in the way blood flow increases more than oxygen metabolism when local neural activity increases. These effects translate to a subtle increase in the local magnetic resonance signal, the blood oxygenation level dependent (BOLD) effect, when neural activity increases. With current techniques, this pattern of activation can be measured with resolution approaching 1 mm3 spatially and 1 s temporally. This review focuses on the physical basis of the BOLD effect, the imaging methods used to measure it, the possible origins of the physiological effects that produce a mismatch of blood flow and oxygen metabolism during neural activation, and the mathematical models that have been developed to understand the measured signals. An overarching theme is the growing field of quantitative fMRI, in which other MRI methods are combined with BOLD methods and analyzed within a theoretical modeling framework to derive quantitative estimates of oxygen metabolism and other physiological variables. That goal is the current challenge for fMRI: to move fMRI from a mapping tool to a quantitative probe of brain physiology.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... quantitative and qualitative information obtained from community-based initiatives; community characteristics... published in scientific journals and will be used for the development of future research initiatives...
Rong, Xing; Du, Yong; Frey, Eric C
2012-06-21
Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging of conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, in situations such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this is especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance alone is therefore not a complete measure of the reliability of the estimates, and hence not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the energy absorbed from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image-degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of the FOM taking into account only the variance of the activity estimates, demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in limiting the reliability of activity estimates.
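The figure of merit described above can be illustrated with a small sketch: a mass-weighted root mean squared error that combines squared bias (from model-mismatch) with variance over the VOI activity estimates, i.e. MSE = bias^2 + variance per VOI. The VOI masses, biases, and variances below are placeholder numbers, not values from the study.

```python
# Sketch of a mass-weighted RMSE figure of merit over VOI activity estimates.
import numpy as np

def mass_weighted_rmse(mass, bias, variance):
    """FOM = sqrt( sum_i m_i * (bias_i^2 + var_i) / sum_i m_i )."""
    mass = np.asarray(mass, dtype=float)
    mse = np.asarray(bias, dtype=float) ** 2 + np.asarray(variance, dtype=float)
    return np.sqrt(np.sum(mass * mse) / np.sum(mass))

# Compare two candidate acquisition energy windows (illustrative numbers only)
voi_mass = [900.0, 40.0, 15.0]   # grams: normal liver, tumor 1, tumor 2
fom_window_a = mass_weighted_rmse(voi_mass, bias=[0.05, 0.20, 0.30], variance=[0.01, 0.04, 0.06])
fom_window_b = mass_weighted_rmse(voi_mass, bias=[0.08, 0.10, 0.15], variance=[0.02, 0.05, 0.07])
print(fom_window_a, fom_window_b)   # the smaller FOM indicates the more reliable window
```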
Qualitative and quantitative approaches in the dose-response assessment of genotoxic carcinogens.
Fukushima, Shoji; Gi, Min; Kakehashi, Anna; Wanibuchi, Hideki; Matsumoto, Michiharu
2016-05-01
Qualitative and quantitative approaches are important issues in the field of carcinogenic risk assessment of genotoxic carcinogens. Herein, we provide quantitative data on low-dose hepatocarcinogenicity studies for three genotoxic hepatocarcinogens: 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx), 2-amino-3-methylimidazo[4,5-f]quinoline (IQ) and N-nitrosodiethylamine (DEN). Hepatocarcinogenicity was examined by quantitative analysis of glutathione S-transferase placental form (GST-P) positive foci, which are the preneoplastic lesions in rat hepatocarcinogenesis and the endpoint carcinogenic marker in the rat liver medium-term carcinogenicity bioassay. We also examined DNA damage and gene mutations that occurred during the initiation stage of carcinogenesis. For the establishment of points of departure (PoD) from which the cancer-related risk can be estimated, we analyzed the above events by quantitative no-observed-effect level and benchmark dose approaches. MeIQx at low doses induced formation of DNA-MeIQx adducts; somewhat higher doses caused elevation of 8-hydroxy-2'-deoxyguanosine levels; at still higher doses gene mutations occurred; and the highest dose induced formation of GST-P positive foci. These data indicate that early genotoxic events in the pathway to carcinogenesis showed the expected trend of lower PoDs for earlier events in the carcinogenic process. Similarly, only the highest dose of IQ caused an increase in the number of GST-P positive foci in the liver, while IQ-DNA adduct formation was observed at low doses. Moreover, treatment with DEN at low doses had no effect on development of GST-P positive foci in the liver. These data on PoDs for the markers contribute to understanding whether genotoxic carcinogens have a threshold for their carcinogenicity. The most appropriate approach to use in low dose-response assessment must be determined on the basis of scientific judgment. © The Author 2015. Published by Oxford University Press on behalf of the UK Environmental Mutagen Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Izenberg, N.R.; Arvidson, R. E.; Brackett, R.A.; Saatchi, S.S.; Osburn, G.R.; Dohrenwend, J.
1996-01-01
The Missouri River floods of 1993 caused significant and widespread damage to the floodplains between Kansas City and St. Louis. Immediately downstream of levee breaks, flood waters scoured the bottoms. As the floodwaters continued, they spread laterally and deposited massive amounts of sand as crevasse splays on top of agricultural fields. We explore the use of radar interferometry and backscatter data for quantitative estimation of scour and deposition for Jameson Island/Arrow Rock Bottoms and Lisbon Bottoms, two bottoms that were heavily damaged during the floods and subsequently abandoned. Shuttle imaging radar C (SIR-C) L band (24 cm) HH (horizontally transmitted and horizontally received) radar backscatter data acquired in October 1994 were used together with a distorted Born approximation canopy scattering model to determine that the abundance of natural leafy forbs controlled the magnitude of backscatter for former agricultural fields. Forb areal density was found to be inversely correlated with thickness of sand deposited during the floods, presumably because thick sands prevented roots from reaching nutrient rich, moist bottoms soils. Using the inverse relationship, a lower bound for the mass of sand added was found to be 6.3 million metric tons over the 17 km2 study area. Digital elevation data from topographic synthetic aperture radar (TOPSAR) C band (5.6 cm) interferometric observations acquired in August 1994 were compared to a series of elevation profiles collected on the ground. Vertical errors in TOPSAR were estimated to range from 1 to 2 m, providing enough accuracy to generate an estimate of total mass (4.7 million metric tons) removed during erosion of levees and scour of the bottoms terrains. Net accretion of material to the study areas is consistent with the geologic record of major floods where sediment-laden floodwaters crested over natural levees, initially scoured into the bottoms, and then deposited sands as crevasse splays as the flows spread out and slowed by frictional dissipation. The addition of artificial levees to the Missouri River system has undoubtedly enhanced flood damage, although quantitative estimation of the degree of enhancement will require additional work. Copyright 1996 by the American Geophysical Union.
Jin, Yan; Huang, Jing-feng; Peng, Dai-liang
2009-01-01
Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC), based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value in order to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicting benefits of the economic, social, and ecological sectors. PMID:19353749
Connecting the Dots: Linking Environmental Justice Indicators to Daily Dose Model Estimates
Many different quantitative techniques have been developed to either assess Environmental Justice (EJ) issues or estimate exposure and dose for risk assessment. However, very few approaches have been applied to link EJ factors to exposure dose estimate and identify potential impa...
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrast. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
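As a point of reference for what an initial pose solution involves, the sketch below runs a standard Perspective-n-Point solver on a handful of assumed 2D-3D feature correspondences using OpenCV. The model points, detected features, and camera intrinsics are invented for illustration; the algorithms compared in the paper must additionally establish such correspondences without fiducial markers.

```python
# Minimal PnP-based initial pose sketch (illustrative values only).
import numpy as np
import cv2

# 3D model points of the target object (metres, body frame) - assumed
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [1.0, 1.0, 0.2],
                          [0.5, 0.5, 0.4]], dtype=np.float64)

# Corresponding detected image features (pixels) - assumed
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [322.0, 150.0],
                         [430.0, 145.0],
                         [372.0, 190.0]], dtype=np.float64)

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                      # assume an undistorted camera

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
rotation, _ = cv2.Rodrigues(rvec)              # coarse attitude as a rotation matrix
print(ok, tvec.ravel())                        # coarse relative position
```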
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.
Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500 m3 of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field from the measured ultrasound data via acoustic inverse problem approaches. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data acquired at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described as a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power law factor and exponent), and the Gruneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with this approach.
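The optical parameter models named above (concentration-weighted chromophore absorption and power-law scattering) can be written down compactly. The sketch below uses invented spectra and coefficients purely to show the functional forms; these are not tissue values or the study's forward model.

```python
# Sketch of the spectral optical-parameter models used in SQPAT-style problems.
import numpy as np

wavelengths = np.array([700.0, 750.0, 800.0, 850.0])          # nm

# Assumed molar absorption spectra for two chromophores (illustrative numbers)
epsilon = np.array([[0.10, 0.12, 0.16, 0.20],   # chromophore 1
                    [0.30, 0.22, 0.18, 0.16]])  # chromophore 2

def absorption(concentrations, epsilon):
    """mu_a(lambda) = sum_k c_k * eps_k(lambda)."""
    return np.asarray(concentrations) @ epsilon

def reduced_scattering(wavelengths, a, b, ref=500.0):
    """Power-law scattering model: mu_s'(lambda) = a * (lambda/ref)**(-b)."""
    return a * (wavelengths / ref) ** (-b)

mu_a = absorption([0.8, 0.4], epsilon)
mu_s = reduced_scattering(wavelengths, a=1.2, b=1.5)
print(mu_a, mu_s)
```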
EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2013 CFR
2013-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2014 CFR
2014-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2012 CFR
2012-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
Measuring Aircraft Capability for Military and Political Analysis
1976-03-01
challenged in 1932 when a panel of distinguished British scientists discussed the feasibility of quantitatively estimating sensory events... Quantitative Analysis of Social Problems, E.R. Tufte (ed.), p. 407, Addison-Wesley, 1970. 17 "artificial" boundaries are imposed on the data. Less...of arms transfers in various parts of the world as well. Quantitative research (and hence measurement) contributes to theoretical development by
Couch, James R; Petersen, Martin; Rice, Carol; Schubauer-Berigan, Mary K
2011-05-01
To construct a job-exposure matrix (JEM) for an Ohio beryllium processing facility between 1953 and 2006 and to evaluate temporal changes in airborne beryllium exposures. Quantitative area- and breathing-zone-based exposure measurements of airborne beryllium were made between 1953 and 2006 and used by plant personnel to estimate daily weighted average (DWA) exposure concentrations for sampled departments and operations. These DWA measurements were used to create a JEM with 18 exposure metrics, which was linked to the plant cohort consisting of 18,568 unique job, department and year combinations. The exposure metrics ranged from quantitative metrics (annual arithmetic/geometric average DWA exposures, maximum DWA and peak exposures) to descriptive qualitative metrics (chemical beryllium species and physical form) to qualitative assignment of exposure to other risk factors (yes/no). Twelve collapsed job titles with long-term consistent industrial hygiene samples were evaluated using regression analysis for time trends in DWA estimates. Annual arithmetic mean DWA estimates (overall plant-wide exposures including administration, non-production, and production estimates) for the data by decade ranged from a high of 1.39 μg/m(3) in the 1950s to a low of 0.33 μg/m(3) in the 2000s. Of the 12 jobs evaluated for temporal trend, the average arithmetic DWA mean was 2.46 μg/m(3) and the average geometric mean DWA was 1.53 μg/m(3). After the DWA calculations were log-transformed, 11 of the 12 had a statistically significant (p < 0.05) decrease in reported exposure over time. The constructed JEM successfully differentiated beryllium exposures across jobs and over time. This is the only quantitative JEM containing exposure estimates (average and peak) for the entire plant history.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM), when paired with Förster resonance energy transfer (FLIM-FRET), enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors, such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experimental validation. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications that would be infeasible if the full set of time sampling points were used.
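To illustrate the kind of selection the D-optimality criterion performs, the sketch below greedily picks the time points that maximize det(J^T J) for the Jacobian of a toy bi-exponential decay standing in for the FLIM-FRET model. The decay parameters and candidate time grid are assumptions for illustration only.

```python
# Greedy D-optimal selection of a reduced set of time points (illustrative).
import numpy as np

def jacobian(t, a=0.6, tau1=0.5, tau2=2.5):
    """Sensitivities of f(t) = a*exp(-t/tau1) + (1-a)*exp(-t/tau2) w.r.t. (a, tau1, tau2)."""
    e1, e2 = np.exp(-t / tau1), np.exp(-t / tau2)
    d_a = e1 - e2
    d_tau1 = a * t / tau1**2 * e1
    d_tau2 = (1 - a) * t / tau2**2 * e2
    return np.column_stack([d_a, d_tau1, d_tau2])

def greedy_d_optimal(t, n_keep):
    """Greedily add time points that maximise det(J^T J) of the selected subset."""
    J = jacobian(t)
    selected = []
    for _ in range(n_keep):
        best_i, best_det = None, -np.inf
        for i in range(len(t)):
            if i in selected:
                continue
            Js = J[selected + [i]]
            det = np.linalg.det(Js.T @ Js + 1e-12 * np.eye(J.shape[1]))
            if det > best_det:
                best_i, best_det = i, det
        selected.append(best_i)
    return np.sort(t[selected])

t_full = np.linspace(0.05, 10.0, 90)            # 90 candidate time points
print(greedy_d_optimal(t_full, n_keep=10))      # reduced 10-point design
```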
Fernandes, Ricardo; Grootes, Pieter; Nadeau, Marie-Josée; Nehlich, Olaf
2015-07-14
The island cemetery site of Ostorf (Germany) consists of individual human graves containing Funnel Beaker ceramics dating to the Early or Middle Neolithic. However, previous isotope and radiocarbon analysis demonstrated that the Ostorf individuals had a diet rich in freshwater fish. The present study was undertaken to quantitatively reconstruct the diet of the Ostorf population and establish whether dietary habits are consistent with the traditional characterization of a Neolithic diet. Quantitative diet reconstruction was achieved through a novel approach consisting of the use of the Bayesian mixing model Food Reconstruction Using Isotopic Transferred Signals (FRUITS) to model isotope measurements from multiple dietary proxies (δ13Ccollagen, δ15Ncollagen, δ13Cbioapatite, δ34Smethionine, 14Ccollagen). The accuracy of model estimates was verified by comparing the agreement between observed and estimated human dietary radiocarbon reservoir effects. Quantitative diet reconstruction estimates confirm that the Ostorf individuals had a high protein intake due to the consumption of fish and terrestrial animal products. However, FRUITS estimates also show that plant foods represented a significant source of calories. Observed and estimated human dietary radiocarbon reservoir effects are in good agreement provided that the aquatic reservoir effect at Lake Ostorf is taken as reference. The Ostorf population apparently adopted elements associated with a Neolithic culture but adapted to available local food resources and implemented a subsistence strategy that involved a large proportion of fish and terrestrial meat consumption. This case study exemplifies the diversity of subsistence strategies followed during the Neolithic. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.
Peeters, Michael J; Vaidya, Varun A
2016-06-25
Objective. To describe an approach for assessing the Accreditation Council for Pharmacy Education's (ACPE) doctor of pharmacy (PharmD) Standard 4.4, which focuses on students' professional development. Methods. This investigation used mixed methods with triangulation of qualitative and quantitative data to assess professional development. Qualitative data came from an electronic developmental portfolio of professionalism and ethics, completed by PharmD students during their didactic studies. Quantitative confirmation came from the Defining Issues Test (DIT)-an assessment of pharmacists' professional development. Results. Qualitatively, students' development reflections described growth through this course series. Quantitatively, the 2015 PharmD class's DIT N2-scores illustrated positive development overall; the lower 50% had a large initial improvement compared to the upper 50%. Subsequently, the 2016 PharmD class confirmed these average initial improvements of students and also showed further substantial development among students thereafter. Conclusion. Applying an assessment for learning approach, triangulation of qualitative and quantitative assessments confirmed that PharmD students developed professionally during this course series.
Wang, Yi; Peng, Hsin-Chieh; Liu, Jingyue; Huang, Cheng Zhi; Xia, Younan
2015-02-11
Kinetic control is a powerful means of maneuvering the twin structure and shape of metal nanocrystals and thus optimizing their performance in a variety of applications. However, there is only a vague understanding of the explicit roles played by reaction kinetics due to the lack of quantitative information about the kinetic parameters. With Pd as an example, here we demonstrate that kinetic parameters, including the rate constant and activation energy, can be derived from spectroscopic measurements and then used to calculate the initial reduction rate, which in turn can be quantitatively correlated with the twin structure of a seed or nanocrystal. On a quantitative basis, we were able to determine the ranges of initial reduction rates required for the formation of nanocrystals with a specific twin structure, including single-crystal, multiply twinned, and stacking fault-lined. This work represents a major step toward the deterministic synthesis of colloidal noble-metal nanocrystals with specific twin structures and shapes.
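As background on how an initial reduction rate follows from measured kinetic parameters, the sketch below applies the Arrhenius relation together with a simple assumed second-order rate law. The pre-exponential factor, activation energy, and concentrations are placeholders, not the values reported for the Pd system.

```python
# Arrhenius-based initial reduction rate sketch (placeholder parameters).
import numpy as np

R = 8.314  # J mol^-1 K^-1

def rate_constant(A, Ea, T):
    """Arrhenius relation: k = A * exp(-Ea / (R*T))."""
    return A * np.exp(-Ea / (R * T))

def initial_reduction_rate(A, Ea, T, precursor_conc, reductant_conc):
    """Assume a simple second-order rate law: r0 = k * [precursor] * [reductant]."""
    return rate_constant(A, Ea, T) * precursor_conc * reductant_conc

for T in (295.0, 315.0, 335.0):                       # kelvin
    r0 = initial_reduction_rate(A=5e6, Ea=60e3, T=T,
                                precursor_conc=1e-3, reductant_conc=5e-3)
    print(T, r0)   # faster initial reduction at higher temperature
```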
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, W.L.
1991-10-01
In anticipation of the fall 1988 start-up of effluent discharges into Upper Three Runs Creek by the F/H Area Effluent Treatment Facility of the Savannah River Site, Aiken, SC, a two-and-one-half-year biological study was initiated in June 1987. Upper Three Runs Creek is an intensively studied fourth-order stream known for its high species richness. Designed to assess the potential impact of F/H Area effluent on the creek, the study includes qualitative and quantitative macroinvertebrate stream surveys at five sites, chronic toxicity testing of the effluent, water chemistry, and bioaccumulation analysis. This final report presents the results of both pre-operational and post-operational qualitative and quantitative (artificial substrate) macroinvertebrate studies. Six quantitative and three qualitative studies were conducted prior to the initial release of the F/H ETF effluent, and five quantitative and two qualitative studies were conducted post-operationally.
Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A
2018-06-01
Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The linearized multistage model and the future of quantitative risk assessment.
Crump, K S
1996-10-01
The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike those of its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear, the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines, quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead, dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
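For concreteness, the LMS dose-response form discussed above is P(d) = 1 - exp(-(q0 + q1 d + q2 d^2 + ...)), with the low-dose behavior governed by q1. The sketch below fits these coefficients to an invented bioassay data set; it illustrates the functional form only, not regulatory curve-fitting practice (which constrains the coefficients and reports an upper confidence bound on the low-dose slope).

```python
# Sketch of fitting a multistage dose-response curve to toy bioassay data.
import numpy as np
from scipy.optimize import curve_fit

def lms(d, q0, q1, q2):
    """Multistage dose-response with non-negative coefficients."""
    return 1.0 - np.exp(-(q0 + q1 * d + q2 * d ** 2))

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])              # arbitrary units (invented)
incidence = np.array([0.02, 0.05, 0.10, 0.22, 0.55])    # observed tumor fraction (invented)

params, _ = curve_fit(lms, dose, incidence, p0=[0.01, 0.05, 0.01],
                      bounds=(0.0, np.inf))              # keep q_i >= 0
q0, q1, q2 = params
print("fitted q1 (low-dose slope):", q1)
print("extra risk at d = 0.01:", lms(0.01, q0, q1, q2) - lms(0.0, q0, q1, q2))
```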
Periotto, N A; Tundisi, J G
2013-08-01
The aim of this study was to identify and make an initial accounting of the ecosystem services of the hydroelectric power generation plant UHE Carlos Botelho (Itirapina Municipality, São Paulo State, Brazil) and its most extensive wetlands--a total of 2,640 ha--and also to identify the drivers of change of these services. Twenty (20) ecosystem services were identified, and the estimated total quantitative value obtained was USD 120,445,657.87 year(-1), or USD 45,623.35 ha(-1) year(-1). Investments in the restoration of spatial heterogeneity along the Tietê-Jacaré hydrographic basin and new technologies for regional economic activities must maintain ecological functions as well as increase the marginal values of ecosystem services and the potential annual economic return of ecological functions.
2013-01-01
Background Previous studies have reported the lower reference limit (LRL) of quantitative cord glucose-6-phosphate dehydrogenase (G6PD), but they have not used approved international statistical methodology. Using common standards is expected to yield more valid findings. Therefore, we aimed to estimate the LRL of quantitative G6PD detection in healthy term neonates by using statistical analyses endorsed by the International Federation of Clinical Chemistry (IFCC) and the Clinical and Laboratory Standards Institute (CLSI) for reference interval estimation. Methods This cross sectional retrospective study was performed at King Abdulaziz Hospital, Saudi Arabia, between March 2010 and June 2012. The study monitored consecutive neonates born to mothers from one Arab Muslim tribe that was assumed to have a low prevalence of G6PD-deficiency. Neonates that satisfied the following criteria were included: full-term birth (37 weeks); no admission to the special care nursery; no phototherapy treatment; negative direct antiglobulin test; and fathers of female neonates were from the same mothers’ tribe. The G6PD activity (Units/gram Hemoglobin) was measured spectrophotometrically by an automated kit. This study used statistical analyses endorsed by IFCC and CLSI for reference interval estimation. The 2.5th percentiles and the corresponding 95% confidence intervals (CI) were estimated as LRLs, both in the presence and in the absence of outliers. Results 207 male and 188 female term neonates who had quantitative cord blood G6PD testing met the inclusion criteria. Horn's method detected 20 G6PD values as outliers (8 males and 12 females). Distributions of quantitative cord G6PD values exhibited a normal distribution in the absence of the outliers only. The Harris-Boyd method and proportion criteria revealed that combined-gender LRLs were reliable. The combined bootstrap LRL in the presence of the outliers was 10.0 (95% CI: 7.5-10.7) and the combined parametric LRL in the absence of the outliers was 11.0 (95% CI: 10.5-11.3). Conclusion These results contribute to the LRL of quantitative cord G6PD detection in full-term neonates. They are transferable to another laboratory when pre-analytical factors and testing methods are comparable and the IFCC-CLSI requirements of transference are satisfied. We suggest using the LRL estimated in the absence of the outliers, as mislabeling G6PD-deficient neonates as normal is intolerable, whereas mislabeling G6PD-normal neonates as deficient is tolerable. PMID:24016342
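The nonparametric estimate referred to above is essentially the 2.5th percentile of the reference values, with a bootstrap confidence interval for that percentile. The sketch below shows the calculation on simulated activities; the data and sample size are illustrative, not the study cohort.

```python
# Bootstrap lower reference limit (LRL) sketch on synthetic G6PD activities.
import numpy as np

rng = np.random.default_rng(1)
g6pd = rng.normal(loc=14.0, scale=1.8, size=395)      # U/g Hb, synthetic cohort

def bootstrap_lrl(values, n_boot=2000, q=2.5):
    boots = [np.percentile(rng.choice(values, size=len(values), replace=True), q)
             for _ in range(n_boot)]
    point = np.percentile(values, q)                  # 2.5th percentile = LRL
    lo, hi = np.percentile(boots, [2.5, 97.5])        # 95% CI of the LRL itself
    return point, (lo, hi)

lrl, ci = bootstrap_lrl(g6pd)
print(f"LRL = {lrl:.1f} U/g Hb, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```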
Waldmann, P; García-Gil, M R; Sillanpää, M J
2005-06-01
Comparison of the level of differentiation at neutral molecular markers (estimated as F(ST) or G(ST)) with the level of differentiation at quantitative traits (estimated as Q(ST)) has become a standard tool for inferring differential selection between populations. We estimated Q(ST) of the timing of bud set from a latitudinal cline of Pinus sylvestris with a Bayesian hierarchical variance component method utilizing information on the pre-estimated population structure from neutral molecular markers. Unfortunately, the between-family variances differed substantially between populations, which resulted in a bimodal posterior of Q(ST) that could not be compared in any sensible way with the unimodal posterior of the microsatellite F(ST). In order to avoid publishing studies with flawed Q(ST) estimates, we recommend that future studies present heritability estimates for each trait and population. Moreover, to detect variance heterogeneity in frequentist methods (ANOVA and REML), it is essential to check that the residuals are normally distributed and do not follow any systematically deviating trends.
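For readers unfamiliar with the comparison above, Q_ST is computed from variance components of the quantitative trait as V_between / (V_between + 2 V_additive,within) and is then contrasted with marker-based F_ST. The sketch below uses invented variance components; in the study these come from a Bayesian hierarchical model of family-structured data.

```python
# Minimal Q_ST vs F_ST comparison sketch (invented variance components).
def q_st(v_between, v_additive_within):
    """Q_ST from among- and within-population additive genetic variances."""
    return v_between / (v_between + 2.0 * v_additive_within)

v_between = 0.8    # among-population genetic variance of the trait (assumed)
v_within = 1.5     # additive genetic variance within populations (assumed)
f_st = 0.02        # neutral-marker differentiation, e.g. from microsatellites (assumed)

qst = q_st(v_between, v_within)
print("Q_ST =", round(qst, 3))
if qst > f_st:
    print("Q_ST > F_ST: consistent with diversifying selection on the trait")
else:
    print("Q_ST <= F_ST: no evidence of diversifying selection")
```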
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
Dynamic deformations and the M6.7, Northridge, California earthquake
Gomberg, J.
1997-01-01
A method of estimating the complete time-varying dynamic deformation field from commonly available three-component, single-station seismic data has been developed and applied to study the relationship between dynamic deformation and ground failures and structural damage using observations from the 1994 Northridge, California earthquake. Estimates from throughout the epicentral region indicate that the horizontal strains exceed the vertical ones by more than a factor of two. The largest strains (exceeding ~100 μstrain) correlate with regions of greatest ground failure. There is a poor correlation between structural damage and peak strain amplitudes. The smallest strains, ~35 μstrain, are estimated in regions of no damage or ground failure. Estimates in the two regions with the most severe and best-mapped permanent deformation, Potrero Canyon and the Granada-Mission Hills regions, exhibit the largest strains; peak horizontal strain estimates in these regions equal ~139 and ~229 μstrain, respectively. Of note, the dynamic principal strain axes have strikes consistent with the permanent failure features, suggesting that, while gravity, sub-surface materials, and hydrologic conditions undoubtedly played fundamental roles in determining where and what types of failures occurred, the dynamic deformation field may have been favorably sized and oriented to initiate failure processes. These results support other studies that conclude that the permanent deformation resulted from ground shaking, rather than from static strains associated with primary or secondary faulting. They also suggest that such an analysis, using either data or theoretical calculations, may enable observations of paleo-ground failure to be used as quantitative constraints on the size and geometry of previous earthquakes. © 1997 Elsevier Science Limited.
Regional landfills methane emission inventory in Malaysia.
Abushammala, Mohammed F M; Noor Ezlin Ahmad Basri; Basri, Hassan; Ahmed Hussein El-Shafie; Kadhum, Abdul Amir H
2011-08-01
The decomposition of municipal solid waste (MSW) in landfills under anaerobic conditions produces landfill gas (LFG) containing approximately 50-60% methane (CH(4)) and 30-40% carbon dioxide (CO(2)) by volume. CH(4) has a global warming potential 21 times greater than that of CO(2); thus, it poses a serious environmental problem. As landfills are the main method of waste disposal in Malaysia, the major aim of this study was to estimate the total CH(4) emissions from landfills in all Malaysian regions and states for the year 2009 using the IPCC 1996 first-order decay (FOD) model, focusing on clean development mechanism (CDM) project applications to initiate emission reductions. Furthermore, the authors attempted to assess, in quantitative terms, the amount of CH(4) that would be emitted from landfills in the period 1981-2024 using the IPCC 2006 FOD model. The total CH(4) emission estimated using the IPCC 1996 model was 318.8 Gg in 2009. The Northern region had the highest CH(4) emission inventory, with 128.8 Gg, whereas the Borneo region had the lowest, with 24.2 Gg. Pulau Penang state was estimated to have produced the highest CH(4) emission, 77.6 Gg, followed by the remaining states with emission values ranging from 38.5 to 1.5 Gg. Based on the IPCC 1996 FOD model, the total Malaysian CH(4) emission was forecast to be 397.7 Gg by 2020. The IPCC 2006 FOD model estimated a 201 Gg CH(4) emission in 2009, with estimates ranging from 98 Gg in 1981 to 263 Gg in 2024.
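A first-order decay model of the kind referred to above treats the degradable carbon deposited each year as decomposing exponentially, so a given year's emission sums the decayed contributions of all earlier deposits. The sketch below is a deliberately simplified illustration with placeholder parameters; it omits several correction factors of the full IPCC method.

```python
# Simplified first-order decay (FOD) landfill CH4 sketch (placeholder parameters).
import numpy as np

def fod_ch4_emissions(annual_waste, ddoc_fraction=0.12, ch4_fraction=0.5, k=0.09):
    """Return yearly CH4 (same mass units as waste) for each calendar year."""
    years = len(annual_waste)
    emissions = np.zeros(years)
    for dep_year, waste in enumerate(annual_waste):
        ddocm = waste * ddoc_fraction                 # decomposable carbon deposited
        for year in range(dep_year, years):
            age = year - dep_year
            decomposed = ddocm * (np.exp(-k * age) - np.exp(-k * (age + 1)))
            emissions[year] += decomposed * ch4_fraction * 16.0 / 12.0  # C -> CH4 mass
    return emissions

waste = np.full(30, 1.0e6)                            # tonnes landfilled per year (assumed)
print(fod_ch4_emissions(waste)[-5:])                  # emissions in the last 5 years
```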
NASA Astrophysics Data System (ADS)
Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.
2018-05-01
The efficiency of removal of volatile organic impurities in air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of the "toluene coefficient" characterizing the relative reactivity of a component as compared to toluene is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of diversified yield of the removal process. Such an approach makes it possible to substantially intensify the determination of energy parameters of removal of impurities and can also serve as a criterion for estimating the effectiveness of various methods in which a nonequilibrium plasma is used for air cleaning from volatile impurities.
QFASAR: Quantitative fatty acid signature analysis with R
Bromaghin, Jeffrey F.
2017-01-01
Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.
Austin, Peter C
2018-01-01
Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g. pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so, we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Of the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of the systems and algorithms being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is to compare their performance on the end task required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. if we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition
NASA Astrophysics Data System (ADS)
Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.
2016-12-01
Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
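The unmixing step at the core of SMA can be sketched as a non-negative least-squares fit of endmember spectra to a mixed pixel; the endmember and pixel values below are hypothetical and greatly simplified relative to AVIRIS spectra:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember reflectance spectra (columns: GV, NPV, substrate)
# over a handful of bands; real AVIRIS spectra have hundreds of bands.
endmembers = np.array([
    [0.05, 0.25, 0.30],
    [0.45, 0.30, 0.32],
    [0.40, 0.35, 0.35],
    [0.20, 0.40, 0.38],
    [0.10, 0.45, 0.40],
])
pixel = np.array([0.20, 0.38, 0.37, 0.31, 0.28])   # observed mixed spectrum

# Non-negative least squares gives raw fractions; normalizing enforces sum-to-one.
fractions, rmse = nnls(endmembers, pixel)
fractions = fractions / fractions.sum()
print("GV, NPV, substrate fractions:", np.round(fractions, 3), "residual:", round(rmse, 4))
# MESMA would repeat this fit for many candidate endmember sets per pixel and
# keep the combination with the lowest residual.
```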
Computational Study of the Richtmyer-Meshkov Instability with a Complex Initial Condition
NASA Astrophysics Data System (ADS)
McFarland, Jacob; Reilly, David; Greenough, Jeffrey; Ranjan, Devesh
2014-11-01
Results are presented for a computational study of the Richtmyer-Meshkov instability with a complex initial condition. This study covers experiments which will be conducted at the newly-built inclined shock tube facility at the Georgia Institute of Technology. The complex initial condition employed consists of an underlying inclined interface perturbation with a broadband spectrum of modes superimposed. A three-dimensional staggered mesh arbitrary Lagrange Eulerian (ALE) hydrodynamics code developed at Lawrence Livermore National Laboratory, called ARES, was used to obtain both qualitative and quantitative results. Qualitative results are discussed using time series of density plots from which mixing width may be extracted. Quantitative results are also discussed using vorticity fields, circulation components, and energy spectra. The inclined interface case is compared to the complex interface case in order to study the effect of initial conditions on shocked, variable-density flows.
Initial Results of Illinois' Shifting Gears Pilot Demonstration Evaluation
ERIC Educational Resources Information Center
Bragg, Debra D.; Harmon, Timothy; Kirby, Catherine L.; Kim, Sujung
2009-01-01
This report provides initial results of Illinois' Shifting Gears Initiative that operated between July 1, 2007 and June 30, 2009. This mixed method (qualitative and quantitative) evaluation sought to accomplish three goals: (1) to assess program and student outcomes for two models (adult education and developmental education) for two target groups…
NASA Technical Reports Server (NTRS)
Cole, Tony A.; Wanik, David W.; Molthan, Andrew L.; Roman, Miguel O.; Griffin, Robert E.
2017-01-01
Natural and anthropogenic hazards are frequently responsible for disaster events, leading to damaged physical infrastructure, which can result in loss of electrical power for affected locations. Remotely-sensed, nighttime satellite imagery from the Suomi National Polar-orbiting Partnership (Suomi-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) can monitor power outages in disaster-affected areas through the identification of missing city lights. When combined with locally-relevant geospatial information, these observations can be used to estimate power outages, defined as geographic locations requiring manual intervention to restore power. In this study, we produced a power outage product based on Suomi-NPP VIIRS DNB observations to estimate power outages following Hurricane Sandy in 2012. This product, combined with known power outage data and ambient population estimates, was then used to predict power outages in a layered, feedforward neural network model. We believe this is the first attempt to synergistically combine such data sources to quantitatively estimate power outages. The VIIRS DNB power outage product was able to identify initial loss of light following Hurricane Sandy, as well as the gradual restoration of electrical power. The neural network model predicted power outages with reasonable spatial accuracy, achieving Pearson coefficients (r) between 0.48 and 0.58 across all folds. Our results show promise for producing a continental United States (CONUS)- or global-scale power outage monitoring network using satellite imagery and locally-relevant geospatial data.
Reiffsteck, A; Dehennin, L; Scholler, R
1982-11-01
Estrone, 2-methoxyestrone and estradiol-17 beta have been definitely identified in seminal plasma of man, bull, boar and stallion by high resolution gas chromatography associated with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues and by monitoring molecular ions of trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported together with the statistical evaluation of accuracy and precision.
Quantitative trait nucleotide analysis using Bayesian model selection.
Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D
2005-10-01
Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
A quantitative microbial risk assessment for center pivot irrigation of dairy wastewaters
USDA-ARS?s Scientific Manuscript database
In the western United States where livestock wastewaters are commonly land applied, there are concerns over individuals being exposed to airborne pathogens. In response, a quantitative microbial risk assessment (QMRA) was performed to estimate infectious risks from inhaling pathogens aerosolized dur...
76 FR 50904 - Thiamethoxam; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... exposure and risk. A separate assessment was done for clothianidin. i. Acute exposure. Quantitative acute... not expected to pose a cancer risk, a quantitative dietary exposure assessment for the purposes of...-dietary sources of post application exposure to obtain an estimate of potential combined exposure. These...
Quantitative Analysis of Radar Returns from Insects
NASA Technical Reports Server (NTRS)
Riley, J. R.
1979-01-01
When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density can be developed. Accurate measurements of heading distribution using a rotating polarization radar to enhance the wingbeat frequency method of identification are presented.
Bound Pool Fractions Complement Diffusion Measures to Describe White Matter Micro and Macrostructure
Stikov, Nikola; Perry, Lee M.; Mezer, Aviv; Rykhlevskaia, Elena; Wandell, Brian A.; Pauly, John M.; Dougherty, Robert F.
2010-01-01
Diffusion imaging and bound pool fraction (BPF) mapping are two quantitative magnetic resonance imaging techniques that measure microstructural features of the white matter of the brain. Diffusion imaging provides a quantitative measure of the diffusivity of water in tissue. BPF mapping is a quantitative magnetization transfer (qMT) technique that estimates the proportion of exchanging protons bound to macromolecules, such as those found in myelin, and is thus a more direct measure of myelin content than diffusion. In this work, we combine BPF estimates of macromolecular content with measurements of diffusivity within human white matter tracts. Within the white matter, the correlation between BPFs and diffusivity measures such as fractional anisotropy and radial diffusivity was modest, suggesting that diffusion tensor imaging and bound pool fractions are complementary techniques. We found that several major tracts have high BPF, suggesting a higher density of myelin in these tracts. We interpret these results in the context of a quantitative tissue model. PMID:20828622
Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D
2015-08-01
Quantitative computed tomography based subject-specific finite element modeling has the potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain initiation. Calculation of bone elastic moduli from image data is a basic step when constructing finite element models. However, different relationships between elastic moduli and imaged density (known as density-modulus relationships) have been reported in the literature. The objective of this study was to apply seven different trabecular-specific and two cortical-specific density-modulus relationships from the literature to finite element models of proximal tibia subchondral bone, and identify the relationship(s) that best predicted experimentally measured local subchondral structural stiffness with highest explained variance and least error. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using published density-modulus relationships and mapped to corresponding finite element models. Proximal tibial structural stiffness values were compared to experimentally measured stiffness values from in-situ macro-indentation testing directly on the subchondral bone surface (47 indentation points). Regression lines between experimentally measured and finite element calculated stiffness had R² values ranging from 0.56 to 0.77. Normalized root mean squared error varied from 16.6% to 337.6%. Of the 21 evaluated density-modulus relationships in this study, Goulet combined with Snyder and Schneider or Rho appeared most appropriate for finite element modeling of local subchondral bone structural stiffness. However, further studies are needed to optimize density-modulus relationships and improve finite element estimates of local subchondral bone structural stiffness. Copyright © 2015 Elsevier Ltd. All rights reserved.
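The density-to-modulus conversion step can be illustrated with a generic power-law mapping; the coefficients below are placeholders rather than any of the specific published relationships evaluated in the study:

```python
import numpy as np

def density_to_modulus(rho_g_cm3, a=10500.0, b=2.29):
    """Map imaged bone density to an elastic modulus (MPa) with a power law
    E = a * rho**b; the coefficients here are illustrative only, not the
    Goulet, Snyder and Schneider, or Rho relationships compared in the study."""
    return a * np.asarray(rho_g_cm3) ** b

# Example: convert a small set of voxel densities before assigning them
# to finite element material properties.
rho = np.array([0.15, 0.30, 0.60, 1.20])
print(np.round(density_to_modulus(rho), 1))
```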
Gölitz, P; Muehlen, I; Gerner, S T; Knossalla, F; Doerfler, A
2018-06-01
Mechanical thrombectomy is supported by high-level evidence in stroke therapy; however, successful recanalization does not guarantee a favorable clinical outcome. We aimed to quantitatively assess the reperfusion status ultraearly after successful middle cerebral artery (MCA) recanalization to identify flow parameters that could potentially predict clinical outcome. Sixty-seven stroke patients with acute MCA occlusion, undergoing recanalization, were enrolled. Using parametric color coding, a post-processing algorithm, pre- and post-interventional digital subtraction angiography series were evaluated concerning the following parameters: pre- and post-procedural cortical relative time to peak (rTTP) of MCA territory, reperfusion time, and index. Functional long-term outcome was assessed by the 90-day modified Rankin Scale score (mRS; favorable: 0-2). Cortical rTTP was significantly shorter before (3.33 ± 1.36 seconds; P = .03) and after intervention (2.05 ± 0.70 seconds; P = .003) in patients with favorable clinical outcome. Additionally, age (P = .005) and initial National Institutes of Health Stroke Scale score (P = .02) were significantly different between the patients, whereas reperfusion index and time as well as initially estimated infarct size were not. In multivariate analysis, only post-procedural rTTP (P = .005) was independently associated with favorable clinical outcome. A post-procedural rTTP of 2.29 seconds might be a threshold to predict favorable clinical outcome. Ultraearly quantitative assessment of reperfusion status after successful MCA recanalization reveals post-procedural cortical rTTP as a possible independent prognostic marker for favorable clinical outcome, and even determining a threshold value might be possible. In consequence, focusing stroke therapy on microcirculatory patency could be valuable to improve outcome. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A prioritization of generic safety issues. Supplement 19, Revision insertion instructions
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1995-11-01
The report presents the safety priority ranking for generic safety issues related to nuclear power plants. The purpose of these rankings is to assist in the timely and efficient allocation of NRC resources for the resolution of those safety issues that have a significant potential for reducing risk. The safety priority rankings are HIGH, MEDIUM, LOW, and DROP, and have been assigned on the basis of risk significance estimates, the ratio of risk to costs and other impacts estimated to result if resolution of the safety issues were implemented, and the consideration of uncertainties and other quantitative or qualitative factors. To the extent practical, estimates are quantitative. This document provides revisions and amendments to the report.
Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D
2008-09-01
Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.
The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts, this includes defining quantitative linkages between the molecular initiating event (MIE) and subsequent key events...
Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M
2017-08-01
Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists, as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using a standardized quantitative analysis software (MIMneuro) to generate whole brain and region of interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data together ("VisQ Read"). Cohen kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
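The quantitative rule described above (elevated if at least 2 of 6 regions of interest exceed an SUV ratio of 1.1) can be expressed directly; the region names and values below are hypothetical:

```python
def quantitative_read(suvr_by_roi, threshold=1.1, min_regions=2):
    """Return 'elevated' if at least `min_regions` of the regional SUV ratios
    exceed `threshold`, following the rule described for the Quantitative Read."""
    n_above = sum(1 for v in suvr_by_roi.values() if v > threshold)
    return "elevated" if n_above >= min_regions else "nonelevated"

# Hypothetical regional SUV ratios for one scan (region names are illustrative).
scan = {"precuneus": 1.18, "frontal": 1.05, "parietal": 1.12,
        "temporal": 1.02, "anterior_cingulate": 1.08, "posterior_cingulate": 1.09}
print(quantitative_read(scan))   # -> 'elevated' (two regions exceed 1.1)
```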
Fakhri, Georges El
2011-01-01
82Rb cardiac PET allows the assessment of myocardial perfusion using a column generator in clinics that lack a cyclotron. We and others have previously shown that quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) is feasible using dynamic 82Rb PET and factor and compartment analyses. The aim of the present work was to determine the intra- and inter-observer variability of MBF estimation using 82Rb PET as well as the reproducibility of our generalized factor + compartment analyses methodology to estimate MBF and assess its accuracy by comparing, in the same subjects, 82Rb estimates of MBF to those obtained using 13N-ammonia. Methods Twenty-two subjects were included in the reproducibility and twenty subjects in the validation study. Patients were injected with 60 ± 5 mCi of 82Rb and imaged dynamically for 6 minutes at rest and during dipyridamole stress. Left and right ventricular (LV+RV) time-activity curves were estimated by GFADS and used as input to a 2-compartment kinetic analysis that estimates parametric maps of myocardial tissue extraction (K1) and egress (k2), as well as LV+RV contributions (fv,rv). Results Our results show excellent reproducibility of the quantitative dynamic approach itself with coefficients of repeatability of 1.7% for estimation of MBF at rest, 1.4% for MBF at peak stress and 2.8% for CFR estimation. The inter-observer reproducibility between the four observers who participated in this study was also very good with correlation coefficients greater than 0.87 between any two given observers when estimating coronary flow reserve. The reproducibility of MBF in repeated 82Rb studies was good at rest and excellent at peak stress (r² = 0.835). Furthermore, the slope of the correlation line was very close to 1 when estimating stress MBF and CFR in repeated 82Rb studies. The correlation between myocardial flow estimates obtained at rest and during peak stress in 82Rb and 13N-ammonia studies was very good at rest (r² = 0.843) and stress (r² = 0.761). The Bland-Altman plots show no significant presence of proportional error at rest or stress, nor a dependence of the variations on the amplitude of the myocardial blood flow at rest or stress. A small systematic overestimation of 13N-ammonia MBF was observed with 82Rb at rest (0.129 ml/g/min) and the opposite, i.e., underestimation, at stress (0.22 ml/g/min). Conclusions Our results show that absolute quantitation of myocardial blood flow is reproducible and accurate with 82Rb dynamic cardiac PET as compared to 13N-ammonia. The reproducibility of the quantitation approach itself was very good as well as inter-observer reproducibility. PMID:19525467
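A hedged sketch of the kind of kinetic fit described (tissue uptake K1 and egress k2 with blood-pool spillover fractions), using a simplified one-tissue model and synthetic input curves rather than the GFADS-derived factors of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 360, 73)          # seconds: 5-s frames over a 6-minute acquisition
dt = t[1] - t[0]

# Synthetic stand-ins for the LV and RV input time-activity curves (arbitrary units).
c_lv = (t / 30.0) * np.exp(-t / 30.0)
c_rv = (t / 20.0) * np.exp(-t / 25.0)

def model(tt, K1, k2, fv, rv):
    """One-tissue model with blood-pool spillover: tissue activity is the LV input
    convolved with K1*exp(-k2*t); the measured curve mixes tissue, LV and RV blood."""
    tissue = K1 * dt * np.convolve(c_lv, np.exp(-k2 * tt))[: tt.size]
    return (1 - fv - rv) * tissue + fv * c_lv + rv * c_rv

# Simulate a noisy myocardial curve and refit it to recover the parameters.
rng = np.random.default_rng(1)
measured = model(t, 0.7, 0.05, 0.25, 0.05) + rng.normal(scale=0.01, size=t.size)

popt, _ = curve_fit(model, t, measured, p0=[0.5, 0.1, 0.2, 0.05],
                    bounds=([0, 0, 0, 0], [3, 1, 1, 1]))
print("K1, k2, fv, rv estimates:", np.round(popt, 3))
# MBF would then follow from K1 via a tracer-specific extraction correction.
```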
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Uncertainty estimation of diffusion parameters from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameters uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameters uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
NASA Astrophysics Data System (ADS)
Sarradj, Ennes
2010-04-01
Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimations of absolute source levels and in some cases also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
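The central step, an eigendecomposition of the cross-spectral matrix with source power taken from the signal subspace, can be sketched as follows; the array size, steering vector, and data are synthetic, and this is not the full proposed algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical narrow-band snapshots for a small array: one source plus noise.
n_mics, n_snapshots = 8, 200
steering = np.exp(1j * 2 * np.pi * rng.random(n_mics))   # toy unit-modulus steering vector
source = rng.normal(size=n_snapshots) + 1j * rng.normal(size=n_snapshots)
noise = 0.1 * (rng.normal(size=(n_mics, n_snapshots))
               + 1j * rng.normal(size=(n_mics, n_snapshots)))
snapshots = np.outer(steering, source) + noise

# Cross-spectral matrix averaged over snapshots.
csm = snapshots @ snapshots.conj().T / n_snapshots

# Eigendecomposition: the largest eigenvalues span the signal subspace and
# carry the source power; the remainder approximate the noise floor.
eigvals, eigvecs = np.linalg.eigh(csm)
eigvals = eigvals[::-1]                      # descending order
n_sources = 1
signal_power = eigvals[:n_sources].sum() - eigvals[n_sources:].mean() * n_sources
print("estimated source-subspace power:", round(float(signal_power), 3))
```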
Werner, Benjamin; Scott, Jacob G; Sottoriva, Andrea; Anderson, Alexander R A; Traulsen, Arne; Altrock, Philipp M
2016-04-01
Many tumors are hierarchically organized and driven by a subpopulation of tumor-initiating cells (TIC), or cancer stem cells. TICs are uniquely capable of recapitulating the tumor and are thought to be highly resistant to radio- and chemotherapy. Macroscopic patterns of tumor expansion before treatment and tumor regression during treatment are tied to the dynamics of TICs. Until now, the quantitative information about the fraction of TICs from macroscopic tumor burden trajectories could not be inferred. In this study, we generated a quantitative method based on a mathematical model that describes hierarchically organized tumor dynamics and patient-derived tumor burden information. The method identifies two characteristic equilibrium TIC regimes during expansion and regression. We show that tumor expansion and regression curves can be leveraged to infer estimates of the TIC fraction in individual patients at detection and after continued therapy. Furthermore, our method is parameter-free; it solely requires the knowledge of a patient's tumor burden over multiple time points to reveal microscopic properties of the malignancy. We demonstrate proof of concept in the case of chronic myeloid leukemia (CML), wherein our model recapitulated the clinical history of the disease in two independent patient cohorts. On the basis of patient-specific treatment responses in CML, we predict that after one year of targeted treatment, the fraction of TICs increases 100-fold and continues to increase up to 1,000-fold after 5 years of treatment. Our novel framework may significantly influence the implementation of personalized treatment strategies and has the potential for rapid translation into the clinic. Cancer Res; 76(7); 1705-13. ©2016 AACR. ©2016 American Association for Cancer Research.
Quantitative degassing of gas hydrate-bearing pressure cores from Green Canyon 955, Gulf of Mexico
NASA Astrophysics Data System (ADS)
Phillips, S. C.; Holland, M. E.; Flemings, P. B.; Schultheiss, P. J.; Waite, W. F.; Petrou, E. G.; Jang, J.; Polito, P. J.; O'Connell, J.; Dong, T.; Meazell, K.
2017-12-01
We present results from 20 quantitative degassing experiments of pressure-core sections collected during Expedition UT-GOM2-1 from Green Canyon 955 in the northern Gulf of Mexico. These experiments highlight an average pore-space methane hydrate saturation, Sh, of 59% (min: 12%; max 87%) in sediments between 413 and 440 mbsf in 2032 m water depth. There is a strong lithofacies control of hydrate saturation within the reservoir, with a high saturation sandy silt facies (Sh of 65 to 87%) interbedded with a low saturation clayey silt facies (Sh of 12 to 30%). Bedding occurs on the scale of tens of centimeters. Outside of the main hydrate reservoir, methane hydrate occurs in low saturations (Sh of 0.8 to 3%). Hydrate saturations exhibit a strong correlation (R2=0.89) with the average P-wave velocity measured through the degassed sections. These preliminary hydrate saturations were calculated assuming a porosity of 40% with core filling the full internal diameter of the core liner. Gas recovered during these experiments is composed of almost entirely methane, with an average of 94 ppm ethane and detectable, but not quantifiable, propane. Degassed pressure cores were depressurized through a manifold by the stepwise release of fluid, and the volumes of produced gas and water were monitored. The core's hydrostatic pressure was measured and recorded continuously at the manifold. Pressure and temperature were also measured by data storage tags within the sample chambers. Two slow, multi-day degassing experiments were performed to estimate the in situ salinity within core sections. Based on temperature and pressure observations at the point of the initial pressure rebound due to hydrate dissociation, we estimate the salinity within these samples to be between 33 and 42 g kg-1.
Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...
Code of Federal Regulations, 2010 CFR
2010-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2013 CFR
2013-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2014 CFR
2014-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2011 CFR
2011-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2012 CFR
2012-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
Marques, João P; Gener, Isabelle; Ayrault, Philippe; Lopes, José M; Ribeiro, F Ramôa; Guisnet, Michel
2004-10-21
A simple method based on the characterization (composition, Bronsted and Lewis acidities) of acid treated HBEA zeolites was developed for estimating the concentrations of framework, extraframework and defect Al species.
Reliability and precision of pellet-group counts for estimating landscape-level deer density
David S. deCalesta
2013-01-01
This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...
REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING CANCER RISKS FROM EXPOSURE TO IONIZING RADIATION
In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainti...
Diffusion Coefficients of Endogenous Cytosolic Proteins from Rabbit Skinned Muscle Fibers
Carlson, Brian E.; Vigoreaux, Jim O.; Maughan, David W.
2014-01-01
Efflux time courses of endogenous cytosolic proteins were obtained from rabbit psoas muscle fibers skinned in oil and transferred to physiological salt solution. Proteins were separated by gel electrophoresis and compared to load-matched standards for quantitative analysis. A radial diffusion model incorporating the dissociation and dissipation of supramolecular complexes accounts for an initial lag and subsequent efflux of glycolytic and glycogenolytic enzymes. The model includes terms representing protein crowding, myofilament lattice hindrance, and binding to the cytomatrix. Optimization algorithms returned estimates of the apparent diffusion coefficients, D(r,t), that were very low at the onset of diffusion (∼10⁻¹⁰ cm² s⁻¹) but increased with time as cytosolic protein density, which was initially high, decreased. D(r,t) at later times ranged from 2.11 × 10⁻⁷ cm² s⁻¹ (parvalbumin) to 0.20 × 10⁻⁷ cm² s⁻¹ (phosphofructose kinase), values that are 3.6- to 12.3-fold lower than those predicted in bulk water. The low initial values are consistent with the presence of complexes in situ; the higher later values are consistent with molecular sieving and transient binding of dissociated proteins. Channeling of metabolic intermediates via enzyme complexes may enhance production of adenosine triphosphate at rates beyond that possible with randomly and/or sparsely distributed enzymes, thereby matching supply with demand. PMID:24559981
NASA Astrophysics Data System (ADS)
Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc
2012-11-01
Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
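Two of the incidence-based estimators compared above can be written compactly; the incidence matrix below is synthetic:

```python
import numpy as np

def chao2(incidence):
    """Chao2 incidence-based richness estimator (bias-corrected form); the classic
    form is S_obs + q1^2 / (2*q2), with q1 and q2 the numbers of species found in
    exactly one and exactly two sampling units."""
    m = incidence.shape[0]
    counts = (incidence > 0).sum(axis=0)         # sampling units per species
    s_obs = (counts > 0).sum()
    q1, q2 = (counts == 1).sum(), (counts == 2).sum()
    return s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))

def jackknife1(incidence):
    """First-order jackknife: S_obs + q1 * (m - 1) / m for m sampling units."""
    m = incidence.shape[0]
    counts = (incidence > 0).sum(axis=0)
    s_obs = (counts > 0).sum()
    q1 = (counts == 1).sum()
    return s_obs + q1 * (m - 1) / m

# Hypothetical incidence matrix: rows = 1-m2 plots, columns = species (0/1).
rng = np.random.default_rng(3)
plots = (rng.random((50, 30)) < rng.uniform(0.02, 0.4, size=30)).astype(int)
print("observed:", (plots.sum(axis=0) > 0).sum(),
      "Chao2:", round(chao2(plots), 1),
      "Jack1:", round(jackknife1(plots), 1))
```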
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J. G.
While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals are all straightforwardly calculated once the elastic constants of anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder to compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here with the main point being to show whether or not this extra work (i.e., in calculating both the HS bounds and the SC estimates) does provide added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small however, being on the order of ±1%. These quantitative results hold true even though these polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
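For reference, the straightforwardly calculated Voigt, Reuss, and Hill estimates mentioned above can be computed from a 6x6 single-crystal stiffness matrix as follows (the HS bounds and SC estimators require the more involved computations discussed in the abstract); the example constants are merely illustrative of a cubic crystal of roughly copper-like magnitude:

```python
import numpy as np

def voigt_reuss_hill(C):
    """Voigt, Reuss, and Hill (arithmetic-mean) estimates of the isotropic bulk
    and shear moduli of a random polycrystal from the 6x6 single-crystal
    stiffness matrix C in Voigt notation."""
    S = np.linalg.inv(C)
    ii = [0, 1, 2]
    ij = [(0, 1), (0, 2), (1, 2)]
    kk = [3, 4, 5]
    Cii, Cij = sum(C[i, i] for i in ii), sum(C[i, j] for i, j in ij)
    Ckk = sum(C[k, k] for k in kk)
    Sii, Sij = sum(S[i, i] for i in ii), sum(S[i, j] for i, j in ij)
    Skk = sum(S[k, k] for k in kk)
    K_V = (Cii + 2 * Cij) / 9
    G_V = (Cii - Cij + 3 * Ckk) / 15
    K_R = 1 / (Sii + 2 * Sij)
    G_R = 15 / (4 * Sii - 4 * Sij + 3 * Skk)
    return {"K": (K_V, K_R, (K_V + K_R) / 2), "G": (G_V, G_R, (G_V + G_R) / 2)}

# Illustrative cubic stiffness constants (GPa).
C11, C12, C44 = 168.0, 121.0, 75.0
C = np.zeros((6, 6))
C[:3, :3] = C12
for i in range(3):
    C[i, i] = C11
for k in range(3, 6):
    C[k, k] = C44
print(voigt_reuss_hill(C))   # Voigt, Reuss, Hill values for K and G
```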
Oxidative DNA damage background estimated by a system model of base excision repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Wilson, III, D M
Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
Effects of finite spatial resolution on quantitative CBF images from dynamic PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phelps, M.E.; Huang, S.C.; Mahoney, D.K.
1985-05-01
The finite spatial resolution of PET causes the time-activity responses on pixels around the boundaries between gray and white matter regions to contain kinetic components from tissues of different CBF's. CBF values estimated from kinetics of such mixtures are underestimated because of the nonlinear relationship between the time-activity response and the estimated CBF. Computer simulation is used to investigate these effects on phantoms of circular structures and realistic brain slice in terms of object size and quantitative CBF values. The CBF image calculated is compared to the case of having resolution loss alone. Results show that the size of a high flow region in the CBF image is decreased while that of a low flow region is increased. For brain phantoms, the qualitative appearance of CBF images is not seriously affected, but the estimated CBF's are underestimated by 11 to 16 percent in local gray matter regions (of size 1 cm²) with about 14 percent reduction in global CBF over the whole slice. It is concluded that the combined effect of finite spatial resolution and the nonlinearity in estimating CBF from dynamic PET is quite significant and must be considered in processing and interpreting quantitative CBF images.
Development of risk-based nanomaterial groups for occupational exposure control
NASA Astrophysics Data System (ADS)
Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.
2012-09-01
Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.
Quantifying Impact of Chromosome Copy Number on Recombination in Escherichia coli.
Reynolds, T Steele; Gill, Ryan T
2015-07-17
The ability to precisely and efficiently recombineer synthetic DNA into organisms of interest in a quantitative manner is a key requirement in genome engineering. Even though considerable effort has gone into the characterization of recombination in Escherichia coli, there is still substantial variability in reported recombination efficiencies. We hypothesized that this observed variability could, in part, be explained by the variability in chromosome copy number as well as the location of the replication forks relative to the recombination site. During rapid growth, E. coli cells may contain several pairs of open replication forks. While recombineered forks are resolving and segregating within the population, changes in apparent recombineering efficiency should be observed. In the case of dominant phenotypes, we predicted and then experimentally confirmed that the apparent recombination efficiency declined during recovery until complete segregation of recombineered and wild-type genomes had occurred. We observed the reverse trend for recessive phenotypes. The observed changes in apparent recombination efficiency were found to be in agreement with mathematical calculations based on our proposed mechanism. We also provide a model that can be used to estimate the total segregated recombination efficiency based on an initial efficiency and growth rate. These results emphasize the importance of employing quantitative strategies in the design of genome-scale engineering efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus), to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL’s focus toward this broader goal was to refine and apply experimental and computational tools needed to provide quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates for different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescent microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISSD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between biocompatibility of nanoparticles and their physical form in a controlled manner.
Shah, Ashesh; Coste, Jérôme; Lemaire, Jean-Jacques; Taub, Ethan; Schüpbach, W M Michael; Pollo, Claudio; Schkommodau, Erik; Guzman, Raphael; Hemm-Ode, Simone
2017-05-01
Deep brain stimulation (DBS) surgery is extensively used in the treatment of movement disorders. Nevertheless, methods to evaluate the clinical response during intraoperative stimulation tests to identify the optimal position for the implantation of the chronic DBS lead remain subjective. In this paper, we describe a new, versatile method for quantitative intraoperative evaluation of improvement in tremor with an acceleration sensor that is mounted on the patient's wrist during surgery. At each anatomical test position, the improvement in tremor compared to the initial tremor is estimated on the basis of extracted outcome measures. This method was tested on 15 tremor patients undergoing DBS surgery in two centers. Data from 359 stimulation tests were acquired. Our results suggest that accelerometric evaluation detects tremor changes more sensitively than subjective visual ratings. The effective stimulation current amplitudes identified from the quantitative data (1.1 ± 0.8 mA) are lower than those identified by visual evaluation (1.7 ± 0.8 mA) for similar improvement in tremor. Additionally, if these data had been used to choose the chronic implant position of the DBS lead, 15 of the 26 choices would have been different. These results show that our method of accelerometric evaluation can potentially improve DBS targeting.
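One plausible accelerometer-based outcome measure, the relative reduction in tremor-band power, can be sketched as follows; the sampling rate, band limits, and signals are hypothetical and not necessarily those extracted in the study:

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                     # sampling rate (Hz), hypothetical
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / fs)

def tremor_band_power(signal, fs, band=(4.0, 8.0)):
    """Integrate the Welch power spectral density over the tremor band."""
    f, pxx = welch(signal, fs=fs, nperseg=1024)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].sum() * (f[1] - f[0])

# Synthetic wrist acceleration: 5.5-Hz tremor plus noise, attenuated under stimulation.
baseline = 1.0 * np.sin(2 * np.pi * 5.5 * t) + 0.3 * rng.normal(size=t.size)
stim_on = 0.2 * np.sin(2 * np.pi * 5.5 * t) + 0.3 * rng.normal(size=t.size)

improvement = 1.0 - tremor_band_power(stim_on, fs) / tremor_band_power(baseline, fs)
print(f"tremor-power reduction: {improvement:.0%}")
```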
Simulation of bubble expansion and collapse in the vicinity of a free surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koukouvinis, P., E-mail: foivos.koukouvinis.1@city.ac.uk; Gavaises, M.; Supponen, O.
The present paper focuses on the numerical simulation of the interaction of laser-generated bubbles with a free surface, including comparison of the results with instances from high-speed videos of the experiment. The Volume Of Fluid method was employed for tracking liquid and gas phases while compressibility effects were introduced with appropriate equations of state for each phase. Initial conditions of the bubble pressure were estimated through the traditional Rayleigh-Plesset equation. The simulated bubble expands in a non-spherically symmetric way due to the interference of the free surface, obtaining an oval shape at the maximum size. During collapse, a jet with a mushroom cap is formed at the axis of symmetry with the same direction as the gravity vector, which splits the initial bubble into an agglomeration of toroidal structures. Overall, the simulation results are in agreement with the experimental images, both quantitatively and qualitatively, while pressure waves are predicted both during the expansion and the collapse of the bubble. Minor discrepancies in the jet velocity and collapse rate are found and are attributed to the thermodynamic closure of the gas inside the bubble.
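The Rayleigh-Plesset estimate of the initial bubble state mentioned above can be illustrated by integrating the basic form of the equation (viscosity and surface tension neglected); all parameter values here are illustrative rather than those of the experiment:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (SI units), not those of the laser-bubble experiment.
rho = 1000.0          # liquid density (kg/m^3)
p_inf = 101325.0      # ambient pressure (Pa)
R0 = 50e-6            # initial bubble radius (m)
p_g0 = 20 * p_inf     # initial gas pressure inside the bubble (Pa)
gamma = 1.4           # polytropic exponent for the bubble gas

def rayleigh_plesset(t, y):
    """R*R'' + 1.5*R'^2 = (p_gas(R) - p_inf) / rho, with adiabatic gas pressure
    and viscosity/surface tension neglected."""
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * gamma)
    Rddot = ((p_gas - p_inf) / rho - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 25e-6), [R0, 0.0], max_step=1e-7, rtol=1e-8)
i_max = int(np.argmax(sol.y[0]))
print(f"maximum radius {sol.y[0][i_max]*1e6:.0f} um reached at {sol.t[i_max]*1e6:.1f} us")
```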
Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S
2018-05-01
OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Retrospective review of operative reports for quantitative estimation of EOR was recorded. Definitive EOR was based on postoperative MRI. Analysis of accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]), and quantitatively (5% within EOR on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy for gross perception over the 17-year period, with improvement over the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034), but lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this particular trend for insular tumors remained true. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those new neurological deficits persisted after the 3-month follow-up. Correct quantitative perception was associated with lower postoperative motor deficits (2.4%) compared with incorrect perceptions (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and it may take more than a decade to be truly proficient. Understanding the factors associated with this ability to accurately assess EOR will provide safer surgeries while maximizing tumor resection.
Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar
2009-08-25
Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95 respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9% respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF using both 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was, however, not statistically significant.
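The quantitative biplane Simpson reference approach rests on a method-of-disks volume at end-diastole and end-systole; a minimal sketch with hypothetical traced diameters:

```python
import numpy as np

def biplane_simpson_volume(d4, d2, length):
    """Method-of-disks volume: V = (pi/4) * (L/N) * sum(a_i * b_i), with the disk
    diameters a_i, b_i taken from the apical 4- and 2-chamber views and L the
    ventricular long-axis length."""
    d4, d2 = np.asarray(d4), np.asarray(d2)
    n = d4.size
    return np.pi / 4.0 * (length / n) * np.sum(d4 * d2)

# Hypothetical traced diameters (cm) for 20 disks at end-diastole and end-systole.
x = np.linspace(0, 1, 20)
ed4 = 4.8 * np.sqrt(1 - x**2)
ed2 = 4.6 * np.sqrt(1 - x**2)
es4 = 3.4 * np.sqrt(1 - x**2)
es2 = 3.3 * np.sqrt(1 - x**2)

edv = biplane_simpson_volume(ed4, ed2, length=8.5)   # end-diastolic volume (ml)
esv = biplane_simpson_volume(es4, es2, length=7.8)   # end-systolic volume (ml)
ef = (edv - esv) / edv
print(f"EDV {edv:.0f} ml, ESV {esv:.0f} ml, EF {ef:.0%}")
```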
Patient-Centered Care in Breast Cancer Genetic Clinics
Brédart, Anne; Anota, Amélie; Kuboth, Violetta; Lareyre, Olivier; Cano, Alejandra; Stoppa-Lyonnet, Dominique; Schmutzler, Rita; Dolbeault, Sylvie
2018-01-01
With advances in breast cancer (BC) gene panel testing, risk counseling has become increasingly complex, potentially leading to unmet psychosocial needs. We assessed psychosocial needs and correlates in women initiating testing for high genetic BC risk in clinics in France and Germany, and compared these results with data from a literature review. Among the 442 counselees consecutively approached, 212 (83%) in France and 180 (97%) in Germany, mostly BC patients (81% and 92%, respectively), returned the ‘Psychosocial Assessment in Hereditary Cancer’ questionnaire. Based on the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) BC risk estimation model, the mean BC lifetime risk estimates were 19% and 18% in France and Germany, respectively. In both countries, the most prevalent needs clustered around the “living with cancer” and “children-related issues” domains. In multivariate analyses, a higher number of psychosocial needs were significantly associated with younger age (b = −0.05), higher anxiety (b = 0.78), and having children (b = 1.51), but not with country, educational level, marital status, depression, or loss of a family member due to hereditary cancer. These results are in line with the literature review data. However, this review identified only seven studies that quantitatively addressed psychosocial needs in the BC genetic counseling setting. Current data lack understandings of how cancer risk counseling affects psychosocial needs, and improves patient-centered care in that setting. PMID:29439543
Heritability estimates on resting state fMRI data using ENIGMA analysis pipeline.
Adhikari, Bhim M; Jahanshad, Neda; Shukla, Dinesh; Glahn, David C; Blangero, John; Reynolds, Richard C; Cox, Robert W; Fieremans, Els; Veraart, Jelle; Novikov, Dmitry S; Nichols, Thomas E; Hong, L Elliot; Thompson, Paul M; Kochunov, Peter
2018-01-01
Big data initiatives such as the Enhancing NeuroImaging Genetics through Meta-Analysis consortium (ENIGMA) combine data collected by independent studies worldwide to achieve more generalizable estimates of effect sizes and more reliable and reproducible outcomes. Such efforts require harmonized image analysis protocols to extract phenotypes consistently. This harmonization is particularly challenging for resting state fMRI due to the wide variability of acquisition protocols and scanner platforms; this leads to site-to-site variance in quality, resolution and temporal signal-to-noise ratio (tSNR). An effective harmonization should provide optimal measures for data of different qualities. We developed a multi-site rsfMRI analysis pipeline to allow research groups around the world to process rsfMRI scans in a harmonized way, to extract consistent and quantitative measurements of connectivity and to perform coordinated statistical tests. We used the single-modality ENIGMA rsfMRI preprocessing pipeline based on model-free Marchenko-Pastur PCA-based denoising to verify and replicate resting state network heritability estimates. We analyzed two independent cohorts, GOBS (Genetics of Brain Structure) and HCP (the Human Connectome Project), which collected data using conventional and connectomics-oriented fMRI protocols, respectively. We used seed-based connectivity and dual-regression approaches to show that the rsfMRI signal is consistently heritable across twenty major functional network measures. Heritability values of 20-40% were observed across both cohorts.
Regional cardiac wall motion from gated myocardial perfusion SPECT studies
NASA Astrophysics Data System (ADS)
Smith, M. F.; Brigger, P.; Ferrand, S. K.; Dilsizian, V.; Bacharach, S. L.
1999-06-01
A method for estimating regional epicardial and endocardial wall motion from gated myocardial perfusion SPECT studies has been developed. The method uses epicardial and endocardial boundaries determined from four long-axis slices at each gate of the cardiac cycle. The epicardial and endocardial wall position at each time gate is computed with respect to stationary reference ellipsoids, and wall motion is measured along lines normal to these ellipsoids. An initial quantitative evaluation of the method was made using the beating heart from the dynamic mathematical cardiac torso (MCAT) phantom, with and without a 1.5-cm FWHM Gaussian blurring filter. Epicardial wall motion was generally well-estimated within a fraction of a 3.56-mm voxel, although apical motion was overestimated with the Gaussian filter. Endocardial wall motion was underestimated by about two voxels with and without the Gaussian filter. The MCAT heart phantom was modified to model hypokinetic and dyskinetic wall motion. The wall motion analysis method enabled this abnormal motion to be differentiated from normal motion. Regional cardiac wall motion also was analyzed for ²⁰¹Tl patient studies. Estimated wall motion was consistent with a nuclear medicine physician's visual assessment of motion from gated long-axis slices for male and female study examples. Additional research is required for a comprehensive evaluation of the applicability of the method to patient studies with normal and abnormal wall motion.
NASA Astrophysics Data System (ADS)
Atencia, A.; Llasat, M. C.; Garrote, L.; Mediero, L.
2010-10-01
The performance of distributed hydrological models depends on the spatial and temporal resolution of the rainfall data supplied to them. Quantitative precipitation estimation from meteorological radar or satellite can improve hydrological model results by providing an indirect estimate at higher spatial and temporal resolution. In this work, composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km² spatial resolution, provided by the Catalan Meteorological Service, are used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (a gage-adjustment method) is applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by both the convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction based on cross-correlation between two consecutive images is introduced to generate several time resolutions from 1 min to 30 min. Each resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using a probabilistic methodology. A sensitivity analysis of time resolutions was carried out by comparing the various results with real values from stream-flow measurement stations.
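For context on the Z/R step mentioned above, the snippet below converts radar reflectivity to rain rate with a generic power-law relation. The Marshall-Palmer coefficients (a = 200, b = 1.6) are a textbook default, not the convective/stratiform relations actually used over Catalonia, so this is a hedged sketch; a gauge-adjustment step such as the Window Probability Matching Method would still be applied afterwards.

```python
import numpy as np

def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate R (mm/h) via Z = a * R**b.

    a and b here are the classic Marshall-Palmer values, used only for
    illustration; operational relations differ by regime and region.
    """
    z = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)        # invert Z = a * R**b

print(rain_rate_from_reflectivity(np.array([20.0, 35.0, 50.0])))
```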
Patient-Centered Care in Breast Cancer Genetic Clinics.
Brédart, Anne; Anota, Amélie; Dick, Julia; Kuboth, Violetta; Lareyre, Olivier; De Pauw, Antoine; Cano, Alejandra; Stoppa-Lyonnet, Dominique; Schmutzler, Rita; Dolbeault, Sylvie; Kop, Jean-Luc
2018-02-12
With advances in breast cancer (BC) gene panel testing, risk counseling has become increasingly complex, potentially leading to unmet psychosocial needs. We assessed psychosocial needs and correlates in women initiating testing for high genetic BC risk in clinics in France and Germany, and compared these results with data from a literature review. Among the 442 counselees consecutively approached, 212 (83%) in France and 180 (97%) in Germany, mostly BC patients (81% and 92%, respectively), returned the 'Psychosocial Assessment in Hereditary Cancer' questionnaire. Based on the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) BC risk estimation model, the mean BC lifetime risk estimates were 19% and 18% in France and Germany, respectively. In both countries, the most prevalent needs clustered around the "living with cancer" and "children-related issues" domains. In multivariate analyses, a higher number of psychosocial needs were significantly associated with younger age (b = -0.05), higher anxiety (b = 0.78), and having children (b = 1.51), but not with country, educational level, marital status, depression, or loss of a family member due to hereditary cancer. These results are in line with the literature review data. However, this review identified only seven studies that quantitatively addressed psychosocial needs in the BC genetic counseling setting. Current data provide little understanding of how cancer risk counseling affects psychosocial needs and how addressing those needs improves patient-centered care in that setting.
The mathematics of cancer: integrating quantitative models.
Altrock, Philipp M; Liu, Lin L; Michor, Franziska
2015-12-01
Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
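A minimal sketch of the absolute-quantification calibration curve referred to above is given below: Cq values for a plasmid standard dilution series are regressed against log10 copy number, and the fitted line is inverted to quantify an unknown. The dilution series, Cq values, and resulting efficiency are illustrative only and do not reflect the (truncated) study's data.

```python
import numpy as np

# Hypothetical plasmid standard dilution series: copies per reaction and
# measured quantification cycles (Cq); values are illustrative only.
copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
cq     = np.array([14.2, 17.6, 21.0, 24.5, 27.9])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0          # ~1.0 means 100% efficient

def copies_from_cq(cq_unknown):
    """Estimate copy number of an unknown sample from its Cq via the curve."""
    return 10.0 ** ((cq_unknown - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
print(f"unknown at Cq=19.0 -> {copies_from_cq(19.0):.2e} copies")
```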
A Comparative Assessment of Greek Universities' Efficiency Using Quantitative Analysis
ERIC Educational Resources Information Center
Katharaki, Maria; Katharakis, George
2010-01-01
In part due to the increased demand for higher education, typical evaluation frameworks for universities often address the key issue of available resource utilisation. This study seeks to estimate the efficiency of 20 public universities in Greece through quantitative analysis (including performance indicators, data envelopment analysis (DEA) and…
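To make the DEA step concrete, the sketch below solves an input-oriented, constant-returns-to-scale (CCR) efficiency model with a linear program on made-up data for four hypothetical units. This is a generic textbook formulation, not necessarily the specification used in the study of the 20 Greek universities.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X: (m inputs x n units), Y: (s outputs x n units). Solves
        min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0,
    with decision vector [theta, lam_1, ..., lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.c_[-X[:, o], X]                        # X @ lam - theta*x_o <= 0
    A_out = np.c_[np.zeros(s), -Y]                   # -Y @ lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Illustrative data: 2 inputs (staff, budget) and 1 output (graduates).
X = np.array([[20, 30, 40, 25], [5.0, 8.0, 12.0, 6.0]])
Y = np.array([[200, 360, 400, 220.0]])
for o in range(X.shape[1]):
    print(f"unit {o}: efficiency = {dea_efficiency(X, Y, o):.2f}")
```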
Employment from Solar Energy: A Bright but Partly Cloudy Future.
ERIC Educational Resources Information Center
Smeltzer, K. K.; Santini, D. J.
A comparison of quantitative and qualitative employment effects of solar and conventional systems can prove the increased employment postulated as one of the significant secondary benefits of a shift from conventional to solar energy use. Current quantitative employment estimates show solar technology-induced employment to be generally greater…
78 FR 53336 - List of Fisheries for 2013
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... provided on the LOF are solely used for descriptive purposes and will not be used in determining future... this information to determine whether the fishery can be classified on the LOF based on quantitative... does not have a quantitative estimate of the number of mortalities and serious injuries of pantropical...
The calibration of video cameras for quantitative measurements
NASA Technical Reports Server (NTRS)
Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.
1993-01-01
Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.
Code of Federal Regulations, 2012 CFR
2012-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...
Code of Federal Regulations, 2014 CFR
2014-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...
Code of Federal Regulations, 2013 CFR
2013-07-01
...). (7) ASTM E 168-88, “Standard Practices for General Techniques of Infrared Quantitative Analysis,” IBR...-Visible Quantitative Analysis,” IBR approved for § 264.1063. (9) ASTM E 260-85, “Standard Practice for..., Research Triangle Park, NC. (1) “Screening Procedures for Estimating the Air Quality Impact of Stationary...
A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...
We report quantitative estimates of the parameters for metabolism of bromodichloromethane (BDCM) by recombinant preparations of hepatic cytochrome P450s (CYPs) from rat and human. BDCM is a drinking water disinfectant byproduct that has been implicated in liver, kidn...
Anomalous chiral transport in heavy ion collisions from Anomalous-Viscous Fluid Dynamics
NASA Astrophysics Data System (ADS)
Shi, Shuzhe; Jiang, Yin; Lilleskov, Elias; Liao, Jinfeng
2018-07-01
Chiral anomaly is a fundamental aspect of quantum theories with chiral fermions. How such a microscopic anomaly manifests itself in a macroscopic many-body system with chiral fermions is a highly nontrivial question that has recently attracted significant interest. As it turns out, unusual transport currents can be induced by chiral anomaly under suitable conditions in such systems, with the notable example of the Chiral Magnetic Effect (CME) where a vector current (e.g. electric current) is generated along an external magnetic field. Significant efforts have been made to search for the CME in heavy ion collisions by measuring the charge separation effect induced by the CME transport. A crucial challenge in this effort is the quantitative prediction of the CME signal. In this paper, we develop the Anomalous-Viscous Fluid Dynamics (AVFD) framework, which implements the anomalous fluid dynamics to describe the evolution of fermion currents in the QGP, on top of the neutral bulk background described by the VISH2+1 hydrodynamic simulations for heavy ion collisions. With this new tool, we quantitatively and systematically investigate the dependence of the CME signal on a series of theoretical inputs and associated uncertainties. With realistic estimates of initial conditions and magnetic field lifetime, the predicted CME signal is quantitatively consistent with measured charge separation data in 200 GeV Au-Au collisions. Based on the analysis of Au-Au collisions, we further make predictions for the CME observable to be measured in the planned isobaric (Ru-Ru vs. Zr-Zr) collision experiment, which could provide a decisive test of the CME in heavy ion collisions.
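For reference, the transport relation underlying the CME signal discussed above is the standard weak-field result for a single fermion species of unit charge; this is a textbook expression rather than anything specific to the AVFD implementation, and the notation (mu_5 for the chiral chemical potential) is assumed here.

```latex
% Chiral Magnetic Effect current along an external magnetic field,
% single fermion species of unit charge; \mu_5 is the chiral chemical potential.
\vec{j}_{\mathrm{CME}} \;=\; \frac{e^{2}}{2\pi^{2}}\,\mu_{5}\,\vec{B}
```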
Towards quantitative quasi-static elastography with a gravity-induced deformation source
NASA Astrophysics Data System (ADS)
Griesenauer, Rebekah H.; Weis, Jared A.; Arlinghaus, Lori R.; Meszoely, Ingrid M.; Miga, Michael I.
2017-03-01
Biomechanical breast models have been employed for applications in image registration and analysis, breast augmentation simulation, and surgical and biopsy guidance. Accurate stress-strain relationships for the tissues within the breast can improve the accuracy of biomechanical models that attempt to simulate breast movements. Reported stiffness values for adipose, glandular, and cancerous tissue types vary greatly. Variations in reported stiffness properties are mainly due to differences in testing methodologies and assumptions, measurement errors, and natural inter-patient differences in tissue elasticity. Therefore, patient-specific, in vivo determination of breast tissue properties is ideal for these procedural applications. Many in vivo elastography methods are not quantitative and/or do not measure material properties under deformation conditions that are representative of the procedure being simulated in the model. In this study, we developed an elasticity estimation method that is performed using deformations representative of supine therapeutic procedures. Reconstruction of material properties was performed by iteratively fitting two anatomical images before and after tissue stimulation. The proposed method is workflow-friendly, quantitative, and uses a non-contact, gravity-induced deformation source. We tested this material property optimization procedure in a healthy volunteer and in simulation. In simulation, we show that the algorithm can reconstruct properties with errors below 1% for adipose and 5.6% for glandular tissue regardless of the starting stiffness values used as initial guesses. In clinical data, reconstruction errors are higher (3.6% and 24.2%) due to increased noise in the system. In a clinical context, the elastography method was shown to be promising for use in biomechanical model-assisted supine procedures.
Customizing a rangefinder for community-based wildlife conservation initiatives
Ransom, Jason I.
2011-01-01
The population size of many threatened and endangered species is largely unknown because estimating animal abundance in remote parts of the world, without access to aircraft for surveying vast areas, is a scientific challenge with few proposed solutions. One option is to enlist local community members and train them in data collection for large line transect or point count surveys, but financial and sometimes technological constraints prevent access to the necessary equipment and training for accurately quantifying distance measurements. Such measurements are paramount for generating reliable estimates of animal density. This problem was overcome in a survey of Asiatic wild ass (Equus hemionus) in the Great Gobi B Strictly Protected Area, Mongolia, by converting an inexpensive optical sporting rangefinder into a species-specific rangefinder with visual-based categorical labels. Accuracy trials showed that 96.86% of 350 distance measures matched those from a laser rangefinder. This simple customized optic subsequently allowed a large group of minimally trained observers to simultaneously record quantitative measures of distance, despite language, education, and skill differences within the diverse group. The large community-based effort actively engaged local residents in species conservation by including them as the foundation for collecting scientific data.
Ship collision risk assessment for the Singapore Strait.
Qu, Xiaobo; Meng, Qiang; Suyi, Li
2011-11-01
The Singapore Strait is considered the bottleneck and chokepoint of the shipping routes connecting the Indian and Pacific Oceans. Ship collision risk assessment is therefore of significant importance for ships passing through this narrow, shallow, and busy waterway. In this paper, three ship collision risk indices are proposed to quantitatively assess collision risk in the Strait: an index of speed dispersion, the degree of acceleration and deceleration, and the number of fuzzy ship domain overlaps. These three risk indices for the Singapore Strait are estimated using real-time ship locations and sailing speeds provided by Lloyd's MIU automatic identification system (AIS). Based on the estimates of these three risk indices, Legs 4W, 5W, 11E, and 12E are the riskiest legs in the Strait, so ship collision risk reduction measures should be implemented in these four legs as a priority. This study also finds that around 25% of the vessels sail at speeds in excess of the speed limit, which results in a higher potential for collisions. Analysis indicates that the safety level would be significantly improved if all vessels followed the passage guidelines. Copyright © 2011 Elsevier Ltd. All rights reserved.
Dall, Timothy M; Zhang, Yiduo; Chen, Yaozhu J; Wagner, Rachel C Askarinam; Hogan, Paul F; Fagan, Nancy K; Olaiya, Samuel T; Tornberg, David N
2007-01-01
To estimate medical and indirect costs to the Department of Defense (DoD) that are associated with tobacco use, being overweight or obese, and high alcohol consumption. Retrospective, quantitative research. Healthcare provided in military treatment facilities and by providers participating in the military health system. The 4.3 million beneficiaries under age 65 years who were enrolled in the military TRICARE Prime health plan option in 2006. The findings come from a cost-of-disease model developed by combining information from DoD and civilian health surveys and studies; DoD healthcare encounter data for 4.1 million beneficiaries; and epidemiology literature on the increased risk of comorbidities from unhealthy behaviors. DoD spends an estimated $2.1 billion per year for medical care associated with tobacco use ($564 million), excess weight and obesity ($1.1 billion), and high alcohol consumption ($425 million). DoD incurs nonmedical costs related to tobacco use, excess weight and obesity, and high alcohol consumption in excess of $965 million per year. Unhealthy lifestyles are significant contributors to the cost of providing healthcare services to the nation's military personnel, military retirees, and their dependents. The continued rise in healthcare costs could impact other DoD programs and could potentially affect areas related to military capability and readiness. In 2006, DoD initiated Healthy Choices for Life initiatives to address the high cost of unhealthy lifestyles and behaviors, and the DoD continues to monitor lifestyle trends through the DoD Lifestyle Assessment Program.
Antibodies against toluene diisocyanate protein conjugates. Three methods of measurement.
Patterson, R; Harris, K E; Zeiss, C R
1983-12-01
With the use of canine antisera against toluene diisocyanate (TDI)-dog serum albumin (DSA), techniques for measuring antibody against TDI-DSA were evaluated. The use of an ammonium sulfate precipitation assay showed suggestive evidence of antibody binding, but high levels of TDI-DSA precipitation in the absence of antibody limit the usefulness of this technique. Double-antibody co-precipitation techniques can measure total antibody or Ig-class antibody against 125I-TDI-DSA; these techniques are quantitative. The polystyrene tube radioimmunoassay is a highly sensitive method for detecting and quantitatively estimating IgG antibody. The enzyme-linked immunosorbent assay is a readily adaptable method for the quantitative estimation of IgG, IgA, and IgM against TDI-homologous proteins. All of these techniques were compared, and results are demonstrated using the same serum sample for analysis.
Andersen, S T; Erichsen, A C; Mark, O; Albrechtsen, H-J
2013-12-01
Quantitative microbial risk assessments (QMRAs) often lack data on water quality, leading to great uncertainty because of the many assumptions required. Here, the quantity of wastewater contamination was estimated and included in a QMRA of an extreme rain event leading to a combined sewer overflow (CSO) into bathing water where an ironman competition later took place. Two dynamic models, (1) a drainage model and (2) a 3D hydrodynamic model, estimated the dilution of wastewater from source to recipient. The drainage model estimated that 2.6% of the water left in the system before the CSO was wastewater, and the hydrodynamic model estimated that 4.8% of the recipient bathing water came from the CSO, so on average there was 0.13% wastewater in the bathing water during the ironman competition. The total estimated incidence rate from a conservative estimate of the pathogenic load of five reference pathogens was 42%, comparable to 55% in an epidemiological study of the case. The combination of dynamic models and exposure data led to an improved QMRA that included an estimate of the dilution factor. This approach has not been described previously.
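The headline exposure figure follows from multiplying the two dilution fractions reported by the models; the back-of-envelope check below reproduces it (the small difference from 0.13% is rounding in the reported inputs).

```python
# Back-of-envelope check of the dilution chain reported in the abstract:
# fraction of wastewater in the water discharged at the CSO, times the
# fraction of the recipient bathing water originating from the CSO.
frac_wastewater_in_cso = 0.026      # 2.6% (drainage model)
frac_cso_in_bathing    = 0.048      # 4.8% (3D hydrodynamic model)

frac_wastewater_in_bathing = frac_wastewater_in_cso * frac_cso_in_bathing
print(f"{frac_wastewater_in_bathing:.4%}")   # ~0.12-0.13% after rounding
```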
Benefits of dynamic mobility applications : preliminary estimates from the literature.
DOT National Transportation Integrated Search
2012-12-01
This white paper examines the available quantitative information on the potential mobility benefits of the connected vehicle Dynamic Mobility Applications (DMA). This work will be refined as more and better estimates of benefits from mobility applica...
Prediction of Solvent Physical Properties using the Hierarchical Clustering Method
Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to estimate solvent physical properties, including sur...
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Dynamic susceptibility contrast imaging was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels with the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that if the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs were different, the CBF ratios could differ. We concluded that, using a local-maximum criterion, one can define a proper AIF without knowing the anatomical locations of arteries in a stroke rat model.
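A minimal sketch of the semi-quantitative measure described above (area under the relaxivity curve divided by its normalized first moment) is given below. The bolus shape and units are illustrative only, and the full quantitative analysis would instead deconvolve the tissue curve with the AIF.

```python
import numpy as np

def relative_cbf(t, delta_r2):
    """Semi-quantitative relative CBF from a relaxivity-time curve:
    area under the curve divided by its normalized first moment
    (a mean-transit-time-like quantity). Arbitrary units; this does not
    replace the deconvolution-based quantitative analysis."""
    dt = t[1] - t[0]
    area = np.sum(delta_r2) * dt                     # ~ relative CBV
    mean_transit = np.sum(t * delta_r2) * dt / area  # normalized first moment
    return area / mean_transit

# Illustrative gamma-variate-like bolus curve (made-up shape and units).
t = np.linspace(0.0, 60.0, 301)                      # seconds
curve = (t / 10.0) ** 3 * np.exp(-t / 4.0)
print(relative_cbf(t, curve))
```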
In an adverse outcome pathway (AOP), the target site dose participates in a molecular initiating event (MIE), which in turn triggers a sequence of key events leading to an adverse outcome (AO). Quantitative AOPs (QAOP) are needed if AOP characterization is to address risk as well...
ERIC Educational Resources Information Center
Harlow, Lisa L.; Burkholder, Gary J.; Morrow, Jennifer A.
2002-01-01
Used a structural modeling approach to evaluate relations among attitudes, initial skills, and performance in a Quantitative Methods course that involved students in active learning. Results largely confirmed hypotheses offering support for educational reform efforts that propose actively involving students in the learning process, especially in…
ERIC Educational Resources Information Center
Walker-Glenn, Michelle Lynn
2010-01-01
Although most high schools espouse school-wide literacy initiatives, few schools place equal emphasis on numeracy, or quantitative literacy. This lack of attention to quantitative skills is ironic in light of documented deficiencies in student mathematics achievement. While significant research exists regarding best practices for mathematics…
Battefeld, Arne; Tran, Baouyen T; Gavrilis, Jason; Cooper, Edward C; Kole, Maarten H P
2014-03-05
Rapid energy-efficient signaling along vertebrate axons is achieved through intricate subcellular arrangements of voltage-gated ion channels and myelination. One recently appreciated example is the tight colocalization of Kv7 potassium channels and voltage-gated sodium (Nav) channels in the axonal initial segment and nodes of Ranvier. The local biophysical properties of these Kv7 channels and the functional impact of colocalization with Nav channels remain poorly understood. Here, we quantitatively examined Kv7 channels in myelinated axons of rat neocortical pyramidal neurons using high-resolution confocal imaging and patch-clamp recording. Kv7.2 and 7.3 immunoreactivity steeply increased within the distal two-thirds of the axon initial segment and was mirrored by the conductance density estimates, which increased from ~12 (proximal) to 150 pS μm−2 (distal). The axonal initial segment and nodal M-currents were similar in voltage dependence and kinetics, carried by Kv7.2/7.3 heterotetramers, 4% activated at the resting membrane potential and rapidly activated with single-exponential time constants (~15 ms at 28 mV). Experiments and computational modeling showed that while somatodendritic Kv7 channels are strongly activated by the backpropagating action potential to attenuate the afterdepolarization and repetitive firing, axonal Kv7 channels are minimally recruited by the forward-propagating action potential. Instead, in nodal domains Kv7.2/7.3 channels were found to increase Nav channel availability and action potential amplitude by stabilizing the resting membrane potential. Thus, Kv7 clustering near axonal Nav channels serves specific and context-dependent roles, both restraining initiation and enhancing conduction of the action potential.
Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.
Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H
2017-08-01
With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Patalano, Antoine; Collins, Daniel; Guillén, Nicolás Federico; García, Carlos Marcelo; Smart, Graeme M.; Bind, Jochen; Chiaverini, Antoine; Le Boursicaud, Raphaël; Dramais, Guillaume; Braud, Isabelle
2016-10-01
New communication and digital image technologies have enabled the public to produce large quantities of flood observations and share them through social media. In addition to flood incident reports, valuable hydraulic data such as the extent and depths of inundated areas and flow rate estimates can be computed using messages, photos and videos produced by citizens. Such crowdsourced data help improve the understanding and modelling of flood hazard. Since little feedback on similar initiatives is available, we introduce three recent citizen science projects which have been launched independently by research organisations to quantitatively document flood flows in catchments and urban areas of Argentina, France, and New Zealand. Key drivers for success appear to be: a clear and simple procedure, suitable tools for data collecting and processing, an efficient communication plan, the support of local stakeholders, and the public awareness of natural hazards.
NASA Technical Reports Server (NTRS)
Holley, W. R.; Chatterjee, A.
1994-01-01
A theoretical framework is presented which provides a quantitative analysis of radiation-induced translocations between the abl oncogene on chromosome 9q34 and the breakpoint cluster region, bcr, on chromosome 22q11. Such translocations are frequently associated with chronic myelogenous leukemia. The theory is based on the assumption that double strand breaks produced concurrently within the 200 kbp intron region upstream of the second abl exon and within the 16.5 kbp region between bcr exon 2 and exon 6 can rejoin incorrectly or unfaithfully with each other, resulting in a fusion gene. For an x-ray dose of 100 Gy, there is good agreement between the theoretical estimate and the one available experimental result. The theory has been extended to provide dose response curves for these types of translocations. These curves are quadratic at low doses and become linear at high doses.
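The dose dependence described in the last sentence (quadratic at low dose, rolling over to linear at high dose) can be captured by a simple illustrative functional form such as the one sketched below. The parameters are invented and this is not the paper's actual model; it only reproduces the qualitative shape.

```python
import numpy as np

def translocation_yield(dose, k=1e-4, d0=20.0):
    """Illustrative dose-response: ~k*D**2 at low dose (two independent
    breaks required) rolling over to ~k*d0*D at high dose. k and d0 (Gy)
    are made-up parameters, not fitted values from the paper."""
    return k * dose**2 / (1.0 + dose / d0)

doses = np.array([1.0, 10.0, 100.0, 300.0])
print(translocation_yield(doses))
```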
Cartographic quality of ERTS-1 images
NASA Technical Reports Server (NTRS)
Welch, R. I.
1973-01-01
Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.
Estimating explosion properties of normal hydrogen-rich core-collapse supernovae
NASA Astrophysics Data System (ADS)
Pejcha, Ondrej
2017-08-01
Recent parameterized 1D explosion models of hundreds of core-collapse supernova progenitors suggest that success and failure are intertwined in a complex pattern that is not a simple function of the progenitor initial mass. This rugged landscape is also present in other explosion properties, allowing for quantitative tests of the neutrino mechanism from observations of hundreds of supernovae discovered every year. We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of normal hydrogen-rich core-collapse supernovae based on their photometric measurements and expansion velocities. We construct SED and bolometric light curves and determine explosion energies, ejecta masses, and nickel masses, while taking into account all uncertainties and covariances of the model. We describe the efforts to compare the inferences to the predictions of the neutrino mechanism. The model can be adapted to include more physical assumptions to utilize primarily photometric data coming from surveys such as LSST.
NASA Astrophysics Data System (ADS)
Gupta, Anshu; Krishnan, Badri; Nielsen, Alex B.; Schnetter, Erik
2018-04-01
The behavior of quasilocal black hole horizons in a binary black hole merger is studied numerically. We compute the horizon multipole moments, fluxes, and other quantities on black hole horizons throughout the merger. These lead to a better qualitative and quantitative understanding of the coalescence of two black holes: how the final black hole is formed, initially grows, and then settles down to a Kerr black hole. We calculate the rate at which the final black hole approaches equilibrium in a fully nonperturbative situation and identify a time at which the linear ringdown phase begins. Finally, we provide additional support for the conjecture that fields at the horizon are correlated with fields in the wave zone by comparing the in-falling gravitational wave flux at the horizon to the outgoing flux as estimated from the gravitational waveform.
Beirle, Steffen; Hörmann, Christoph; Penning de Vries, Malouse; Dörner, Stefan; Kern, Christoph; Wagner, Thomas
2014-01-01
We present an analysis of SO2 column densities derived from GOME-2 satellite measurements for the Kīlauea volcano (Hawai`i) for 2007–2012. During a period of enhanced degassing activity in March–November 2008, monthly mean SO2 emission rates and effective SO2 lifetimes are determined simultaneously from the observed downwind plume evolution and meteorological wind fields, without further model input. Kīlauea is particularly suited for quantitative investigations from satellite observations owing to the absence of interfering sources, the clearly defined downwind plumes caused by steady trade winds, and generally low cloud fractions. For March–November 2008, the effective SO2 lifetime is 1–2 days, and Kīlauea SO2 emission rates are 9–21 kt day−1, which is about 3 times higher than initially reported from ground-based monitoring systems.
Margraf, Jürgen
2017-01-01
Within committed relationships, a wide range of factors may challenge or facilitate sexual satisfaction. The aim of this study was to clarify which individual, partner-, and partnership-related aspects of a sexual relationship are crucial for the prediction of sexual satisfaction. The study included data from a representative sample of 964 couples from the general population. The actor-partner interdependence model was used to estimate actor and partner effects. Overall, predictors explained 57% of outcome variance. Actor effects were found for sexual function, sexual distress, frequency of sexual activity, desire discrepancy, sexual initiative, sexual communication, sociosexual orientation, masturbation, and life satisfaction. Gender-specific partner effects were found for sexual function and sexual distress. Neither age nor relationship duration was a significant predictor. To deepen our understanding of sexual satisfaction, it is necessary to take quantitative and qualitative aspects of sexual relationships into account and to consider actor-, partner-, and relationship-related predictors. PMID:28231314
NASA Technical Reports Server (NTRS)
Bell, L. D.; Boer, E.; Ostraat, M.; Brongersma, M. L.; Flagan, R. C.; Atwater, H. A.
2000-01-01
NASA requirements for computing and memory for microspacecraft emphasize high density, low power, small size, and radiation hardness. The distributed nature of storage elements in nanocrystal floating-gate memories leads to intrinsic fault tolerance and radiation hardness. Conventional floating-gate non-volatile memories are more susceptible to radiation damage. Nanocrystal-based memories also offer the possibility of faster, lower power operation. In the pursuit of filling these requirements, the following tasks have been accomplished: (1) Si nanocrystal charging has been accomplished with conducting-tip AFM; (2) Both individual nanocrystals on an oxide surface and nanocrystals formed by implantation have been charged; (3) Discharging is consistent with tunneling through a field-lowered oxide barrier; (4) Modeling of the response of the AFM to trapped charge has allowed estimation of the quantity of trapped charge; and (5) Initial attempts to fabricate competitive nanocrystal non-volatile memories have been extremely successful.
Distance measures and optimization spaces in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.
2015-01-01
Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.
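To make the distance measures discussed above concrete, the sketch below computes the Aitchison distance (via centered log-ratio transforms) and a symmetrised Kullback-Leibler distance between two toy compositional signatures. In an actual QFASA fit, such a distance would be minimised between the predator signature and a calibration-adjusted mixture of prey signatures; the four-component signatures here are invented.

```python
import numpy as np

def aitchison_distance(x, y):
    """Aitchison distance between two compositional signatures (proportions)."""
    lx, ly = np.log(x), np.log(y)
    d = (lx - lx.mean()) - (ly - ly.mean())     # difference of clr transforms
    return np.sqrt(np.sum(d ** 2))

def kl_distance(x, y):
    """Symmetrised Kullback-Leibler 'distance' between two signatures."""
    return np.sum((x - y) * np.log(x / y))

# Toy 4-component fatty acid signatures (proportions sum to 1; illustrative).
pred = np.array([0.40, 0.30, 0.20, 0.10])
prey = np.array([0.35, 0.35, 0.15, 0.15])
print(aitchison_distance(pred, prey), kl_distance(pred, prey))
```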
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images, limiting the achievable number of slices, field of view, spatial resolution, and signal-to-noise ratio. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed at investigating, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated with a population-based VIF differ from those estimated with an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population-based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
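To make the kinetic model concrete, the sketch below evaluates the extended Tofts model (the "extended model" with Ktrans, ve, and vp) for an assumed biexponential population VIF. The VIF shape and all parameter values are placeholders, not the study's measured curves or fitted parameters.

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Extended Tofts model: tissue concentration from a plasma input Cp(t).

    Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(u) * exp(-Ktrans*(t-u)/ve) du,
    discretised with a simple convolution; t in minutes, Ktrans in 1/min.
    """
    dt = t[1] - t[0]
    kernel = np.exp(-ktrans * t / ve)
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

# Illustrative biexponential population VIF (parameters are placeholders).
t = np.arange(0, 10, 0.02)                       # minutes
cp = 5.0 * (np.exp(-3.0 * t) + 0.3 * np.exp(-0.2 * t))
ct = extended_tofts(t, cp, ktrans=0.25, ve=0.35, vp=0.02)
print(ct[:5])
```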
On the reconciliation of missing heritability for genome-wide association studies
Chen, Guo-Bo
2016-01-01
The definition of heritability is unique and clear, but its estimation and estimates vary across studies. Linear mixed model (LMM) and Haseman–Elston (HE) regression analyses are commonly used for estimating heritability from genome-wide association data. This study provides an analytical resolution that can be used to reconcile the differences between LMM and HE in the estimation of heritability, given the genetic architecture responsible for these differences. The genetic architecture was classified into three forms via thought experiments: (i) a coupling genetic architecture, in which quantitative trait loci (QTLs) in linkage disequilibrium (LD) have a positive covariance; (ii) a repulsion genetic architecture, in which QTLs in LD have a negative covariance; and (iii) a neutral genetic architecture, in which the covariances of QTLs in LD sum to zero. The neutral genetic architecture is so far the most widely assumed, whereas the coupling and repulsion genetic architectures have not been well investigated. For a quantitative trait under the coupling genetic architecture, HE overestimated and LMM underestimated the heritability; under the repulsion genetic architecture, HE underestimated and LMM overestimated the heritability. The two methods gave identical results under the neutral genetic architecture. A general analytical result for the statistic estimated under HE is given regardless of genetic architecture. In contrast, the performance of LMM remained elusive, as it further depended on the ratio between the sample size and the number of markers, but LMM converged to HE with increased sample size. PMID:27436266
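As a concrete illustration of the HE approach mentioned above, the sketch below performs a cross-product Haseman-Elston regression on simulated genotypes: standardised phenotype products for each pair of individuals are regressed on the corresponding off-diagonal entries of a genomic relationship matrix, and the slope serves as the SNP-heritability estimate. The data are simulated, and the code omits standard errors, quality control, and the LMM comparison.

```python
import numpy as np

def he_regression(genotypes, y):
    """Haseman-Elston (cross-product) regression estimate of SNP heritability.

    genotypes: (n individuals x m SNPs) allele counts; y: phenotype vector.
    Builds a GRM from standardised genotypes and regresses products of
    standardised phenotypes on off-diagonal GRM entries.
    """
    z = (genotypes - genotypes.mean(0)) / genotypes.std(0)
    grm = z @ z.T / z.shape[1]
    ys = (y - y.mean()) / y.std()
    iu = np.triu_indices(len(y), k=1)            # unique pairs i < j
    x = grm[iu]
    resp = (ys[:, None] * ys[None, :])[iu]
    return np.polyfit(x, resp, 1)[0]             # slope ~ h2 estimate

# Tiny simulated example (illustrative only).
rng = np.random.default_rng(1)
g = rng.binomial(2, 0.3, size=(500, 200)).astype(float)
beta = rng.normal(scale=0.1, size=200)
y = g @ beta + rng.normal(scale=1.0, size=500)
print(he_regression(g, y))
```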
CH-47F Improved Cargo Helicopter (CH-47F)
2015-12-01
Confidence Level of cost estimate for current APB: 50%. The SAR baseline tables break down the change from the Initial PAUC Development Estimate to the PAUC Production Estimate (TY $M) by category (Econ, Qty, Sch, Eng, Est, Oth, Spt, Total); the partially legible row reads 10.316, -0.491, 3.003, -0.164, 2.273, ..., 7.378. A parallel table reports the corresponding APUC change from the Initial APUC Development Estimate to the APUC Production Estimate with the same categories; its values are truncated in this excerpt.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munis, R.H.; Marshall, S.J.; Bush, M.A.
1976-09-01
During the winter of 1973-74 a mobile infrared thermography system was used to survey campus buildings at Dartmouth College, Hanover, New Hampshire. Both qualitative and quantitative data are presented regarding heat flow through a small area of a wall of one brick dormitory building before and after installation of aluminum reflectors between radiators and the wall. These data were used to estimate annual cost savings for 22 buildings of similar construction having aluminum reflectors installed behind 1100 radiators. The data were then compared with the actual savings which were calculated from condensate meter data. The discrepancy between estimated and actual annual cost savings is explained in detail along with all assumptions required for these calculations.
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize quasi-likelihood, maximum-likelihood, and quasi-optimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The performance characteristics of the synthesized algorithms are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold for large signal-to-noise ratios, are derived. The accuracy losses in the estimates of the radio-signal arrival time and duration caused by a priori ignorance of the amplitude and initial phase are determined.
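For intuition, the sketch below implements the standard device for a signal with unknown amplitude and initial phase: correlate the received waveform against a complex (analytic) template and pick the peak of the envelope, which is where the likelihood is maximised over those nuisance parameters. This is a simplified stand-in for the algorithms in the paper (it does not estimate duration), and all waveform parameters are invented.

```python
import numpy as np

def estimate_arrival_time(received, template_envelope, fc, fs):
    """Arrival-time estimate for a pulse of unknown amplitude and phase.

    Takes the magnitude (envelope) of the complex correlation between the
    received signal and an analytic template; the peak gives the delay.
    """
    n = np.arange(len(template_envelope))
    template = template_envelope * np.exp(2j * np.pi * fc * n / fs)
    corr = np.abs(np.correlate(received, template, mode="valid"))
    return np.argmax(corr) / fs                  # delay in seconds

# Simulated example: rectangular radio pulse with random phase in noise.
fs, fc, dur, delay = 10_000.0, 1_000.0, 0.05, 0.123
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(2)
phase = rng.uniform(0, 2 * np.pi)
signal = np.where((t >= delay) & (t < delay + dur),
                  np.cos(2 * np.pi * fc * (t - delay) + phase), 0.0)
received = signal + 0.3 * rng.normal(size=t.size)
envelope = np.ones(int(dur * fs))
print(estimate_arrival_time(received, envelope, fc, fs))
```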
Uniform gradient estimates on manifolds with a boundary and applications
NASA Astrophysics Data System (ADS)
Cheng, Li-Juan; Thalmaier, Anton; Thompson, James
2018-04-01
We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.
Comparison of estimated and measured sediment yield in the Gualala River
Matthew O’Connor; Jack Lewis; Robert Pennington
2012-01-01
This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by the need to assess potential water quality impacts of a proposed vineyard development project in the Gualala River watershed. Previous erosion rate estimates were developed using sediment source assessment techniques by the North Coast Regional...
Neuroergonomics: Quantitative Modeling of Individual, Shared, and Team Neurodynamic Information.
Stevens, Ronald H; Galloway, Trysha L; Willemsen-Dunlap, Ann
2018-06-01
The aim of this study was to use the same quantitative measure and scale to directly compare the neurodynamic information/organizations of individual team members with those of the team. Team processes are difficult to separate from those of individual team members due to the lack of quantitative measures that can be applied to both process sets. Second-by-second symbolic representations were created of each team member's electroencephalographic power, and quantitative estimates of their neurodynamic organizations were calculated from the Shannon entropy of the symbolic data streams. The information in the neurodynamic data streams of health care (n = 24), submarine navigation (n = 12), and high school problem-solving (n = 13) dyads was separated into the information of each team member, the information shared by team members, and the overall team information. Most of the team information was the sum of each individual's neurodynamic information. The remaining team information was shared among the team members. This shared information averaged ~15% of the individual information, with momentary levels of 1% to 80%. Continuous quantitative estimates can be made from the shared, individual, and team neurodynamic information about the contributions of different team members to the overall neurodynamic organization of a team and the neurodynamic interdependencies among the team members. Information models provide a generalizable quantitative method for separating a team's neurodynamic organization into that of individual team members and that shared among team members.
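A minimal sketch of the quantities involved, assuming the EEG power levels have already been reduced to discrete symbols: Shannon entropy for each member's stream and mutual information as a simple proxy for the shared information. The symbol streams below are simulated and the partitioning is far cruder than the study's models.

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a symbolic data stream."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def shared_information(a, b):
    """Mutual information (bits) between two members' symbol streams,
    used here as a stand-in for the 'shared' neurodynamic information."""
    joint = list(zip(a, b))
    return entropy(a) + entropy(b) - entropy(joint)

# Toy symbolic EEG-power streams for a dyad (illustrative symbols only).
rng = np.random.default_rng(3)
member1 = rng.integers(0, 4, size=1000)
member2 = np.where(rng.random(1000) < 0.3, member1, rng.integers(0, 4, 1000))
print(entropy(member1), entropy(member2), shared_information(member1, member2))
```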
NASA Astrophysics Data System (ADS)
Mercado, Karla Patricia E.
Tissue engineering holds great promise for the repair or replacement of native tissues and organs. Further advancements in the fabrication of functional engineered tissues are partly dependent on developing new and improved technologies to monitor the properties of engineered tissues volumetrically, quantitatively, noninvasively, and nondestructively over time. Currently, engineered tissues are evaluated during fabrication using histology, biochemical assays, and direct mechanical tests. However, these techniques destroy tissue samples and, therefore, lack the capability for real-time, longitudinal monitoring. The research reported in this thesis developed nondestructive, noninvasive approaches to characterize the structural, biological, and mechanical properties of 3-D engineered tissues using high-frequency quantitative ultrasound and elastography technologies. A quantitative ultrasound technique, using a system-independent parameter known as the integrated backscatter coefficient (IBC), was employed to visualize and quantify structural properties of engineered tissues. Specifically, the IBC was demonstrated to estimate cell concentration and quantitatively detect differences in the microstructure of 3-D collagen hydrogels. Additionally, the feasibility of an ultrasound elastography technique called Single Tracking Location Acoustic Radiation Force Impulse (STL-ARFI) imaging was demonstrated for estimating the shear moduli of 3-D engineered tissues. High-frequency ultrasound techniques can be easily integrated into sterile environments necessary for tissue engineering. Furthermore, these high-frequency quantitative ultrasound techniques can enable noninvasive, volumetric characterization of the structural, biological, and mechanical properties of engineered tissues during fabrication and post-implantation.
PARAMETER ESTIMATION OF TWO-FLUID CAPILLARY PRESSURE-SATURATION AND PERMEABILITY FUNCTIONS
Capillary pressure and permeability functions are crucial to the quantitative description of subsurface flow and transport. Earlier work has demonstrated the feasibility of using the inverse parameter estimation approach in determining these functions if both capillary pressure ...
NASA Astrophysics Data System (ADS)
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-08-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
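For readers unfamiliar with the reconstruction step, the sketch below shows the basic ML-EM multiplicative update that OSEM accelerates with subsets. The TV regularisation and the motion-model (DVF) update described above are omitted, and the system matrix and data are synthetic.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic ML-EM reconstruction (the core update inside OSEM, one subset).

    A: system matrix (n_bins x n_voxels), y: measured projections.
    Update: lam <- lam / (A^T 1) * A^T (y / (A lam)).
    """
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                          # A^T 1 (sensitivity image)
    for _ in range(n_iter):
        proj = A @ lam
        ratio = np.where(proj > 0, y / proj, 0.0)
        lam *= (A.T @ ratio) / sens
    return lam

# Tiny synthetic example (illustrative system matrix and activity).
rng = np.random.default_rng(4)
A = rng.random((80, 20))
true_lam = rng.random(20) * 10
y = rng.poisson(A @ true_lam).astype(float)
print(np.round(mlem(A, y)[:5], 2))
```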
Probing collagen-enzyme mechanochemistry in native tissue with dynamic, enzyme-induced creep
Zareian, Ramin; Church, Kelli P.; Saeidi, Nima; Flynn, Brendan P.; Beale, John W.; Ruberti, Jeffrey W.
2012-01-01
Mechanical strain or stretch of collagen has been shown to be protective of fibrils against both thermal and enzymatic degradation. The details of this mechanochemical relationship could change our understanding of load-bearing tissue formation, growth, maintenance and disease in vertebrate animals. However, extracting a quantitative relationship between strain and the rate of enzymatic degradation is extremely difficult in bulk tissue due to confounding diffusion effects. In this investigation, we develop a dynamic, enzyme-induced creep assay and diffusion/reaction rate scaling arguments to extract a lower bound on the relationship between strain and the cutting rate of bacterial collagenase (BC) at low strains. The assay method permits continuous, forced probing of enzyme-induced strain which is very sensitive to degradation rate differences between specimens at low initial strain. The results, obtained on uniaxially-loaded strips of bovine corneal tissue (0.1, 0.25 or 0.5 N), demonstrate that small differences in strain alter the enzymatic cutting rate of the BC substantially. It was estimated that a change in tissue elongation of only 1.5% (at ~5% strain) reduces the maximum cutting-rate of the enzyme by more than half. Estimation of the average load per monomer in the tissue strips indicates that this protective “cutoff” occurs when the collagen monomers are transitioning from an entropic to an energetic mechanical regime. The continuous tracking of the enzymatic cleavage rate as a function of strain during the initial creep response indicates that the decrease in the cleavage rate of the BC is non-linear (initially-steep between 4.5 and 6.5% then flattens out from 6.5–9.5%). The high sensitivity to strain at low strain implies that even lightly-loaded collagenous tissue may exhibit significant strain-protection. The dynamic, enzyme-induced creep assay described herein has the potential to permit the rapid characterization of collagen/enzyme mechanochemistry in many different tissue types. PMID:20429513
Angiographic assessment of initial balloon angioplasty results.
Gardiner, Geoffrey A; Sullivan, Kevin L; Halpern, Ethan J; Parker, Laurence; Beck, Margaret; Bonn, Joseph; Levin, David C
2004-10-01
To determine the influence of three factors involved in the angiographic assessment of balloon angioplasty (interobserver variability, operator bias, and the definition used to determine success) on the primary (technical) results of angioplasty in the peripheral arteries. Percent stenosis in 107 lesions in lower-extremity arteries was graded by three independent, experienced vascular radiologists ("observers") before and after balloon angioplasty and their estimates were compared with the initial interpretations reported by the physician performing the procedure ("operator") and an automated quantitative computer analysis. Observer variability was measured with use of intraclass correlation coefficients and SD. Differences among the operator, observers, and the computer were analyzed with use of the Wilcoxon signed-rank test and analysis of variance. For each evaluator, the results in this series of lesions were interpreted with three different definitions of success. Estimation of residual stenosis varied by an average range of 22.76% with an average SD of 8.99. The intraclass correlation coefficients averaged 0.59 for residual stenosis after angioplasty for the three observers but decreased to 0.36 when the operator was included as the fourth evaluator. There was good to very good agreement among the three independent observers and the computer, but poor correlation with the operator (P = .001). The primary success rates for this series of lesions varied from a low of 47% to a high of 99%, depending solely on which definition of success was used. Significant differences among the operator, the three observers, and the computer were not present when the definition of success was based on less than 50% residual stenosis. Observer variability and bias in the subjective evaluation of peripheral angioplasty can have a significant influence on the reported initial success rates. This effect can be largely eliminated with the use of residual stenosis of less than 50% to define success. Otherwise, meaningful evaluation of angioplasty results will require independent panels of evaluators or computerized measurements.
NASA Astrophysics Data System (ADS)
Raeder, K.; Anderson, J. L.; Lauritzen, P. H.; Hoar, T. J.; Collins, N.
2010-12-01
DART (www.image.ucar.edu/DAReS/DART) is a general purpose, freely available, ensemble Kalman filter, data assimilation system, which is being used to generate state-of-the-art, partially coupled, ocean-atmosphere re-analyses in support of the decadal predictions planned for the next IPCC report. The resulting gridded product is directly comparable to the state variables output by POP and CAM (oceanic and atmospheric components of NCAR's Community Earth System Model climate model) because those are the assimilating models. Other models could also benefit from comparison against these reanalyses, since the ocean analyses are at the leading edge of ocean state estimation, and the atmospheric analyses are competitive with operational centers'. Such comparisons can reveal model biases and predictability characteristics, and do so in a quantitative way, since the ensemble nature of the analyses provides an objective estimate of the analysis error. The analyses will also be used as initial conditions for the decadal forecasts because they are the most realistic available. The generation of such analyses has revealed errors in model formulation for several versions of the finite volume core CAM, which has led to model improvements in each case. New models can be incorporated into DART in a matter of weeks, allowing them to be compared directly against available observations. The observations currently used in the assimilations include, for the ocean, temperature and salinity from the World Ocean Database (floats, drifters, moorings, autonomous pinnipeds, and others), and for the atmosphere, temperature and winds from radiosondes, satellite drift winds, ACARS and aircraft. Observations of ocean currents and atmospheric moisture and pressure are also available. Global Positioning System profiles of atmospheric temperature and moisture are available for recent years. All that is required to add new observations to the suite is the forward operator, which generates an estimate of the observation from the model state. In summary, DART provides a flexible, convenient, rigorous environment for evaluating models in the context of real observations.
Niu, Ben; Zhang, Hao; Giblin, Daryl; Rempel, Don L; Gross, Michael L
2015-05-01
Fast photochemical oxidation of proteins (FPOP) employs laser photolysis of hydrogen peroxide to give OH radicals that label amino acid side-chains of proteins on the microsecond time scale. A method for quantitation of hydroxyl radicals after laser photolysis is of importance to FPOP because it establishes a means to adjust the yield of •OH, offers the opportunity of tunable modifications, and provides a basis for kinetic measurements. The initial concentration of OH radicals has yet to be measured experimentally. We report here an approach using isotope dilution gas chromatography/mass spectrometry (GC/MS) to determine quantitatively the initial •OH concentration (we found ~0.95 mM from 15 mM H2O2) from laser photolysis and to investigate the quenching efficiencies for various •OH scavengers.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
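As a rough illustration of the kind of parameter sweep involved (not the authors' LSQR-based selection method), the sketch below uses scipy's lsqr damped least-squares solver on a toy ill-conditioned problem and picks the Tikhonov parameter with a simple discrepancy rule; the forward matrix, noise level, and selection rule are all illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Toy ill-conditioned forward model A and noisy data b (stand-ins for the
# photoacoustic system matrix and the measured pressure data).
rng = np.random.default_rng(0)
n = 50
A = np.triu(np.ones((n, n)))                     # poorly conditioned toy operator
x_true = np.exp(-0.5 * ((np.arange(n) - 25) / 5.0) ** 2)
noise_level = 0.01
b = A @ x_true + noise_level * rng.standard_normal(n)

# lsqr's `damp` argument solves min ||Ax - b||^2 + damp^2 ||x||^2 without
# forming A^T A explicitly; sweep it over a grid of candidate values.
lambdas = np.logspace(-4, 1, 30)
residuals = []
for lam in lambdas:
    x = lsqr(A, b, damp=lam)[0]
    residuals.append(np.linalg.norm(A @ x - b))

# Simple discrepancy-principle choice: residual closest to the expected noise
# norm (an illustrative rule, not the selection criterion of the paper).
target = noise_level * np.sqrt(n)
best = int(np.argmin(np.abs(np.array(residuals) - target)))
print(f"chosen regularization parameter: {lambdas[best]:.3g}")
```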
Blew, Robert M; Lee, Vinson R; Farr, Joshua N; Schiferl, Daniel J; Going, Scott B
2014-02-01
Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale and (2) establish a quantitative motion assessment methodology. Scans were performed on 506 healthy girls (9-13 years) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n = 46) was examined to determine %Move's impact on bone parameters. Agreement between measurers was strong (intraclass correlation coefficient = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25 %. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25 % grouping. A pQCT visual inspection scale can be a reliable metric of image quality, but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat.
Blew, Robert M.; Lee, Vinson R.; Farr, Joshua N.; Schiferl, Daniel J.; Going, Scott B.
2013-01-01
Purpose Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale, and (2) establish a quantitative motion assessment methodology. Methods Scans were performed on 506 healthy girls (9–13yr) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n=46) was examined to determine %Move’s impact on bone parameters. Results Agreement between measurers was strong (ICC = .732 for tibia, .812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat or no repeat. The quantitative approach found ≥95% of subjects had %Move <25%. Comparison of initial and repeat scans by groups above and below 25% initial movement, showed significant differences in the >25% grouping. Conclusions A pQCT visual inspection scale can be a reliable metric of image quality but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat. PMID:24077875
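The abstracts above do not give the exact %Move formula; a minimal sketch, assuming %Move is the movement-artifact extent expressed as a percentage of limb width and applying the suggested 25% repeat threshold, might look like this (the numeric values are hypothetical):

```python
def percent_move(movement_extent_mm: float, limb_width_mm: float) -> float:
    """Movement artifact expressed relative to limb size (assumed definition)."""
    return 100.0 * movement_extent_mm / limb_width_mm

def needs_repeat(pmove: float, threshold: float = 25.0) -> bool:
    """Apply the 25% delineation suggested by the study for diaphyseal scans."""
    return pmove > threshold

# Hypothetical scan: 8 mm of streaking on a 70 mm wide diaphyseal cross-section
pm = percent_move(8.0, 70.0)
print(f"%Move = {pm:.1f} -> repeat scan: {needs_repeat(pm)}")
```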
A more quantitative extraction of arsenic-containing compounds from seafood matrices is essential in developing better dietary exposure estimates. More quantitative extraction often implies a more chemically aggressive set of extraction conditions. However, these conditions may...
A Quantitative Needs Assessment Technique for Cross-Cultural Work Adjustment Training.
ERIC Educational Resources Information Center
Selmer, Lyn
2000-01-01
A study of 67 Swedish expatriate bosses and 104 local Hong Kong middle managers tested a quantitative needs assessment technique measuring work values. Two-thirds of middle managers' work values were not correctly estimated by their bosses, especially instrumental values (pay, benefits, security, working hours and conditions), indicating a need…
ERIC Educational Resources Information Center
Haworth, Claire M. A.; Plomin, Robert
2010-01-01
Objective: To consider recent findings from quantitative genetic research in the context of molecular genetic research, especially genome-wide association studies. We focus on findings that go beyond merely estimating heritability. We use learning abilities and disabilities as examples. Method: Recent twin research in the area of learning…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-17
... Evaluation and Research (CBER) and suggestions for further development. The public workshop will include... Evaluation and Research (HFM-210), Food and Drug Administration, 1401 Rockville Pike, suite 200N, Rockville... models to generate quantitative estimates of the benefits and risks of influenza vaccination. The public...
Quantitative test for concave aspheric surfaces using a Babinet compensator.
Saxena, A K
1979-08-15
A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10^-3 rad over a segment length of 1.0 cm.
Gifford, Aliya; Walker, Ronald C.; Towse, Theodore F.; Brian Welch, E.
2015-01-01
Beyond estimation of depot volumes, quantitative analysis of adipose tissue properties could improve understanding of how adipose tissue correlates with metabolic risk factors. We investigated whether the fat signal fraction (FSF) derived from quantitative fat–water magnetic resonance imaging (MRI) scans at 3.0 T correlates to CT Hounsfield units (HU) of the same tissue. These measures were acquired in the subcutaneous white adipose tissue (WAT) at the umbilical level of 21 healthy adult subjects. A moderate correlation exists between MRI- and CT-derived WAT values for all subjects, R² = 0.54, p < 0.0001, with a slope of −2.6 (95% CI [−3.3, −1.8]), indicating that a decrease of 1 HU equals a mean increase of 0.38% FSF. We demonstrate that FSF estimates obtained using quantitative fat–water MRI techniques correlate with CT HU values in subcutaneous WAT, and therefore, MRI-based FSF could be used as an alternative to CT HU for assessing metabolic risk factors. PMID:26702407
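As a small worked example of the reported linear relation (slope of -2.6 HU per %FSF, hence roughly 1/2.6, or about 0.38 %FSF, per 1 HU decrease); note that the fit intercept is not reported in the abstract, so the value used below is a placeholder:

```python
# Reported fit: HU ~ slope * FSF(%) + intercept, with slope = -2.6 HU per %FSF
slope_hu_per_fsf = -2.6
fsf_per_hu_decrease = 1.0 / abs(slope_hu_per_fsf)       # ~0.38 %FSF per 1 HU decrease
print(f"{fsf_per_hu_decrease:.2f} %FSF increase per 1 HU decrease")

# Converting an HU value to an FSF estimate additionally needs the fit intercept,
# which the abstract does not report; the value below is purely a placeholder.
intercept_hu = -70.0                                     # hypothetical
def fsf_from_hu(hu: float) -> float:
    return (hu - intercept_hu) / slope_hu_per_fsf

print(f"FSF at -100 HU (placeholder intercept): {fsf_from_hu(-100.0):.1f} %")
```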
Sano, Yuko; Okuyama, Chio; Iehara, Tomoko; Matsushima, Shigenori; Yamada, Kei; Hosoi, Hajime; Nishimura, Tsunehiko
2012-07-01
The purpose of this study is to evaluate a new semi-quantitative estimation method using the ¹²³I-MIBG retention ratio to assess response to chemotherapy for advanced neuroblastoma. Thirteen children with advanced neuroblastoma (International Neuroblastoma Risk Group Staging System: stage M) were examined for a total of 51 studies with ¹²³I-MIBG scintigraphy (before and during chemotherapy). We proposed a new semi-quantitative method using MIBG retention ratio (count obtained with delayed image/count obtained with early image with decay correction) to estimate MIBG accumulation. We analyzed total ¹²³I-MIBG retention ratio (TMRR: total body count obtained with delayed image/total body count obtained with early image with decay correction) and compared with a scoring method in terms of correlation with tumor markers. TMRR showed significantly higher correlations with urinary catecholamine metabolites before chemotherapy (VMA: r² = 0.45, P < 0.05, HVA: r² = 0.627, P < 0.01) than MIBG score (VMA: r² = 0.19, P = 0.082, HVA: r² = 0.25, P = 0.137). There were relatively good correlations between serial change of TMRR and those of urinary catecholamine metabolites (VMA: r² = 0.274, P < 0.001, HVA: r² = 0.448, P < 0.0001) compared with serial change of MIBG score and those of tumor markers (VMA: r² = 0.01, P = 0.537, HVA: r² = 0.084, P = 0.697) during chemotherapy for advanced neuroblastoma. TMRR could be a useful semi-quantitative method for estimating early response to chemotherapy of advanced neuroblastoma because of its high correlation with urine catecholamine metabolites.
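A minimal sketch of the decay-corrected retention-ratio calculation described above; the ¹²³I half-life of roughly 13.2 h, the count values, and the imaging times are assumptions, not values from the study:

```python
import math

I123_HALF_LIFE_H = 13.2  # approximate physical half-life of 123I (assumption)

def decay_corrected_ratio(early_counts: float, delayed_counts: float,
                          hours_between: float) -> float:
    """Delayed/early count ratio with the delayed counts corrected for decay."""
    decay_factor = math.exp(-math.log(2) * hours_between / I123_HALF_LIFE_H)
    return (delayed_counts / decay_factor) / early_counts

# Hypothetical whole-body counts: early image at ~6 h, delayed image ~18 h later
tmrr = decay_corrected_ratio(early_counts=5.0e5, delayed_counts=1.2e5,
                             hours_between=18.0)
print(f"TMRR = {tmrr:.2f}")
```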
Heifetz, Eliyahu M; Soller, Morris
2015-07-07
High-resolution mapping of the loci (QTN) responsible for genetic variation in quantitative traits is essential for positional cloning of candidate genes, and for effective marker-assisted selection. The confidence interval (QTL) flanking the point estimate of QTN-location is proportional to the number of individuals in the mapping population carrying chromosomes recombinant in the given interval. Consequently, many designs for high resolution QTN mapping are based on increasing the proportion of recombinants in the mapping population. The "Targeted Recombinant Progeny" (TRP) design is a new design for high resolution mapping of a target QTN in crosses between pure or inbred lines. It is a three-generation procedure generating a large number of recombinant individuals within a QTL previously shown to contain a QTN. This is achieved by having individuals that carry chromosomes recombinant across the target QTL interval as parents of a large mapping population, most of whom will therefore carry recombinant chromosomes targeted to the given QTL. The TRP design is particularly useful for high resolution mapping of QTN that differentiate inbred or pure lines, and hence are not amenable to high resolution mapping by genome-wide association tests. In the absence of residual polygenic variation, population sizes required for achieving given mapping resolution by the TRP-F2 design relative to a standard F2 design ranged from 0.289 for a QTN with standardized allele substitution effect = 0.2, mapped to an initial QTL of 0.2 Morgan, to 0.041 for an equivalent QTN mapped to an initial QTL of 0.02 M. In the presence of residual polygenic variation, the relative effectiveness of the TRP design ranges from 1.068 to 0.151 for the same initial QTL intervals and QTN effect. Thus even in the presence of polygenic variation, the TRP can still provide major savings. Simulation showed that mapping by TRP should be based on 30-50 markers spanning the initial interval, and on at least 50 G2 families representing this number of recombination points. The TRP design can be an effective procedure for achieving high and ultra-high mapping resolution of a target QTN previously mapped to a known confidence interval (QTL).
Quantitative Precipitation Nowcasting: A Lagrangian Pixel-Based Approach
2012-01-01
Overview of T.E.S.T. (Toxicity Estimation Software Tool)
This talk provides an overview of T.E.S.T. (Toxicity Estimation Software Tool). T.E.S.T. predicts toxicity values and physical properties using a variety of different QSAR (quantitative structure activity relationship) approaches including hierarchical clustering, group contribut...
Researchers facilitated evaluation of chemicals that lack chronic oral toxicity values by using a QSAR model to develop estimates of potential toxicity for chemicals used in HF fluids or found in flowback or produced water.
Topics in Extrasolar Planet Characterization
NASA Astrophysics Data System (ADS)
Howe, Alex Ryan
I present four papers exploring different topics in the area of characterizing the atmospheric and bulk properties of extrasolar planets. In these papers, I present two new codes, in various forms, for modeling these objects. A code to generate theoretical models of transit spectra of exoplanets is featured in the first paper and is refined and expanded into the APOLLO code for spectral modeling and parameter retrieval in the fourth paper. Another code to model the internal structure and evolution of planets is featured in the second and third papers. The first paper presents transit spectra models of GJ 1214b and other super-Earth and mini-Neptune type planets (planets with a "solid", terrestrial composition and relatively small planets with a thick hydrogen-helium atmosphere, respectively) and fits them to observational data to estimate the atmospheric compositions and cloud properties of these planets. The second paper presents structural models of super-Earth and mini-Neptune type planets and estimates their bulk compositions from mass and radius estimates. The third paper refines these models with evolutionary calculations of thermal contraction and ultraviolet-driven mass loss. Here, we estimate the boundaries of the parameter space in which planets lose their initial hydrogen-helium atmospheres completely, and we also present formation and evolution scenarios for the planets in the Kepler-11 system. The fourth paper uses more refined transit spectra models, this time for hot Jupiter type planets, to explore the methods to design optimal observing programs for the James Webb Space Telescope to quantitatively measure the atmospheric compositions and other properties of these planets.
Molecular Analysis of the In Situ Growth Rates of Subsurface Geobacter Species
Giloteaux, Ludovic; Barlett, Melissa; Chavan, Milind A.; Smith, Jessica A.; Williams, Kenneth H.; Wilkins, Michael; Long, Philip; Lovley, Derek R.
2013-01-01
Molecular tools that can provide an estimate of the in situ growth rate of Geobacter species could improve understanding of dissimilatory metal reduction in a diversity of environments. Whole-genome microarray analyses of a subsurface isolate of Geobacter uraniireducens, grown under a variety of conditions, identified a number of genes that are differentially expressed at different specific growth rates. Expression of two genes encoding ribosomal proteins, rpsC and rplL, was further evaluated with quantitative reverse transcription-PCR (qRT-PCR) in cells with doubling times ranging from 6.56 h to 89.28 h. Transcript abundance of rpsC correlated best (r² = 0.90) with specific growth rates. Therefore, expression patterns of rpsC were used to estimate specific growth rates of Geobacter species during an in situ uranium bioremediation field experiment in which acetate was added to the groundwater to promote dissimilatory metal reduction. Initially, increased availability of acetate in the groundwater resulted in higher expression of Geobacter rpsC, and the increase in the number of Geobacter cells estimated with fluorescent in situ hybridization compared well with specific growth rates estimated from levels of in situ rpsC expression. However, in later phases, cell number increases were substantially lower than predicted from rpsC transcript abundance. This change coincided with a bloom of protozoa and increased attachment of Geobacter species to solid phases. These results suggest that monitoring rpsC expression may better reflect the actual rate at which Geobacter species are metabolizing and growing during in situ uranium bioremediation than changes in cell abundance. PMID:23275510
Modeling landslide recurrence in Seattle, Washington, USA
Salciarini, Diana; Godt, Jonathan W.; Savage, William Z.; Baum, Rex L.; Conversini, Pietro
2008-01-01
To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated in the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model that combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (General Extreme Value) probabilistic distribution to produce maps showing the shallow landslide recurrence induced, on a spatially distributed basis, as a function of rainfall duration and hillslope characteristics.
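For context, the infinite-slope stability calculation that TRIGRS-type models couple to the infiltration solution can be sketched as below; this is the generic form with a pressure head at depth, not the extended CRF/TRIGRS code itself, and the soil parameters are hypothetical:

```python
import math

def infinite_slope_fs(c_eff: float, phi_deg: float, slope_deg: float,
                      depth_m: float, psi_m: float,
                      gamma_soil: float = 20.0e3, gamma_w: float = 9.81e3) -> float:
    """Factor of safety of an infinite slope with pressure head psi at depth Z.

    FS = tan(phi)/tan(beta) + (c' - psi*gamma_w*tan(phi)) / (gamma_s*Z*sin(beta)*cos(beta))
    Units: Pa for cohesion, N/m^3 for unit weights, m for depth and pressure head.
    """
    phi = math.radians(phi_deg)
    beta = math.radians(slope_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (c_eff - psi_m * gamma_w * math.tan(phi)) / (
        gamma_soil * depth_m * math.sin(beta) * math.cos(beta))
    return frictional + cohesive

# Hypothetical colluvium on a 35 degree slope, 1.5 m thick, water table at the surface
print(infinite_slope_fs(c_eff=4.0e3, phi_deg=33.0, slope_deg=35.0,
                        depth_m=1.5, psi_m=1.5))
```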
NASA Astrophysics Data System (ADS)
Torres, A. D.; Rasmussen, K. L.; Bodine, D. J.; Dougherty, E.
2015-12-01
Plains Elevated Convection At Night (PECAN) was a large field campaign that studied nocturnal mesoscale convective systems (MCSs), convective initiation, bores, and low-level jets across the central plains in the United States. MCSs are responsible for over half of the warm-season precipitation across the central U.S. plains. The rainfall from deep convection of these systems over land have been observed to be underestimated by satellite radar rainfall-retrieval algorithms by as much as 40 percent. These algorithms have a strong dependence on the generally unmeasured rain drop-size distribution (DSD). During the campaign, our group measured rainfall DSDs, precipitation fall velocities, and total precipitation in the convective and stratiform regions of MCSs using Ott Parsivel optical laser disdrometers. The disdrometers were co-located with mobile pod units that measured temperature, wind, and relative humidity for quality control purposes. Data from the operational NEXRAD radar in LaCrosse, Wisconsin and space-based radar measurements from a Global Precipitation Measurement satellite overpass on July 13, 2015 were used for the analysis. The focus of this study is to compare DSD measurements from the disdrometers to radars in an effort to reduce errors in existing rainfall-retrieval algorithms. The error analysis consists of substituting measured DSDs into existing quantitative precipitation estimation techniques (e.g. Z-R relationships and dual-polarization rain estimates) and comparing these estimates to ground measurements of total precipitation. The results from this study will improve climatological estimates of total precipitation in continental convection that are used in hydrological studies, climate models, and other applications.
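As an illustration of the substitution step described (comparing a reflectivity-based rainfall estimate against a disdrometer-measured accumulation), the sketch below applies a standard convective Z-R relation; the coefficients and the reflectivity series are assumptions, not values from the campaign:

```python
import numpy as np

def rain_rate_from_dbz(dbz: np.ndarray, a: float = 300.0, b: float = 1.4) -> np.ndarray:
    """Invert Z = a * R**b (Z in mm^6 m^-3, R in mm/h); a and b are assumed coefficients."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

# Hypothetical 5-minute reflectivity samples over a disdrometer site (dBZ)
dbz_series = np.array([38.0, 42.0, 45.0, 41.0, 35.0])
rates = rain_rate_from_dbz(dbz_series)              # mm/h for each 5-min sample
accum_mm = np.sum(rates * (5.0 / 60.0))             # convert 5-min rates to depth in mm
print(f"radar-estimated accumulation: {accum_mm:.1f} mm")
# The disdrometer-measured accumulation over the same window would be compared
# against accum_mm to quantify the error of the assumed Z-R relationship.
```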
Geophysical Parameter Estimation of Near Surface Materials Using Nuclear Magnetic Resonance
NASA Astrophysics Data System (ADS)
Keating, K.
2017-12-01
Proton nuclear magnetic resonance (NMR), a mature geophysical technology used in petroleum applications, has recently emerged as a promising tool for hydrogeophysicists. The NMR measurement, which can be made in the laboratory, in boreholes, and with a surface-based instrument, is unique in that it is directly sensitive to water, via the initial signal magnitude, and thus provides a robust estimate of water content. In the petroleum industry, rock physics models have been established that relate NMR relaxation times to pore size distributions and permeability. These models are often applied directly for hydrogeophysical applications, despite differences in the material in these two environments (e.g., unconsolidated versus consolidated, and mineral content). Furthermore, the rock physics models linking NMR relaxation times to pore size distributions do not account for partially saturated systems that are important for understanding flow in the vadose zone. In our research, we are developing and refining quantitative rock physics models that relate NMR parameters to hydrogeological parameters. Here we highlight the limitations of directly applying established rock physics models to estimate hydrogeological parameters from NMR measurements, and show some of the successes we have had in model improvement. Using examples drawn from both laboratory and field measurements, we focus on the use of NMR in partially saturated systems to estimate water content, pore-size distributions, and the water retention curve. Despite the challenges in interpreting the measurements, valuable information about hydrogeological parameters can be obtained from NMR relaxation data, and we conclude by outlining pathways for improving the interpretation of NMR data for hydrogeophysical investigations.
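One example of the petroleum-derived rock-physics models referred to above is the SDR-type permeability estimator, in which permeability scales with porosity to the fourth power and the squared log-mean T2; the sketch below is a generic illustration with an arbitrary calibration constant and hypothetical data, not one of the refined models developed in this research:

```python
import numpy as np

def sdr_permeability(porosity: float, t2_dist_s: np.ndarray, weights: np.ndarray,
                     c_factor: float = 4.0e-3) -> float:
    """SDR-type estimator k ~ C * phi**4 * T2ML**2.

    T2ML is the log-mean of the T2 distribution (converted to ms here); C is an
    empirical calibration constant, and the default value is purely illustrative,
    so the returned permeability is in arbitrary calibration-dependent units.
    """
    t2ml_s = np.exp(np.sum(weights * np.log(t2_dist_s)) / np.sum(weights))
    return c_factor * porosity ** 4 * (t2ml_s * 1e3) ** 2

# Hypothetical unconsolidated-sand measurement: 35% porosity, four-bin T2 distribution
t2_bins = np.array([0.005, 0.02, 0.08, 0.3])    # seconds
amps = np.array([0.1, 0.2, 0.4, 0.3])           # relative amplitudes
print(f"k ~ {sdr_permeability(0.35, t2_bins, amps):.2f} (arbitrary units)")
```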
Chiu, Su-Chin; Lin, Te-Ming; Lin, Jyh-Miin; Chung, Hsiao-Wen; Ko, Cheng-Wen; Büchert, Martin; Bock, Michael
2017-09-01
To investigate possible errors in T1 and T2 quantification via MR fingerprinting with balanced steady-state free precession readout in the presence of intra-voxel phase dispersion and RF pulse profile imperfections, using computer simulations based on Bloch equations. A pulse sequence with TR changing in a Perlin noise pattern and a nearly sinusoidal pattern of flip angle following an initial 180-degree inversion pulse was employed. Gaussian distributions of off-resonance frequency were assumed for intra-voxel phase dispersion effects. Slice profiles of sinc-shaped RF pulses were computed to investigate flip angle profile influences. Following identification of the best fit between the acquisition signals and those established in the dictionary based on known parameters, estimation errors were reported. In vivo experiments were performed at 3T to examine the results. Slight intra-voxel phase dispersion with standard deviations from 1 to 3Hz resulted in prominent T2 under-estimations, particularly at large T2 values. T1 and off-resonance frequencies were relatively unaffected. Slice profile imperfections led to under-estimations of T1, which became greater as regional off-resonance frequencies increased, but could be corrected by including slice profile effects in the dictionary. Results from brain imaging experiments in vivo agreed with the simulation results qualitatively. MR fingerprinting using balanced SSFP readout in the presence of intra-voxel phase dispersion and imperfect slice profile leads to inaccuracies in quantitative estimations of the relaxation times. Copyright © 2017 Elsevier Inc. All rights reserved.
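The dictionary-matching step common to MR fingerprinting pipelines (finding the entry with the largest normalized inner product with the measured signal evolution) can be sketched as follows; the toy dictionary here stands in for the Bloch-simulated evolutions and is not the sequence used in the study:

```python
import numpy as np

def match_fingerprint(signal: np.ndarray, dictionary: np.ndarray,
                      params: np.ndarray) -> np.ndarray:
    """Return the (T1, T2) entry whose normalized dictionary signal best matches.

    dictionary: (n_entries, n_timepoints) simulated signal evolutions
    params:     (n_entries, 2) corresponding (T1, T2) values in ms
    """
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signal / np.linalg.norm(signal)
    best = np.argmax(np.abs(d_norm @ s_norm))
    return params[best]

# Toy damped-oscillation "evolutions" standing in for Bloch-simulated signals
t = np.linspace(0, 1, 200)
params = np.array([[800.0, 60.0], [1200.0, 90.0], [1600.0, 120.0]])
dictionary = np.array([np.exp(-t * 1000.0 / t2) * np.cos(2 * np.pi * t * 3)
                       for _, t2 in params])
noisy_signal = dictionary[1] + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print("matched (T1, T2):", match_fingerprint(noisy_signal, dictionary, params))
```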
Using Performance Tasks to Improve Quantitative Reasoning in an Introductory Mathematics Course
ERIC Educational Resources Information Center
Kruse, Gerald; Drews, David
2013-01-01
A full-cycle assessment of our efforts to improve quantitative reasoning in an introductory math course is described. Our initial iteration substituted more open-ended performance tasks for the active learning projects than had been used. Using a quasi-experimental design, we compared multiple sections of the same course and found non-significant…
Use of single nucleotide polymorphisms (SNP) to fine-map quantitative trait loci (QTL) in swine
USDA-ARS?s Scientific Manuscript database
Mapping quantitative trait loci (QTL) in swine at the US Meat Animal Research Center has relied heavily on linkage mapping in either F2 or Backcross families. QTL identified in the initial scans typically have very broad confidence intervals and further refinement of the QTL’s position is needed bef...
ERIC Educational Resources Information Center
Santizo, Isabelle Poupard
2017-01-01
This quantitative study focuses on the relationship between foreign language learners' aptitude and proficiency test scores. Four groups of 136 beginning students received six months of Initial Acquisition Training (IAT) in four different language categories, according to the level of complexity for an English speaker: French (Category I),…
Interdisciplinary Program for Quantitative Flaw Definition.
1978-01-01
Report front-matter fragment: Unit C includes Task 4 (microfocus X-ray and image enhancement of radiographic data) and Task 5 (conventional ultrasonic inspection methods applied to ceramics). Unit C was initiated in October of 1977 following encouraging nondestructive defect detectability studies in structural ceramics.
ERIC Educational Resources Information Center
Franklin, Somer L.; Slate, John R.; Joyner, Sheila A.
2014-01-01
In this article, we analyzed research studies in the field of graduate education. In particular, we explored the issue of inequity in graduate education through three key lenses of social science analyses. Furthermore, we analyzed selected quantitative research studies that undertook a comparative examination of aggregate trends in enrollment and…
Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan
2015-01-01
To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico. An HIV transmission model was built to estimate the number of infections transmitted by HIV-infected men who have sex with men (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The number of MSM-HIV+ on treatment and virally suppressed was estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14,239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections in 20 years. With ART initiated at 500 cells/mm3 and increased HIV testing, the reduction would be 75% in 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.
A statistical framework for protein quantitation in bottom-up MS-based proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu
qF-SSOP: real-time optical property corrected fluorescence imaging
Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain
2017-01-01
Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Åke; Winter, Reidar
2009-01-01
Background Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Methods Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. Results There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95 respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9% respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Conclusion Visual estimation of LVEF both using 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards a smaller variability using TP in comparison to 2D; this was, however, not statistically significant. PMID:19706183
NASA Astrophysics Data System (ADS)
Valdes, Pablo A.; Angelo, Joseph; Gioux, Sylvain
2015-03-01
Fluorescence imaging has shown promise as an adjunct to improve the extent of resection in neurosurgery and oncologic surgery. Nevertheless, current fluorescence imaging techniques do not account for the heterogeneous attenuation effects of tissue optical properties. In this work, we present a novel imaging system that performs real time quantitative fluorescence imaging using Single Snapshot Optical Properties (SSOP) imaging. We developed the technique and performed initial phantom studies to validate the quantitative capabilities of the system for intraoperative feasibility. Overall, this work introduces a novel real-time quantitative fluorescence imaging method capable of being used intraoperatively for neurosurgical guidance.
Radar QPE for hydrological design: Intensity-Duration-Frequency curves
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-04-01
Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than a point; second, the representativeness of point measurements decreases with the distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections usually restrain this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. The Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates for 23 years over a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique, and the reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations and providing insights into the use of radar QPE for hydrological design studies.
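A minimal sketch of the annual-extremes step (fitting a GEV distribution to annual maxima and reading off return levels for a single duration), using synthetic data in place of the radar-derived extremes:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# Hypothetical 23-year series of annual maximum 60-min rain intensities (mm/h),
# standing in for radar-QPE annual extremes extracted at one pixel.
annual_max_60min = 20.0 + 12.0 * rng.gumbel(size=23)

# Fit a GEV to the annual maxima and read off return levels (points on the IDF curve).
c, loc, scale = genextreme.fit(annual_max_60min)
for T in (2, 10, 25, 50):
    intensity = genextreme.isf(1.0 / T, c, loc, scale)
    print(f"60-min intensity, {T:>2}-yr return period: {intensity:.1f} mm/h")
# Repeating the fit for other durations (e.g., 10, 30, 180 min) yields the full IDF curves.
```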
Garmann, D; McLeay, S; Shah, A; Vis, P; Maas Enriquez, M; Ploeger, B A
2017-07-01
The pharmacokinetics (PK), safety and efficacy of BAY 81-8973, a full-length, unmodified, recombinant human factor VIII (FVIII), were evaluated in the LEOPOLD trials. The aim of this study was to develop a population PK model based on pooled data from the LEOPOLD trials and to investigate the importance of including samples with FVIII levels below the limit of quantitation (BLQ) to estimate half-life. The analysis included 1535 PK observations (measured by the chromogenic assay) from 183 male patients with haemophilia A aged 1-61 years from the 3 LEOPOLD trials. The limit of quantitation was 1.5 IU dL -1 for the majority of samples. Population PK models that included or excluded BLQ samples were used for FVIII half-life estimations, and simulations were performed using both estimates to explore the influence on the time below a determined FVIII threshold. In the data set used, approximately 16.5% of samples were BLQ, which is not uncommon for FVIII PK data sets. The structural model to describe the PK of BAY 81-8973 was a two-compartment model similar to that seen for other FVIII products. If BLQ samples were excluded from the model, FVIII half-life estimations were longer compared with a model that included BLQ samples. It is essential to assess the importance of BLQ samples when performing population PK estimates of half-life for any FVIII product. Exclusion of BLQ data from half-life estimations based on population PK models may result in an overestimation of half-life and underestimation of time under a predetermined FVIII threshold, resulting in potential underdosing of patients. © 2017 Bayer AG. Haemophilia Published by John Wiley & Sons Ltd.
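A toy demonstration of the reported direction of bias (not the population PK model itself): simulating mono-exponential profiles with log-normal error and simply discarding BLQ samples keeps only the high tail of late observations, flattening the terminal slope and lengthening the apparent half-life relative to a fit that could use all samples. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
true_half_life = 12.0                              # h, hypothetical FVIII terminal half-life
k = np.log(2) / true_half_life
t = np.tile([1.0, 4, 8, 12, 24, 48, 72, 96], (200, 1))     # 200 simulated PK profiles
conc = 100.0 * np.exp(-k * t)                      # IU/dL after a hypothetical dose
obs = conc * np.exp(0.3 * rng.standard_normal(t.shape))    # log-normal residual error

loq = 1.5                                          # IU/dL, as for the chromogenic assay
above = obs >= loq

def half_life(times, values):
    slope = np.polyfit(times, np.log(values), 1)[0]
    return np.log(2) / -slope

hl_full = half_life(t.ravel(), obs.ravel())        # reference: all samples usable
hl_no_blq = half_life(t[above], obs[above])        # BLQ samples simply discarded
print(f"full data: {hl_full:.2f} h   BLQ excluded: {hl_no_blq:.2f} h")
# The population model in the study handles BLQ data through the likelihood rather
# than imputation or exclusion; this toy fit only illustrates the exclusion bias.
```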
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLabTM-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods built on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-05-01
Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods built on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/
Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.
2003-01-01
Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.
Reef fish communities are spooked by scuba surveys and may take hours to recover
Cheal, Alistair J.; Miller, Ian R.
2018-01-01
Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish’s functional impact. However, diver surveys may be biased because fishes may either avoid or are attracted to divers and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from three to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998
Strategies for Revising Judgment: How (and How Well) People Use Others' Opinions
ERIC Educational Resources Information Center
Soll, Jack B.; Larrick, Richard P.
2009-01-01
A basic issue in social influence is how best to change one's judgment in response to learning the opinions of others. This article examines the strategies that people use to revise their quantitative estimates on the basis of the estimates of another person. The authors note that people tend to use 2 basic strategies when revising estimates:…
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2010-06-01
A quantitative approach for landslide risk assessment along transportation lines is presented and applied to a road and a railway alignment in the Nilgiri hills in southern India. The method allows estimating direct risk affecting the alignments, vehicles and people, and indirect risk resulting from the disruption of economic activities. The data required for the risk estimation were obtained from historical records. A total of 901 landslides were catalogued initiating from cut slopes along the railway and road alignment. The landslides were grouped into three magnitude classes based on the landslide type, volume, scar depth, run-out distance, etc., and their probability of occurrence was obtained using frequency-volume distribution. Hazard, for a given return period, expressed as the number of landslides of a given magnitude class per kilometre of cut slopes, was obtained using Gumbel distribution and probability of landslide magnitude. In total 18 specific hazard scenarios were generated using the three magnitude classes and six return periods (1, 3, 5, 15, 25, and 50 years). The assessment of the vulnerability of the road and railway line was based on damage records whereas the vulnerability of different types of vehicles and people was subjectively assessed based on limited historic incidents. Direct specific loss for the alignments (railway line and road) and vehicles (train, bus, lorry, car and motorbike) was expressed in monetary value (US$), and direct specific loss of life of commuters was expressed in annual probability of death. Indirect specific loss (US$) derived from the traffic interruption was evaluated considering alternative driving routes, and includes losses resulting from additional fuel consumption, additional travel cost, loss of income to the local business, and loss of revenue to the railway department. The results indicate that the total loss, including both direct and indirect loss, for return periods from 1 to 50 years, varies from US$ 90,840 to US$ 779,500 and the average annual total loss was estimated as US$ 35,000. The annual probability of a person most at risk travelling in a bus, lorry, car, motorbike and train is less than 10^-4 per annum in all the time periods considered. The detailed estimation of direct and indirect risk will facilitate developing landslide risk mitigation and management strategies for transportation lines in the study area.
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increasing quantitative accuracy of SPECT scanners.
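For reference, the core MLEM/OSEM update underlying such reconstructions can be sketched as below; the attenuation, scatter, and collimator-response corrections described above would enter through the system matrix and additive terms, which this toy example omits:

```python
import numpy as np

def mlem(sysmat: np.ndarray, counts: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Basic MLEM reconstruction: x <- x * A^T(y / Ax) / A^T(1).

    In a quantitative SPECT pipeline, attenuation and collimator response are
    folded into the system matrix A, and scatter enters as an additive term in
    the forward projection; both are omitted in this toy example.
    """
    sens = sysmat.sum(axis=0)                       # A^T 1, sensitivity image
    x = np.ones(sysmat.shape[1])
    for _ in range(n_iter):
        proj = sysmat @ x
        ratio = np.divide(counts, proj, out=np.zeros_like(counts), where=proj > 0)
        x *= (sysmat.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy 1-D "scanner": 40 detector bins viewing 30 voxels through a random PSF-like matrix
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(40, 30))) + 0.01
activity = np.zeros(30)
activity[10:18] = 5.0                               # hot region
y = rng.poisson(A @ activity).astype(float)
recon = mlem(A, y)
print(np.round(recon[8:20], 2))
```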
Mackenbach, J P; Looman, C W
1988-09-01
Secular trends of mortality from 21 infectious diseases in the Netherlands were studied by inspection of age/sex-standardized mortality curves and by log-linear regression analysis. An attempt was made to obtain quantitative estimates for changes coinciding with the introduction of antibiotics. Two possible types of effect were considered: a sharp reduction of mortality at the moment of the introduction of antibiotics, and a longer lasting (acceleration of) mortality decline after the introduction. Changes resembling the first type of effect were possibly present for many infectious diseases, but were difficult to measure exactly, due to late effects on mortality of World War II. Changes resembling the second type of effect were present in 16 infectious diseases and were sometimes quite large. For example, estimated differences in per cent per annum mortality change were 10% or larger for puerperal fever, scarlet fever, rheumatic fever, erysipelas, otitis media, tuberculosis, and bacillary dysentery. No acceleration of mortality decline after the introduction of antibiotics was present in mortality from 'all other diseases'. Although the exact contribution of antibiotics to the observed changes cannot be inferred from this time trend analysis, the quantitative estimates of the changes show that even a partial contribution would represent a substantial effect of antibiotics on mortality from infectious diseases in the Netherlands.
Goudet, Jérôme; Büchi, Lucie
2006-02-01
To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (FST) and quantitative traits (QST). If the trait is neutral and its genetic determination is additive, then theory predicts that QST = FST, while QST > FST is predicted under directional selection for different local optima, and QST < FST is predicted under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between QST and FST for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates QST relative to FST. Under inbreeding, the effect of dominance vanishes, and we show that for selfing species a better estimate of QST is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of QST are derived from individuals originating from many populations, we conclude that the pattern QST > FST, and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene action.
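For orientation, the standard definition of QST that such comparisons rely on (for an outcrossing species with additive gene action) is recalled below; this is textbook material rather than a formula reproduced from the abstract.

```latex
% QST from between-population and within-population additive genetic variance components;
% under neutrality and purely additive gene action its expectation is approximately FST.
\[
  Q_{ST} \;=\; \frac{\sigma^{2}_{B}}{\sigma^{2}_{B} + 2\,\sigma^{2}_{W}},
  \qquad
  \mathbb{E}\!\left[Q_{ST}\right] \;\approx\; F_{ST}\quad\text{(neutral, additive trait)} .
\]
```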
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI
2012-05-29
A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
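A schematic rendering of that decision flow might look like the sketch below; the thresholds, decay models and signal names are invented placeholders, none of which come from the patent.

```python
# Sketch of the decision flow described in the claim; thresholds, the efficiency-decay models
# and signal names are illustrative placeholders, not values from the patent.

def estimate_efficiency(eta_initial, temperature_c, dosing_rate, hours):
    """Pick a hypothetical decay model based on operating regime and project efficiency."""
    if temperature_c > 350 or dosing_rate > 1.0:
        decay_per_hour = 0.010      # faster degradation regime (placeholder)
    else:
        decay_per_hour = 0.002      # slower degradation regime (placeholder)
    return max(eta_initial - decay_per_hour * hours, 0.0)

def needs_regeneration(eta_initial, temperature_c, dosing_rate, hours, threshold=0.6):
    eta = estimate_efficiency(eta_initial, temperature_c, dosing_rate, hours)
    return eta < threshold          # initiate regeneration mode when efficiency drops too far

print(needs_regeneration(eta_initial=0.85, temperature_c=380, dosing_rate=1.2, hours=30))
```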
Begum, Sharmin; Uddin, Md Jashim; Platts-Mills, James A.; Liu, Jie; Kirkpatrick, Beth D.; Chowdhury, Anwarul H.; Jamil, Khondoker M.; Haque, Rashidul; Petri, William A.; Houpt, Eric R.
2014-01-01
Amid polio eradication efforts, detection of oral polio vaccine (OPV) virus in stool samples can provide information about rates of mucosal immunity and allow estimation of the poliovirus reservoir. We developed a multiplex one-step quantitative reverse transcription-PCR (qRT-PCR) assay for detection of OPV Sabin strains 1, 2, and 3 directly in stool samples with an external control to normalize samples for viral quantity and compared its performance with that of viral culture. We applied the assay to samples from infants in Dhaka, Bangladesh, after the administration of trivalent OPV (tOPV) at weeks 14 and 52 of life (on days 0 [pre-OPV], +4, +11, +18, and +25 relative to vaccination). When 1,350 stool samples were tested, the sensitivity and specificity of the quantitative PCR (qPCR) assay were 89 and 91% compared with culture. A quantitative relationship between culture+/qPCR+ and culture−/qPCR+ stool samples was observed. The kinetics of shedding revealed by qPCR and culture were similar. qPCR quantitative cutoffs based on the day +11 or +18 stool samples could be used to identify the culture-positive shedders, as well as the long-duration or high-frequency shedders. Interestingly, qPCR revealed that a small minority (7%) of infants contributed the vast majority (93 to 100%) of the total estimated viral excretion across all subtypes at each time point. This qPCR assay for OPV can simply and quantitatively detect all three Sabin strains directly in stool samples to approximate shedding both qualitatively and quantitatively. PMID:25378579
Peters, Susan; Kromhout, Hans; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Vermeulen, Roel
2013-01-01
We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m³ for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case-control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by the Pearson correlation (Rp) and differences in units of exposure (mg/m³-years). Alternative models concerned changes in the application of job- and region-specific estimates and the exposure ceiling, and omitting the a priori exposure ranking. Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m³-years, with a median of 1.76 mg/m³-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26-33% difference), but without changing the relative ranking (Rp = 0.99). Exposure estimates derived from SYN-JEM appeared to be plausible compared with (historical) levels described in the literature. Decisions taken in the development of SYN-JEM did not critically change the cumulative exposure levels. The influence of region-specific estimates needs to be explored in future risk analyses.
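As a toy illustration of how cumulative exposure is assembled from a job-exposure matrix and an occupational history, consider the sketch below; the JEM entries, job history and nearest-year lookup rule are placeholders, not SYN-JEM content.

```python
# Sketch: cumulative RCS exposure for one occupational history using a job-exposure matrix.
# JEM values and the job history below are invented placeholders, not SYN-JEM entries.

jem = {  # (job, region, year) -> exposure intensity in mg/m3 (placeholders)
    ("miner", "IT", 1955): 0.10,
    ("miner", "IT", 1965): 0.08,
    ("driver", "IT", 1975): 0.00,
}

history = [  # (job, region, start_year, end_year)
    ("miner", "IT", 1955, 1965),
    ("driver", "IT", 1975, 1980),
]

def intensity(job, region, year):
    # Use the closest available JEM year for that job/region (simplistic lookup rule).
    candidates = [(abs(y - year), v) for (j, r, y), v in jem.items() if j == job and r == region]
    return min(candidates)[1] if candidates else 0.0

cumulative = sum(intensity(j, r, y) for (j, r, start, end) in history
                 for y in range(start, end))
print(f"cumulative exposure ~ {cumulative:.2f} mg/m3-years")
```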
Changes in blast zone albedo patterns around new martian impact craters
NASA Astrophysics Data System (ADS)
Daubar, I. J.; Dundas, C. M.; Byrne, S.; Geissler, P.; Bart, G. D.; McEwen, A. S.; Russell, P. S.; Chojnacki, M.; Golombek, M. P.
2016-03-01
"Blast zones" (BZs) around new martian craters comprise various albedo features caused by the initial impact, including diffuse halos, extended linear and arcuate rays, secondary craters, ejecta patterns, and dust avalanches. We examined these features for changes in repeat images separated by up to four Mars years. Here we present the first comprehensive survey of the qualitative and quantitative changes observed in impact blast zones over time. Such changes are most likely due to airfall of high-albedo dust restoring darkened areas to their original albedo, the albedo of adjacent non-impacted surfaces. Although some sites show drastic changes over short timescales, nearly half of the sites show no obvious changes over several Mars years. Albedo changes are more likely to occur at higher-latitude sites, lower-elevation sites, and at sites with smaller central craters. No correlation was seen between amount of change and Dust Cover Index, relative halo size, or historical regional albedo changes. Quantitative albedo measurements of the diffuse dark halos relative to their surroundings yielded estimates of fading lifetimes for these features. The average lifetime among sites with measurable fading is ∼15 Mars years; the median is ∼8 Mars years for a linear brightening. However, at approximately half of sites with three or more repeat images, a nonlinear function with rapid initial fading followed by a slow increase in albedo provides a better fit to the fading behavior; this would predict even longer lifetimes. The predicted lifetimes of BZs are comparable to those of slope streaks, and considered representative of fading by global atmospheric dust deposition; they last significantly longer than dust devil or rover tracks, albedo features that are erased by different processes. These relatively long lifetimes indicate that the measurement of the current impact rate by Daubar et al. (Daubar, I.J. et al. [2013]. Icarus 225, 506-516. http://dx.doi.org/10.1016/j.icarus.2013.04.009) does not suffer significantly from overall under-sampling due to blast zones fading before new impact sites can be initially discovered. However, the prevalence of changes seen around smaller craters may explain in part their shallower size frequency distribution.
Kang, Bo-Sik; Lee, Jang-Eun; Park, Hyun-Jin
2014-06-01
In a Korean rice wine (makgeolli) model, we sought to develop a prediction model establishing a quantitative relationship between the initial amino acids in makgeolli mash and major aromatic compounds, such as fusel alcohols, their acetate esters, and ethyl esters of fatty acids, in the brewed makgeolli. A mass-spectrometry-based electronic nose (MS-EN) was used to qualitatively discriminate between makgeollis made from makgeolli mashes with different amino acid compositions. Following this measurement, headspace solid-phase microextraction coupled to gas chromatography-mass spectrometry (HS-SPME GC-MS) combined with the partial least-squares regression (PLSR) method was employed to quantitatively correlate the amino acid composition of makgeolli mash with the major aromatic compounds evolved during makgeolli fermentation. In the qualitative prediction with MS-EN analysis, the makgeollis were well discriminated according to the volatile compounds derived from the amino acids of makgeolli mash. Twenty-seven ion fragments with mass-to-charge ratios (m/z) of 55 to 98 amu were responsible for the discrimination. In the GC-MS/PLSR analysis, the quantitative relationship between the initial amino acids of makgeolli mash and the fusel compounds of makgeolli showed that the coefficient of determination (R²) for most of the fusel compounds ranged from 0.77 to 0.94, indicating good correlation, except for 2-phenylethanol (R² = 0.21), whereas R² for ethyl esters of medium-chain fatty acids (MCFAs), including ethyl caproate, ethyl caprylate, and ethyl caprate, was 0.17 to 0.40, indicating poor correlation. Amino acids are known to affect aroma in alcoholic beverages. In this study, we demonstrated that an electronic nose rapidly and reproducibly differentiated Korean rice wines (makgeollis) by the volatile compounds evolved from amino acids and, subsequently, that a quantitative correlation with acceptable R² between amino acids and fusel compounds could be established via HS-SPME GC-MS combined with PLSR. Our approach for predicting the quantities of volatile compounds in the finished product from the initial conditions of fermentation may give food researchers insight into how to modify and optimize the quality of the corresponding products. © 2014 Institute of Food Technologists®
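The GC-MS/PLSR step is, in essence, a multivariate regression from amino acid composition to volatile-compound levels; a minimal sketch using scikit-learn's PLSRegression on random placeholder data is shown below, purely to illustrate the workflow rather than the study's calibration.

```python
# Sketch: relating initial amino acid composition (X) to fusel-compound levels (Y) with PLS
# regression, as in the GC-MS + PLSR step. Data are random placeholders, not makgeolli values.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((30, 17))                                      # 30 mashes x 17 amino acids
Y = X @ rng.random((17, 5)) + 0.1 * rng.normal(size=(30, 5))  # 5 fusel compounds (placeholder)

pls = PLSRegression(n_components=4)
pls.fit(X, Y)
Y_hat = pls.predict(X)

# Per-compound coefficient of determination, analogous to the R^2 values reported above.
print([round(r2_score(Y[:, i], Y_hat[:, i]), 2) for i in range(Y.shape[1])])
```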
Hinds, P S; Scandrett-Hibden, S; McAulay, L S
1990-04-01
The reliability and validity of qualitative research findings are viewed with scepticism by some scientists. This scepticism is derived from the belief that qualitative researchers give insufficient attention to estimating reliability and validity of data, and the differences between quantitative and qualitative methods in assessing data. The danger of this scepticism is that relevant and applicable research findings will not be used. Our purpose is to describe an evaluative strategy for use with qualitative data, a strategy that is a synthesis of quantitative and qualitative assessment methods. Results of the strategy and factors that influence its use are also described.
Mager, P P; Rothe, H
1990-10-01
Multicollinearity of physicochemical descriptors leads to serious consequences in quantitative structure-activity relationship (QSAR) analysis, such as incorrect estimators and test statistics for the regression coefficients of the ordinary least-squares (OLS) model usually applied to QSARs. Besides diagnosing simple collinearity, principal component regression analysis (PCRA) also allows the diagnosis of various types of multicollinearity. Only if the absolute values of the PCRA estimators are order statistics that decrease monotonically can the effects of multicollinearity be circumvented. Otherwise, obscure phenomena may be observed, such as good data recognition but low predictive power of a QSAR model.
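A compact sketch of principal component regression on deliberately collinear descriptors is given below; the descriptor and activity values are random placeholders, and the component-selection rule is simplified relative to the diagnostics discussed above.

```python
# Sketch of principal component regression for collinear descriptors: regress activity on the
# leading principal components instead of on the raw, collinear descriptors. Data are placeholders.

import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 6
base = rng.normal(size=(n, 2))
X = np.column_stack([base, base @ rng.normal(size=(2, p - 2))])   # deliberately collinear block
X += 0.01 * rng.normal(size=X.shape)
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)            # synthetic "activity"

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                       # keep the components carrying most of the variance
scores = U[:, :k] * s[:k]                   # principal component scores
gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
beta_pcr = Vt[:k].T @ gamma                 # back-transform to descriptor-space coefficients
print(np.round(beta_pcr, 3))
```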
FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.
Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto
2010-01-01
Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that fusing both parameters yields three times better accuracy than is obtained from the current and vibration signals used individually.
A model for the characterization of the spatial properties in vestibular neurons
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Bush, G. A.; Perachio, A. A.
1992-01-01
Quantitative study of the static and dynamic response properties of some otolith-sensitive neurons has been difficult in the past partly because their responses to different linear acceleration vectors exhibited no "null" plane and a dependence of phase on stimulus orientation. The theoretical formulation of the response ellipse provides a quantitative way to estimate the spatio-temporal properties of such neurons. Its semi-major axis gives the direction of the polarization vector (i.e., direction of maximal sensitivity) and it estimates the neuronal response for stimulation along that direction. In addition, the semi-minor axis of the ellipse provides an estimate of the neuron's maximal sensitivity in the "null" plane. In this paper, extracellular recordings from otolith-sensitive vestibular nuclei neurons in decerebrate rats were used to demonstrate the practical application of the method. The experimentally observed gain and phase dependence on the orientation angle of the acceleration vector in a head-horizontal plane was described and satisfactorily fit by the response ellipse model. In addition, the model satisfactorily fits neuronal responses in three-dimensions and unequivocally demonstrates that the response ellipse formulation is the general approach to describe quantitatively the spatial properties of vestibular neurons.
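One compact way to write such a spatio-temporal model is sketched below; this is a hedged paraphrase of the general formulation, not the paper's exact notation.

```latex
% Hedged sketch: the response to sinusoidal linear acceleration of frequency \omega along the
% unit vector \hat{u} modelled as the projection onto a complex sensitivity vector \mathbf{S}.
\[
  r(t;\hat{u}) \;=\; \operatorname{Re}\!\left[(\hat{u}\cdot \mathbf{S})\,e^{i\omega t}\right],
  \qquad \mathbf{S} \in \mathbb{C}^{3}.
\]
% As \hat{u} rotates in a stimulus plane, the complex gain \hat{u}\cdot\mathbf{S} traces an
% ellipse: its semi-major axis points along the direction of maximal sensitivity, and its
% semi-minor axis gives the residual sensitivity in the "null" plane.
```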
[Quantitative estimation of urban atmospheric CO2 sources by carbon isotope composition].
Liu, Wei; Wei, Nan-Nan; Wang, Guang-Hua; Yao, Jian; Zeng, You-Shi; Fan, Xue-Bo; Geng, Yan-Hong; Li, Yan
2012-04-01
To effectively reduce urban carbon emissions and to verify the effectiveness of current urban carbon emission reduction projects, the sources of urban atmospheric CO2 must be estimated quantitatively and correctly. Since little carbon isotope fractionation occurs during transport from pollution sources to the receptor, the carbon isotope composition can be used for source apportionment. In the present study, a method was established to quantitatively estimate the sources of urban atmospheric CO2 from the carbon isotope composition. Both diurnal and height variations of the concentrations of CO2 derived from biomass, vehicle exhaust and coal burning were then determined for atmospheric CO2 in the Jiading district of Shanghai. Biomass-derived CO2 accounts for the largest portion of atmospheric CO2. The concentrations of CO2 derived from coal burning are larger at night (00:00, 04:00 and 20:00) than in the daytime (08:00, 12:00 and 16:00) and increase with height, whereas those derived from vehicle exhaust decrease with height. The diurnal and height variations of the sources reflect the emission and transport characteristics of atmospheric CO2 in the Jiading district of Shanghai.
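Such apportionment rests on an isotope mass balance; a generic form is sketched below, with the caveat that the study's exact source categories, end-member values and any additional tracers are not reproduced here.

```latex
% Generic isotope mass balance underlying such source apportionment (a hedged sketch):
\[
  C_{\mathrm{obs}} \;=\; \sum_{i} C_{i},
  \qquad
  \delta^{13}\mathrm{C}_{\mathrm{obs}}\, C_{\mathrm{obs}} \;=\; \sum_{i} \delta^{13}\mathrm{C}_{i}\, C_{i},
\]
% where C_i is the CO2 concentration contributed by source i and \delta^{13}C_i is that source's
% characteristic end-member signature. With n sources these two relations alone are insufficient,
% so n-2 additional constraints (e.g. other isotopes or tracer ratios) are needed to resolve them.
```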
Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U; Arce-Santana, Edgar R; Maitland, Kristen C; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A
2014-05-19
Multispectral fluorescence lifetime imaging (m-FLIM) can potentially allow identifying the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy tissue from precancerous lesions, based on the relative concentration of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide a quantitative description of tissue biochemical composition.
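Once end-member decays are available, abundance estimation reduces to a nonnegative linear unmixing step; the sketch below uses synthetic end-members and SciPy's nnls solver, whereas the paper's method additionally estimates the number of components and the end-members themselves blindly.

```python
# Sketch: linear unmixing of a measured decay into end-member decays with nonnegative abundances.
# End-member profiles and the measurement are synthetic placeholders, not m-FLIM data.

import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 20, 200)                                   # ns
endmembers = np.column_stack([np.exp(-t / tau) for tau in (1.0, 4.0, 9.0)])
true_abund = np.array([0.2, 0.5, 0.3])
measured = endmembers @ true_abund + 0.01 * np.random.default_rng(0).normal(size=t.size)

abund, _ = nnls(endmembers, measured)                         # nonnegative least squares
print(np.round(abund / abund.sum(), 2))                       # relative contributions
```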
Estimation of total bacteria by real-time PCR in patients with periodontal disease.
Brajović, Gavrilo; Popović, Branka; Puletić, Miljan; Kostić, Marija; Milasin, Jelena
2016-01-01
Periodontal diseases are associated with the presence of elevated levels of bacteria within the gingival crevice. The aim of this study was to evaluate the total amount of bacteria in subgingival plaque samples from patients with periodontal disease. A quantitative evaluation of the total bacterial amount using quantitative real-time polymerase chain reaction (qRT-PCR) was performed on 20 samples from patients with ulceronecrotic periodontitis and on 10 samples from healthy subjects. The estimation of the total bacterial amount was based on the 16S rRNA gene copy number, determined by comparison with the Ct values/gene copy numbers of the standard curve. A statistically significant difference between the average gene copy number of total bacteria in periodontal patients (2.55 x 10⁷) and healthy controls (2.37 x 10⁶) was found (p = 0.01). Also, a trend toward higher gene copy numbers in deeper periodontal lesions (> 7 mm) was confirmed by a positive coefficient of correlation (r = 0.073). The quantitative estimation of total bacteria based on gene copy number could be an important additional tool in diagnosing periodontitis.
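Gene copy numbers are obtained from Ct values through a standard curve; a minimal sketch with invented standard-curve points follows.

```python
# Sketch: converting qPCR Ct values to 16S rRNA gene copy numbers via a standard curve.
# Standard-curve points and the sample Ct value are illustrative placeholders.

import numpy as np

std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])       # known copies in the standards
std_ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])      # measured Ct for the standards

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)  # Ct = slope*log10(copies) + b
efficiency = 10 ** (-1 / slope) - 1                    # amplification-efficiency check

def copies_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

print(f"efficiency ~ {efficiency:.2f}")
print(f"sample with Ct 22.0 -> {copies_from_ct(22.0):.2e} copies")
```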
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
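The modelling idea can be reproduced in a few lines: draw Poisson-distributed initial cell numbers, thin them with a Weibull-type survival probability, and compare the surviving counts with a Poisson distribution; the parameter values below are illustrative, not the study's fitted values.

```python
# Sketch of the simulation idea: Poisson-distributed initial cell numbers, independent survival
# with a Weibull-type probability, and a check of the surviving counts against a Poisson law.
# Parameter values are illustrative placeholders, not the study's fitted parameters.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam0, delta, p, t = 2.0, 5.0, 0.8, 10.0        # initial mean, Weibull scale/shape, storage time

p_survive = 10 ** (-(t / delta) ** p)          # Weibull survival probability (log10 reductions)
n0 = rng.poisson(lam0, size=100_000)           # initial cell numbers per well
survivors = rng.binomial(n0, p_survive)        # independent survival of each cell

# Binomial thinning of a Poisson variable is again Poisson with mean lam0 * p_survive.
observed = np.bincount(survivors, minlength=10)[:10] / survivors.size
expected = stats.poisson.pmf(np.arange(10), lam0 * p_survive)
print(np.round(observed, 4))
print(np.round(expected, 4))
```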
Sustaining Opportunity in Rural Community Colleges
ERIC Educational Resources Information Center
Torres, Vasti; Viterito, Arthur; Heeter, Aimee; Hernandez, Ebelia; Santiague, Lilia; Johnson, Susan
2013-01-01
This assessment considers the sustainability of initiatives begun as a result of participation in the Rural Community College Initiative (RCCI). Case studies were conducted at eight community colleges, and quantitative data was gathered from the U.S. Census, the Department of Labor, and the Integrated Postsecondary Education Data System (IPEDS).…
Educational Neuroscience: New Horizons for Research in Mathematics Education
ERIC Educational Resources Information Center
Campbell, Stephen R.
2006-01-01
This paper outlines an initiative in mathematics education research that aims to augment qualitative methods of research into mathematical cognition and learning with quantitative methods of psychometrics and psychophysiology. Background and motivation are provided for this initiative, which is coming to be referred to as educational neuroscience.…
Activities for Engaging Schools in Health Promotion
ERIC Educational Resources Information Center
Bardi, Mohammad; Burbank, Andrea; Choi, Wayne; Chow, Lawrence; Jang, Wesley; Roccamatisi, Dawn; Timberley-Berg, Tonia; Sanghera, Mandeep; Zhang, Margaret; Macnab, Andrew J.
2014-01-01
Purpose: The purpose of this paper is to describe activities used to initiate health promotion in the school setting. Design/Methodology/Approach: Description of successful pilot Health Promoting School (HPS) initiatives in Canada and Uganda and the validated measures central to each program. Evaluation methodologies: quantitative data from the…
The Choice of Initial Web Search Strategies: A Comparison between Finnish and American Searchers.
ERIC Educational Resources Information Center
Iivonen, Mirja; White, Marilyn Domas
2001-01-01
Describes a study that used qualitative and quantitative methodologies to analyze differences between Finnish and American Web searchers in their choice of initial search strategies (direct address, subject directory, and search engines) and their reasoning underlying their choices. Considers implications for considering cultural differences in…
Multi-Dimensional Assessment of Professional Competence during Initial Pilot Training
ERIC Educational Resources Information Center
Larson, Douglas Andrew
2017-01-01
A twenty-year forecast predicting significant increases in global air transportation portends a need to increase the capacity and effectiveness of initial pilot training. In addition to quantitative concerns related to the supply of new pilots, industry leaders have expressed dissatisfaction with the qualitative output of current aviation training…
We used quantitative microbial risk assessment (QMRA) to estimate the risk of gastrointestinal (GI) illness associated with swimming in recreational waters containing different concentrations of the human-associated fecal qPCR markers from raw sewage (HF183 and HumM2). The volume/volu...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-26
..., we have decided it is not feasible to provide meaningful quantitative estimates of the incremental... potential economic impacts to oil spill response. This revision (i.e., replacing quantitative costs with a... and separate PCE in this final designation. As more research is completed, and we learn more of the...
Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors
NASA Astrophysics Data System (ADS)
Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose
2018-03-01
In this paper we prove that a class of skew product maps with a non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated with a contracting Lorenz attractor at all points having a well-defined local dimension, and a quantitative recurrence estimate.
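For reference, the logarithm law in question is usually stated as follows (a standard formulation, not the paper's exact theorem):

```latex
% Standard form of the logarithm law for hitting times: if \tau_{B_r(y)}(x) is the first time
% the orbit of x enters the ball B_r(y), and the invariant measure \mu has local dimension
% d_\mu(y) at y, then
\[
  \lim_{r\to 0}\frac{\log \tau_{B_r(y)}(x)}{-\log r} \;=\; d_\mu(y)
  \qquad\text{for }\mu\text{-almost every }x .
\]
```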
Dispersal of Invasive Forest Insects via Recreational Firewood: A Quantitative Analysis
Frank H. Koch; Denys Yemshanov; Roger D. Magarey; William D. Smith
2012-01-01
Recreational travel is a recognized vector for the spread of invasive species in North America. However, there has been little quantitative analysis of the risks posed by such travel and the associated transport of firewood. In this study, we analyzed the risk of forest insect spread with firewood and estimated related dispersal parameters for application in...