Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor
2014-01-01
Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have focused only on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
Estimating satellite pose and motion parameters using a novelty filter and neural net tracker
NASA Technical Reports Server (NTRS)
Lee, Andrew J.; Casasent, David; Vermeulen, Pieter; Barnard, Etienne
1989-01-01
A system for determining the position, orientation and motion of a satellite with respect to a robotic spacecraft using video data is advanced. This system utilizes two levels of pose and motion estimation: an initial system which provides coarse estimates of pose and motion, and a second system which uses the coarse estimates and further processing to provide finer pose and motion estimates. The present paper emphasizes the initial coarse pose and motion estimation subsystem. This subsystem utilizes novelty detection and filtering for locating novel parts and a neural net tracker to track these parts over time. Results of using this system on a sequence of images of a spin-stabilized satellite are presented.
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
Cost effectiveness of the Oregon quitline "free patch initiative".
Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John
2007-12-01
We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-intervention programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.
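The cost-effectiveness comparison described above reduces to simple ratio arithmetic. The Python sketch below is a minimal illustration of the cost-per-quit and incremental cost-per-life-year-saved calculations; all programme totals in it are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the incremental cost-effectiveness arithmetic used to
# compare two quitline programmes. All inputs are hypothetical placeholders.

def incremental_cost_per_life_year(cost_new, cost_old, ly_new, ly_old):
    """Incremental cost per life year saved of the new programme."""
    return (cost_new - cost_old) / (ly_new - ly_old)

# Hypothetical programme totals: cost in $, quits, life years saved per quit
pre = {"cost": 500_000, "quits": 500, "ly_per_quit": 1.8}
post = {"cost": 900_000, "quits": 2_000, "ly_per_quit": 1.8}

ly_pre, ly_post = (p["quits"] * p["ly_per_quit"] for p in (pre, post))
print(f"cost per quit: pre ${pre['cost'] / pre['quits']:.0f}, "
      f"post ${post['cost'] / post['quits']:.0f}")
print(f"incremental cost per life year saved: "
      f"${incremental_cost_per_life_year(post['cost'], pre['cost'], ly_post, ly_pre):.0f}")
```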
Prioritizing Scientific Initiatives.
ERIC Educational Resources Information Center
Bahcall, John N.
1991-01-01
Discussed is the way in which a limited number of astronomy research initiatives were chosen and prioritized based on a consensus of members from the Astronomy and Astrophysics Survey Committee. A list of recommended equipment initiatives and estimated costs is provided. (KR)
NREL Screens Universities for Solar and Battery Storage Potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
In support of the U.S. Department of Energy's SunShot initiative, NREL provided solar photovoltaic (PV) screenings in 2016 for eight universities seeking to go solar. NREL conducted an initial technoeconomic assessment of PV and storage feasibility at the selected universities using the REopt model, an energy planning platform that can be used to evaluate RE options, estimate costs, and suggest a mix of RE technologies to meet defined assumptions and constraints. NREL provided each university with customized results, including the cost-effectiveness of PV and storage, recommended system size, estimated capital cost to implement the technology, and estimated life cycle cost savings.
32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)
Code of Federal Regulations, 2011 CFR
2011-07-01
... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...
32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)
Code of Federal Regulations, 2014 CFR
2014-07-01
... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...
32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)
Code of Federal Regulations, 2013 CFR
2013-07-01
... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...
32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)
Code of Federal Regulations, 2012 CFR
2012-07-01
... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...
NASA Astrophysics Data System (ADS)
Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin
2011-12-01
Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction-mapping condition. It is found that, under sufficient backward iteration of the symbolic vectors, the values in phase space do not always converge on their initial values; this behaviour is characterized in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation under different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KRASNITZ,A.; VENUGOPALAN,R.
The dynamics of low-x partons in the transverse plane of a high-energy nuclear collision is classical, and therefore admits a fully non-perturbative numerical treatment. The authors report results of a recent study estimating the initial energy density in the central region of a collision. Preliminary estimates of the number of gluons per unit rapidity, and the initial transverse momentum distribution of gluons, are also provided.
Ray, Joel G.; Bartsch, Emily; Park, Alison L.; Shah, Prakesh S.; Dzakpasu, Susie
2017-01-01
Background: Hypertensive disorders, especially preeclampsia, are the leading reason for provider-initiated preterm birth. We estimated how universal acetylsalicylic acid (ASA) prophylaxis might reduce rates of provider-initiated preterm birth associated with preeclampsia and intrauterine growth restriction, which are related conditions. Methods: We performed a cohort study of singleton hospital births in 2013 in Canada, excluding Quebec. We estimated the proportion of term births and provider-initiated preterm births affected by preeclampsia and/or intrauterine growth restriction, and the corresponding mean maternal and newborn hospital length of stay. We projected the potential number of cases reduced and corresponding hospital length of stay if ASA prophylaxis lowered cases of preeclampsia and intrauterine growth restriction by a relative risk reduction (RRR) of 10% (lowest) or 53% (highest), as suggested by randomized clinical trials. Results: Of the 269 303 singleton live births and stillbirths in our cohort, 4495 (1.7%) were provider-initiated preterm births. Of the 4495, 1512 (33.6%) had a diagnosis of preeclampsia and/or intrauterine growth restriction. The mean maternal length of stay was 2.0 (95% confidence interval [CI] 2.0-2.0) days among term births unaffected by either condition and 7.3 (95% CI 6.1-8.6) days among provider-initiated preterm births with both conditions. The corresponding values for mean newborn length of stay were 1.9 (95% CI 1.8-1.9) days and 21.8 (95% CI 17.4-26.2) days. If ASA conferred a 53% RRR against preeclampsia and/or intrauterine growth restriction, 3365 maternal and 11 591 newborn days in hospital would be averted. If ASA conferred a 10% RRR, 635 maternal and 2187 newborn days in hospital would be averted. Interpretation: A universal ASA prophylaxis strategy could substantially reduce the burden of long maternal and newborn hospital stays associated with provider-initiated preterm birth. However, until there is compelling evidence that administration of ASA to all, or most, pregnant women reduces the risk of preeclampsia and/or intrauterine growth restriction, clinicians should continue to follow current clinical practice guidelines. PMID:28646095
Estimating mangrove in Florida: trials monitoring rare ecosystems
Mark J. Brown
2015-01-01
Mangrove species are keystone components in coastal ecosystems and are the interface between forest land and sea. Yet, estimates of their area have varied widely. Forest Inventory and Analysis (FIA) data from ground-based sample plots provide one estimate of the resource. Initial FIA estimates of the mangrove resource in Florida varied dramatically from those compiled...
Woolley, Thomas E; Belmonte-Beitia, Juan; Calvo, Gabriel F; Hopewell, John W; Gaffney, Eamonn A; Jones, Bleddyn
2018-06-01
To estimate, from experimental data, the retreatment radiation 'tolerances' of the spinal cord at different times after initial treatment. A model was developed to show the relationship between the biological effective doses (BEDs) for two separate courses of treatment, with the BED of each course being expressed as a percentage of the designated 'retreatment tolerance' BED value, denoted [Formula: see text] and [Formula: see text]. The primate data of Ang et al. (2001) were used to determine the fitted parameters. However, based on rodent data, recovery was assumed to commence 70 days after the first course was complete, and with a non-linear relationship to the magnitude of the initial BED (BED_init). The model, taking into account the above processes, provides estimates of the retreatment tolerance dose after different times. Extrapolations from the experimental data can provide conservative estimates for the clinic, with a lower acceptable myelopathy incidence. Care must be taken to convert the predicted [Formula: see text] value into a formal BED value and then a practical dose fractionation schedule. Used with caution, the proposed model allows estimations of retreatment doses with elapsed times ranging from 70 days up to three years after the initial course of treatment.
Assessing Tuberculosis Case Fatality Ratio: A Meta-Analysis
Straetemans, Masja; Glaziou, Philippe; Bierrenbach, Ana L.; Sismanidis, Charalambos; van der Werf, Marieke J.
2011-01-01
Background Recently, the tuberculosis (TB) Task Force Impact Measurement acknowledged the need to review the assumptions underlying the TB mortality estimates published annually by the World Health Organization (WHO). TB mortality is indirectly measured by multiplying estimated TB incidence with estimated case fatality ratio (CFR). We conducted a meta-analysis to estimate the TB case fatality ratio in TB patients having initiated TB treatment. Methods We searched for eligible studies in the PubMed and Embase databases through March 4th 2011 and by reference listing of relevant review articles. Main analyses included the estimation of the pooled percentages of: a) TB patients dying due to TB after having initiated TB treatment and b) TB patients dying during TB treatment. Pooled percentages were estimated using random effects regression models on the combined patient population from all studies. Main Results We identified 69 relevant studies, of which 22 provided data on mortality due to TB and 59 provided data on mortality during TB treatment. Among HIV infected persons the pooled percentage of TB patients dying due to TB was 9.2% (95% Confidence Interval (CI): 3.7%–14.7%) and among HIV uninfected persons 3.0% (95% CI: −1.2%–7.4%), based on the results of eight and three studies respectively providing data for this analysis. The pooled percentage of TB patients dying during TB treatment was 18.8% (95% CI: 14.8%–22.8%) among HIV infected patients and 3.5% (95% CI: 2.0%–4.92%) among HIV uninfected patients, based on the results of 27 and 19 studies respectively. Conclusion The results of the literature review are useful in generating prior distributions of CFR in countries with vital registration systems and have contributed towards revised estimates of TB mortality. This literature review did not provide us with all data needed for a valid estimation of TB CFR in TB patients initiating TB treatment. PMID:21738585
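Pooling proportions with a random-effects model, as described above, is commonly done with the DerSimonian-Laird estimator. The sketch below is a minimal Python version of that estimator; the study-level case fatality ratios and sample sizes in it are hypothetical, and the review's actual regression models may differ.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch for proportions,
# illustrating how study-level case fatality ratios could be combined.
import math

def dersimonian_laird(p, n):
    """Pool proportions p with sample sizes n under a random-effects model."""
    v = [pi * (1 - pi) / ni for pi, ni in zip(p, n)]   # within-study variances
    w = [1 / vi for vi in v]                           # fixed-effect weights
    ybar = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - ybar) ** 2 for wi, pi in zip(w, p))  # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)            # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]             # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical study-level CFRs (proportion dying) and study sizes:
pooled, ci = dersimonian_laird([0.12, 0.07, 0.10], [150, 300, 220])
print(f"Pooled CFR {pooled:.1%}, 95% CI {ci[0]:.1%} to {ci[1]:.1%}")
```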
Quantitative Compactness Estimates for Hamilton-Jacobi Equations
NASA Astrophysics Data System (ADS)
Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.
2016-02-01
We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that is associated with the given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ for the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \geq 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data
NASA Technical Reports Server (NTRS)
Vanderesch, A. H.
1972-01-01
Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects of the baseline motor are initially considered. From the baseline, sufficient data are obtained to provide cost estimates of alternate approaches.
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
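A trailing 5-year moving average of annual panel estimates, as referenced above, can be written in a few lines. This Python sketch is a minimal illustration with placeholder annual estimates.

```python
# Minimal sketch of the 5-year moving average estimator for annual
# inventory data: each year's value is the mean of the most recent five
# annual estimates. The series below is an illustrative placeholder.

def moving_average(series, window=5):
    """Trailing moving average; returns None until a full window exists."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window:i + 1]) / window)
    return out

annual_estimates = [102.0, 98.5, 101.2, 99.8, 100.4, 103.1, 97.9]
print(moving_average(annual_estimates))
```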
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of that method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method: symbolic errors are corrected using the backward vector and the values estimated under different symbols, so that the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulence in coupled map lattices.
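As background to the estimation method, the sketch below simulates a small coupled map lattice (assuming nearest-neighbor diffusive coupling of logistic maps, one common CML form) and produces the binary symbolic vector sequence that such an estimator would work from. The parameters and threshold are illustrative; the Letter's actual backward-iteration correction scheme is not reproduced here.

```python
# Minimal sketch: forward iteration of a diffusively coupled logistic map
# lattice and symbolization of its output. Illustrative parameters only.
import random

def logistic(x, a=4.0):
    return a * x * (1.0 - x)

def cml_step(x, eps=0.1):
    """x_{n+1}(i) = (1-eps) f(x_n(i)) + eps/2 [f(x_n(i-1)) + f(x_n(i+1))]."""
    n = len(x)
    fx = [logistic(xi) for xi in x]
    return [(1 - eps) * fx[i] + 0.5 * eps * (fx[(i - 1) % n] + fx[(i + 1) % n])
            for i in range(n)]

def symbolize(x, threshold=0.5):
    """Binary symbolic vector: 1 where a site exceeds the threshold."""
    return [int(xi > threshold) for xi in x]

random.seed(0)
x = [random.random() for _ in range(8)]   # the unknown initial vector
symbols = []
for _ in range(10):
    x = cml_step(x)
    symbols.append(symbolize(x))
print(symbols[:3])   # symbolic vector sequence available to an estimator
```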
The Pregnant Women with HIV Attitude Scale: development and initial psychometric evaluation.
Tyer-Viola, Lynda A; Duffy, Mary E
2010-08-01
This paper is a report of the development and initial psychometric evaluation of the Pregnant Women with HIV Attitude Scale. Previous research has identified that attitudes toward persons with HIV/AIDS have been judgmental and could affect clinical care and outcomes. Stigma towards persons with HIV has persisted as a barrier to nursing care globally. Women are more vulnerable during pregnancy. An instrument to specifically measure obstetric care provider's attitudes toward this population is needed to target identified gaps in providing respectful care. Existing literature and instruments were analysed and two existing measures, the Attitudes about People with HIV Scale and the Attitudes toward Women with HIV Scale, were combined to create an initial item pool to address attitudes toward HIV-positive pregnant women. The data were collected in 2003 with obstetric nurses attending a national conference in the United States of America (N = 210). Content validity was used for item pool development and principal component analysis and analysis of variance were used to determine construct validity. Reliability was analysed using Cronbach's Alpha. The new measure demonstrated high internal consistency (alpha estimates = 0.89). Principal component analysis yielded a two-component structure that accounted for 45% of the total variance: Mothering-Choice (alpha estimates = 0.89) and Sympathy-Rights (alpha estimates = 0.72). These data provided initial evidence of the psychometric properties of the Pregnant Women with HIV Attitude Scale. Further analysis is required of the validity of the constructs of this scale and its reliability with various obstetric care providers.
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; Reeves, John D.; Williams-Byrd, Julie A.; Greenberg, Marc; Comstock, Doug; Olds, John R.; Wallace, Jon; DePasquale, Dominic; Schaffer, Mark
2013-01-01
NASA is investing in new technologies spanning 14 primary technology roadmap areas as well as aeronautics. Understanding the cost of researching and developing these technologies, and the time it takes to increase their maturity, is important to supporting ongoing and future NASA missions. Overall, technology estimating may help guide technology investment strategies, improve evaluation of technology affordability, and aid decision support. This research summarizes the framework development of a Technology Estimating process in which four technology roadmap areas were selected for study. The framework includes definitions of terms, a discussion of narrowing the focus from 14 NASA Technology Roadmap areas to four, and further refinement to include technologies in the TRL range of 2 to 6. Also included is a discussion of the evaluation of 20 unique technology parameters that were initially identified and then subsequently reduced for use in characterizing these technologies. A discussion of the data acquisition effort and the criteria established for data quality is provided. Findings obtained during the research include identified gaps and a description of a spreadsheet-based estimating tool initiated as part of the Technology Estimating process.
Weiser, John; Brooks, John T.; Skarbinski, Jacek; West, Brady T.; Duke, Christopher C.; Gremel, Garrett W.; Beer, Linda
2017-01-01
Introduction HIV treatment guidelines recommend initiating antiretroviral therapy (ART) regardless of CD4 cell (CD4) count, barring contraindications or barriers to treatment. An estimated 6% of persons receiving HIV care in 2013 were not prescribed ART. We examined reasons for this gap in the care continuum. Methods During 2013–2014, we surveyed a probability sample of HIV care providers, of whom 1234 returned surveys (64.0% adjusted response rate). We estimated percentages of providers who followed guidelines and their characteristics, and who deferred ART prescribing for any reason. Results Barring contraindications, 71.2% of providers initiated ART regardless of CD4 count. Providers less likely to initiate had caseloads ≤20 vs. >200 patients [adjusted prevalence ratios (aPR) 0.69, 95% confidence interval (CI): 0.47 to 1.02, P = 0.03], practiced at non–Ryan White HIV/AIDS Program-funded facilities (aPR 0.85, 95% CI: 0.74 to 0.98, P = 0.02), or reported pharmaceutical assistance programs provided insufficient medication to meet patients' needs (aPR 0.79, 95% CI: 0.65 to 0.98, P = 0.02). In all, 17.0% never deferred prescribing ART, 69.6% deferred for 1%–10% of patients, and 13.3% deferred for >10%. Among providers who had deferred ART, 59.4% cited patient refusal as a reason in >50% of cases, 31.1% reported adherence concerns because of mental health disorders or substance abuse, and 21.4% reported adherence concerns because of social problems, e.g., homelessness, as factors in >50% of cases when deferring ART. Conclusions An estimated 29% of HIV care providers had not adopted recommendations to initiate ART regardless of CD4 count, barring contraindications, or barriers to treatment. Low-volume providers and those at non–Ryan White HIV/AIDS Program-funded facilities were less likely to follow this guideline. Among all providers, leading reasons for deferring ART included patient refusal and adherence concerns. PMID:28002186
Cooper, Caren B
2014-09-01
Accurate phenology data, such as the timing of migration and reproduction, is important for understanding how climate change influences birds. Given contradictory findings among localized studies regarding mismatches in timing of reproduction and peak food supply, broader-scale information is needed to understand how whole species respond to environmental change. Citizen science-participation of the public in genuine research-increases the geographic scale of research. Recent studies, however, showed weekend bias in reported first-arrival dates for migratory songbirds in databases created by citizen-science projects. I investigated whether weekend bias existed for clutch-initiation dates for common species in US citizen-science projects. Participants visited nests on Saturdays more frequently than other days. When participants visited nests during the laying stage, biased timing of visits did not translate into bias in estimated clutch-initiation dates, based on back-dating with the assumption of one egg laid per day. Participants, however, only visited nests during the laying stage for 25% of attempts of cup-nesting species and 58% of attempts in nest boxes. In some years, in lieu of visit data, participants provided their own estimates of clutch-initiation dates and were asked "did you visit the nest during the laying period?" Those participants who answered the question provided estimates of clutch-initiation dates with no day-of-week bias, irrespective of their answer. Those who did not answer the question were more likely to estimate clutch initiation on a Saturday. Data from citizen-science projects are useful in phenological studies when temporal biases can be checked and corrected through protocols and/or analytical methods.
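The back-dating rule described above is simple date arithmetic. A minimal Python sketch, assuming one egg laid per day and a visit during the laying stage:

```python
# Minimal sketch of back-dating a clutch-initiation date from a nest
# visit during egg laying, under the stated one-egg-per-day assumption.
from datetime import date, timedelta

def clutch_initiation(visit_date: date, eggs_observed: int) -> date:
    """First-egg date, assuming one egg laid per day and a visit that
    occurred during the laying stage."""
    return visit_date - timedelta(days=eggs_observed - 1)

# Example: 3 eggs seen on a Saturday visit implies laying began Thursday.
print(clutch_initiation(date(2014, 6, 7), eggs_observed=3))  # 2014-06-05
```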
NASA Technical Reports Server (NTRS)
Peters, C.; Kampe, F. (Principal Investigator)
1980-01-01
The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) is discussed. HISSE is based on a normal mixture model and is designed to take advantage of spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. The HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from typical classify and count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
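The sequential predict/update cycle at the heart of the Kalman-Bucy approach can be illustrated in scalar form. The following Python sketch uses a hypothetical one-dimensional linear system with illustrative noise levels; it is a toy analogue, not an NWP assimilation system.

```python
# Minimal scalar Kalman filter sketch: the sequential predict/update
# cycle underlying Kalman-Bucy data assimilation. Illustrative numbers.
import random

def kalman_step(x_est, p_est, z, a=0.95, q=0.01, r=0.25):
    """One cycle for the model x_{k+1} = a x_k + w, observation z_k = x_k + v."""
    x_pred = a * x_est                  # predict state
    p_pred = a * p_est * a + q          # predict error variance
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # update with observation z
    p_new = (1 - k) * p_pred
    return x_new, p_new

random.seed(1)
truth, x_est, p_est = 1.0, 0.0, 1.0
for step in range(10):
    truth = 0.95 * truth + random.gauss(0, 0.1)   # true (hidden) state
    z = truth + random.gauss(0, 0.5)              # noisy observation
    x_est, p_est = kalman_step(x_est, p_est, z)
    print(f"step {step}: truth={truth:+.3f} estimate={x_est:+.3f}")
```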
Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.
2017-12-01
It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
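Linearized inversion under positivity constraints, as used above for the slip distributions, can be illustrated with non-negative least squares. The sketch below is a minimal Python example on synthetic data; the Green's function matrix is random filler standing in for real elastostatic kernels.

```python
# Minimal sketch of linearized inversion under positivity constraints:
# solve G m = d for non-negative slip m, as in finite-fault modeling.
# The Green's function matrix here is random filler, not real physics.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_data, n_patches = 40, 10
G = rng.normal(size=(n_data, n_patches))                     # stand-in kernels
m_true = np.clip(rng.normal(1.0, 0.5, n_patches), 0, None)   # non-negative slip
d = G @ m_true + rng.normal(0, 0.05, n_data)                 # noisy synthetic data

m_est, residual_norm = nnls(G, d)
print("true slip:", np.round(m_true, 2))
print("estimated:", np.round(m_est, 2))
```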
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
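A minimal random-walk Metropolis sketch of the MCMC idea, applied to a single Cole-Cole parameter (the chargeability m) with the other parameters held fixed and a flat prior, is shown below in Python. The Cole-Cole form follows the standard Pelton parameterization, but all numbers are illustrative; the paper samples the full parameter set.

```python
# Minimal Metropolis sketch for one Cole-Cole parameter (chargeability m).
# Synthetic data, fixed rho0/tau/c, and a flat prior keep the example short.
import numpy as np

def cole_cole(omega, rho0=100.0, m=0.3, tau=0.01, c=0.5):
    """Complex resistivity: rho0 * (1 - m*(1 - 1/(1 + (i omega tau)^c)))."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

rng = np.random.default_rng(0)
omega = np.logspace(-1, 3, 30)
data = cole_cole(omega, m=0.3) + rng.normal(0, 0.5, omega.size)  # noisy synthetic

def log_like(m):
    if not 0.0 < m < 1.0:                 # flat prior on (0, 1)
        return -np.inf
    resid = np.abs(data - cole_cole(omega, m=m))
    return -0.5 * np.sum((resid / 0.5) ** 2)

m_cur, samples = 0.5, []
for _ in range(5000):
    m_prop = m_cur + rng.normal(0, 0.02)  # random-walk proposal
    if np.log(rng.uniform()) < log_like(m_prop) - log_like(m_cur):
        m_cur = m_prop                    # accept
    samples.append(m_cur)

post = np.array(samples[1000:])           # discard burn-in
print(f"posterior mean m = {post.mean():.3f} +/- {post.std():.3f}")
```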
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
40 CFR 63.2860 - What notifications must I submit and when?
Code of Federal Regulations, 2013 CFR
2013-07-01
.... (2) The notification of actual startup date must also include whether you have elected to operate under an initial startup period subject to § 63.2850(c)(2) and provide an estimate and justification for the anticipated duration of the initial startup period. (c) Significant modification notifications...
40 CFR 63.2860 - What notifications must I submit and when?
Code of Federal Regulations, 2014 CFR
2014-07-01
.... (2) The notification of actual startup date must also include whether you have elected to operate under an initial startup period subject to § 63.2850(c)(2) and provide an estimate and justification for the anticipated duration of the initial startup period. (c) Significant modification notifications...
40 CFR 63.2860 - What notifications must I submit and when?
Code of Federal Regulations, 2012 CFR
2012-07-01
.... (2) The notification of actual startup date must also include whether you have elected to operate under an initial startup period subject to § 63.2850(c)(2) and provide an estimate and justification for the anticipated duration of the initial startup period. (c) Significant modification notifications...
Holford, Theodore R; Levy, David T; Meza, Rafael
2016-04-01
Characterizing smoking history patterns summarizes life course exposure for birth cohorts, essential for evaluating the impact of tobacco control on health. Limited attention has been given to patterns among African Americans. Life course smoking histories of African Americans and whites were estimated beginning with the 1890 birth cohort. Estimates of smoking initiation and cessation probabilities, and intensity can be used as a baseline for studying smoking intervention strategies that target smoking exposure. US National Health Interview Surveys conducted from 1965 to 2012 yielded cross-sectional information on current smoking behavior among African Americans and whites. Additional detail for smokers including age at initiation, age at cessation and smoking intensity were available in some surveys and these were used to construct smoking histories for participants up to the date that they were interviewed. Age-period-cohort models with constrained natural splines provided estimates of current, former and never-smoker prevalence in cohorts beginning in 1890. This approach yielded yearly estimates of initiation, cessation and smoking intensity by age for each birth cohort. Smoking initiation probabilities tend to be lower among African Americans compared to whites, and cessation probabilities also were generally lower. Higher initiation leads to higher smoking prevalence among whites in younger ages, but lower cessation leads to higher prevalence at older ages in blacks, when adverse health effects of smoking become most apparent. These estimates provide a summary that can be used to better understand the effects of changes in smoking behavior following publication of the Surgeon General's Report in 1964. A novel method of estimating smoking histories was applied to data from the National Health Interview Surveys, which provided an extensive summary of the smoking history in this population following publication of the Surgeon General's Report in 1964. The results suggest that some of the existing disparities in smoking-related disease may be due to the lower cessation rates in African Americans compared to whites. However, the number of cigarettes smoked is also lower among African Americans. Further work is needed to determine mechanisms by which smoking duration and intensity can account for racial disparities in smoking-related diseases.
NASA Technical Reports Server (NTRS)
Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter
2015-01-01
Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects, like MERRA. The inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short lived, lasting only a few hours, and emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases show good agreement with observations, e.g. Okmok (2008); for other eruptions the observed initial SO2 mass is half of that in the simulations, e.g. Sierra Negra (2005); in other cases the initial SO2 amount agrees with the observations but the dispersal rates differ markedly, e.g. Soufriere Hills (2006). In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term forecasting (24-h) of volcanic clouds. Back trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates on a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, which are then compared to their corresponding entries in the AeroCOM volcanic emission inventory. The nature of these mixed results is discussed with respect to the source term estimates.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... in the public docket without change including any personal information provided, unless the comment...: 144 (total). Frequency of response: Initially, quarterly, and semiannually. Total estimated burden: 23... year), includes $800 annualized capital or operation & maintenance costs. Changes in the Estimates...
Specifying and Refining a Complex Measurement Model.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…
Initial dynamic load estimates during configuration design
NASA Technical Reports Server (NTRS)
Schiff, Daniel
1987-01-01
This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.
Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino
2018-02-22
CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest to both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventorial data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
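The aircraft mass-balance method mentioned above integrates the background-subtracted enhancement times the transect-normal wind over a downwind cross-section. Below is a minimal Python sketch with placeholder transect values, assuming a single uniform wind and grid spacing, which is a simplification of the real processing.

```python
# Minimal sketch of the aircraft mass-balance method: a point source's
# emission rate estimated from a downwind crosswind transect.
# All transect values are illustrative placeholders.
import numpy as np

def mass_balance_flux(conc_mgm3, background_mgm3, wind_normal_ms, dy_m, dz_m):
    """Emission rate (g/s) from a crosswind curtain of measurements.

    conc_mgm3:       CO2 concentration along the transect (mg/m^3)
    background_mgm3: upwind background concentration (mg/m^3)
    wind_normal_ms:  wind speed normal to the transect (m/s)
    dy_m, dz_m:      horizontal and vertical grid spacing (m)
    """
    enhancement = np.clip(conc_mgm3 - background_mgm3, 0, None)  # mg/m^3
    flux_mg_s = np.sum(enhancement * wind_normal_ms) * dy_m * dz_m
    return flux_mg_s / 1000.0                                    # g/s

conc = np.array([730.0, 745.0, 760.0, 748.0, 732.0])  # along-transect samples
print(f"{mass_balance_flux(conc, 728.0, 4.0, 100.0, 50.0):.0f} g/s")
```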
EA 18G Growler Aircraft (EA 18G)
2015-12-01
APB Breach — Confidence Level: confidence level of cost estimate for current APB: 50%. The current estimate recommendation aims to provide sufficient resources to execute the program under normal conditions, encountering average levels of technical... [Remainder is flattened cost-table residue (TY $M): Initial PAUC Development Estimate (93.573), PAUC change categories (Econ, Qty, Sch, Eng, Est, Oth, Spt, Total), and PAUC Production Estimate; remaining values not recoverable.]
Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar
NASA Astrophysics Data System (ADS)
Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan
2016-09-01
A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both, measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
Method and system for controlling a permanent magnet machine
Walters, James E.
2003-05-20
Method and system for controlling the start of a permanent magnet machine are provided. The method assigns a parameter value indicative of an estimated initial rotor position of the machine. It then energizes the machine with a level of current sufficiently high to start rotor motion in a desired direction, provided the initial rotor position estimate is sufficiently close to the actual rotor position of the machine. A sensing action determines whether any incremental changes in rotor position occur in response to the energizing action. In the event no changes in rotor position are sensed, the estimated rotor position is incrementally adjusted by a first set of angular values until changes in rotor position are sensed. In the event changes in rotor position are sensed, a rotor alignment signal is provided as rotor motion continues. The alignment signal aligns the estimated rotor position relative to the actual rotor position. This alignment action allows for operating the machine over a wide speed range.
ERIC Educational Resources Information Center
Smith, Gary R.
The purpose of this study was to provide an empirical estimate of the number of newly certified teachers in Michigan who did not enter teaching in the year in which they were initially certified and also to estimate the time this group remained active in seeking teaching positions. The study was limited by lack of data on private school teacher…
Autonomous navigation system based on GPS and magnetometer data
NASA Technical Reports Server (NTRS)
Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)
2004-01-01
This invention is drawn to an autonomous navigation system using Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Eventually the magnetometer-GPS configuration enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention would provide for black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input with high accuracy and reliability.
Analytic model to estimate thermonuclear neutron yield in z-pinches using the magnetic Noh problem
NASA Astrophysics Data System (ADS)
Allen, Robert C.
The objective was to build a model which could be used to estimate neutron yield in pulsed z-pinch experiments, benchmark future z-pinch simulation tools and to assist scaling for breakeven systems. To accomplish this, a recent solution to the magnetic Noh problem was utilized which incorporates a self-similar solution with cylindrical symmetry and azimuthal magnetic field (Velikovich, 2012). The self-similar solution provides the conditions needed to calculate the time dependent implosion dynamics from which batch burn is assumed and used to calculate neutron yield. The solution to the model is presented. The ion densities and time scales fix the initial mass and implosion velocity, providing estimates of the experimental results given specific initial conditions. Agreement is shown with experimental data (Coverdale, 2007). A parameter sweep was done to find the neutron yield, implosion velocity and gain for a range of densities and time scales for DD reactions and a curve fit was done to predict the scaling as a function of preshock conditions.
NASA Astrophysics Data System (ADS)
Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.
2009-04-01
The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimate at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the ongoing earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real-time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time. These initial estimate times are dominated by the time for 4 acceptable picks to be available, and thus are heavily influenced by the station density in a given region; they also include the effects of telemetry delay, which ranges between 6 and 15 seconds at the SCSN, and processing time (~1 second). Other relevant performance statistics include: 95% of initial real-time location estimates are within 20 km of the actual epicenter, and 97% of initial real-time magnitude estimates are within one magnitude unit of the network magnitude. Extension of real-time VS operations to networks in Northern California is an ongoing effort. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMS). We discuss the performance of the VS algorithm on these datasets in terms of magnitude, location, and ground motion estimation.
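The Bayesian combination described above can be illustrated with a grid-based magnitude update: a Gutenberg-Richter prior combined with a Gaussian likelihood for an observed peak amplitude. The scaling coefficients, noise level, and observation in this Python sketch are hypothetical, not the VS algorithm's calibrated relationships.

```python
# Minimal grid-based sketch of a Bayesian magnitude update: a
# Gutenberg-Richter prior combined with a Gaussian amplitude likelihood.
# All coefficients and the observation are illustrative placeholders.
import numpy as np

mags = np.arange(3.0, 8.01, 0.01)

# Prior: Gutenberg-Richter relative frequency, p(M) ~ 10^(-b M), with b = 1.
prior = 10.0 ** (-1.0 * mags)
prior /= prior.sum()

def likelihood(log_amp_obs, mags, a=0.8, c=-4.0, sigma=0.3):
    """Gaussian likelihood of an observed log peak amplitude around a
    hypothetical linear magnitude scaling a*M + c."""
    predicted = a * mags + c
    return np.exp(-0.5 * ((log_amp_obs - predicted) / sigma) ** 2)

posterior = prior * likelihood(0.5, mags)   # Bayes: posterior ~ prior * likelihood
posterior /= posterior.sum()
print(f"MAP magnitude estimate: {mags[np.argmax(posterior)]:.2f}")
```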
McClellan, Sean R; Panattoni, Laura; Chan, Albert S; Tai-Seale, Ming
2016-03-01
Few studies have examined the association between patient-initiated electronic messaging (e-messaging) and clinical outcomes in fee-for-service settings. To estimate the association between patient-initiated e-messages and quality of care among patients with diabetes and hypertension. Longitudinal observational study from 2009 to 2013. In March 2011, the medical group eliminated a $60/year patient user fee for e-messaging and established a provider payment of $3-5 per patient-initiated e-message. Quality of care for patients initiating e-messages was compared before and after March 2011, relative to nonmessaging patients. Propensity score weighting accounted for differences between e-messaging and nonmessaging patients in generalized estimating equations. The setting was a large multispecialty practice in California that compensates providers on a fee-for-service basis. Patients with diabetes (N=4232) or hypertension (N=15,463) who had activated their online portal but not e-messaged before e-messaging became free. Quality of care included HEDIS-based process measures for hemoglobin (Hb) A1c, blood pressure, low-density lipoprotein (LDL), nephropathy, and retinopathy tests, and outcome measures for HbA1c, blood pressure, and LDL. E-messaging was measured as counts of patient-initiated e-message threads sent to providers. Patients were categorized into quartiles by e-messaging frequency. The probability of annually completing indicated tests increased by 1%-7% for e-messaging patients, depending on the outcome and e-messaging frequency. E-messaging was associated with small improvements in HbA1c and LDL for some patients with diabetes. Patient-initiated e-messaging may increase the likelihood of completing recommended tests, but may not be sufficient to improve clinical outcomes for most patients with diabetes or hypertension without additional interventions.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, such as aerodynamic coefficients, can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer improves significantly.
Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F
2018-01-01
Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%-4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
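Where multiple sources informed a single subpopulation, the weighted average described above is a standard inverse-variance pooling; a minimal sketch follows. The prevalence values and variances below are hypothetical, not the study's data.

```python
import numpy as np

def inverse_variance_weighted_mean(estimates, variances):
    """Pool independent estimates with weights proportional to 1/variance."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_var = 1.0 / np.sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical seroprevalence estimates (proportions) for one subpopulation,
# drawn from three data sources, with their sampling variances.
est = [0.021, 0.028, 0.024]
var = [1e-5, 4e-5, 2e-5]
pooled, pooled_var = inverse_variance_weighted_mean(est, var)
print(f"pooled prevalence = {pooled:.4f}, SE = {pooled_var**0.5:.4f}")
```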
1982-09-01
characteristics) for one or more aircraft which had been temporarily excluded from the data base. Provided these results proved satisfactory, all of the... (remainder of record: Appendix G, Factor Analysis)
Changes in Soil Carbon Storage After Cultivation
Mann, L. K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2004-01-01
Previously published data from 625 paired soil samples were used to predict carbon in cultivated soil as a function of initial carbon content. A 30-cm sampling depth provided a less variable estimate (r2 = 0.9) of changes in carbon than a 15-cm sampling depth (r2 = 0.6). Regression analyses of changes in carbon storage in relation to years of cultivation confirmed that the greatest rates of change occurred in the first 20 y. An initial carbon effect was present in all analyses: soils very low in carbon tended to gain slight amounts of carbon after cultivation, but soils high in carbon lost at least 20% during cultivation. Carbon losses from most agricultural soils are estimated to average less than 20% of initial values or less than 1.5 kg/m2 within the top 30 cm. These estimates should not be applied to depths greater than 30 cm and would be improved with more bulk density information and equivalent sample volumes.
Contemporary post surgical management of differentiated thyroid carcinoma.
Tala, H; Tuttle, R M
2010-08-01
Risk assessment is the cornerstone of contemporary management of thyroid cancer. Following thyroid surgery, an initial risk assessment of recurrence and disease-specific mortality is made using important intra-operative findings, histologic characteristics of the tumor, molecular profile of the tumor, post-operative serum thyroglobulin and any available cross-sectional imaging studies. This initial risk assessment is used to guide recommendations regarding the need for remnant ablation, external beam irradiation, systemic therapy, degree of TSH suppression, and follow-up disease detection strategy over the first 2 years after initial therapy. While this initial risk stratification provides valuable information, it is a static representation of the patient in the first few weeks post-operatively and does not change over time. Depending on how the patient responds to our initial therapies, the risk of recurrence and death may change significantly during follow-up. In order to account for differences in response to therapy in individual patients and to incorporate the impact of treatment on our initial risk estimates, we recommend a re-stratification of risk at the 2-year point of follow-up. This re-stratification provides an updated risk estimate that can be used to guide ongoing management recommendations including the frequency and intensity of follow-up, degree of ongoing TSH suppression, and need for additional therapies. Ongoing management recommendations must be tailored to realistic, evolving risk estimates that are actively updated during follow-up. By individualizing therapy on the basis of initial and ongoing risk assessments, we can maximize the beneficial effects of aggressive therapy in patients with thyroid cancer who are likely to benefit from it, while minimizing potential complications and side effects in low-risk patients destined to have a full, healthy, and productive life after minimal therapeutic intervention. Copyright © 2010 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koehler, J.; Sylte, W.W.
1997-12-31
The deposition of atmospheric polyaromatic hydrocarbons (PAHs) into San Diego Bay was evaluated at an initial study level. This study was part of an overall initial estimate of PAH waste loading to San Diego Bay from all environmental pathways. The study of air pollutant deposition to water bodies has gained increased attention both as a component of Total Maximum Daily Load (TMDL) determinations required under the Clean Water Act and pursuant to federal funding authorized by the 1990 Clean Air Act Amendments to study the atmospheric deposition of hazardous air pollutants to the Great Waters, which includes coastal waters. To date, studies under the Clean Air Act have included the Great Lakes, Chesapeake Bay, Lake Champlain, and Delaware Bay. Given the limited resources of this initial study for San Diego Bay, the focus was on maximizing the use of existing data and information. The approach developed included the statistical evaluation of measured atmospheric PAH concentrations in the San Diego area, the extrapolation of EPA study results of atmospheric PAH concentrations above Lake Michigan to supplement the San Diego data, the estimation of dry and wet deposition with published calculation methods considering local wind and rainfall data, and the comparison of resulting PAH deposition estimates for San Diego Bay with estimated PAH emissions from ship and commercial boat activity in the San Diego area. The resulting PAH deposition and ship emission estimates were within the same order of magnitude. Since a significant contributor to the atmospheric deposition of PAHs to the Bay is expected to be from shipping traffic, this result provides a check on the order of magnitude of the PAH deposition estimate. Also, when compared against initial estimates of PAH loading to San Diego Bay from other environmental pathways, the atmospheric deposition pathway appears to be a significant contributor.
Carbon Fluxes at the AmazonFACE Research Site
NASA Astrophysics Data System (ADS)
Norby, R.; De Araujo, A. C.; Cordeiro, A. L.; Fleischer, K.; Fuchslueger, L.; Garcia, S.; Hofhansl, F.; Garcia, M. N.; Grandis, A.; Oblitas, E.; Pereira, I.; Pieres, N. M.; Schaap, K.; Valverde-Barrantes, O.
2017-12-01
The free-air CO2 enrichment (FACE) experiment to be implemented in the Amazon rain forest requires strong pretreatment characterization so that eventual responses to elevated CO2 can be detected against a background of substantial species diversity and spatial heterogeneity. Two 30-m diameter plots have been laid out for initial characterization in a 30-m tall, old-growth, terra firme forest. Intensive measurements have been made of aboveground tree growth, leaf area, litter production, and fine-root production; these data sets together support initial estimates of plot-scale net primary productivity (NPP). Leaf-level measurements of photosynthesis throughout the canopy and over a daily time course in both the wet and dry season, coupled with meteorological monitoring, support an initial estimate of gross primary productivity (GPP) and carbon-use efficiency (CUE = NPP/GPP). Monthly monitoring of CO2 efflux from the soil, partitioned into autotrophic and heterotrophic components, supports an estimate of net ecosystem production (NEP). Our estimate of NPP in the two plots (1.2 and 1.4 kg C m-2 yr-1) is 16-38% greater than previously reported for the site, primarily due to our more complete documentation of fine-root production, including root production deeper than 30 cm. The estimate of CUE of the ecosystem (0.52) is greater than most others in Amazonia; this discrepancy reflects either large uncertainty in GPP, which was derived from just two days of measurement, or underestimates of the fine-root component of NPP in previous studies. Estimates of NEP (0 and 0.14 kg C m-2 yr-1) are generally consistent with a landscape-level estimate from flux tower data. Our C flux estimates, albeit very preliminary, provide initial benchmarks for a 12-model a priori evaluation of this forest. The model means of GPP, NPP, and NEP are mostly consistent with our field measurements. Predictions of C flux responses to elevated CO2 from the models become hypotheses to be tested in the FACE experiment. Although carbon fluxes on small plots cannot be expected to represent the fluxes across the wider and more diverse region, our integrated measurements, coupled with a model framework, provide a strong foundation for understanding the mechanistic basis of responses and for extending results of experimental CO2 fertilization to the wider region.
Using the knowledge-to-action framework to guide the timing of dialysis initiation.
Sood, Manish M; Manns, Braden; Nesrallah, Gihad
2014-05-01
The optimal time at which to initiate chronic dialysis remains unknown. Using a contemporary knowledge translation approach (the knowledge-to-action framework), a pan-Canadian collaboration (CANN-NET) set out to study the scope of the problem, then develop and disseminate evidence-based guidelines addressing the timing of dialysis initiation. The purpose of this review is to summarize the key findings and describe the planned Canadian knowledge translation strategy for improving knowledge and practices pertaining to the timing of dialysis initiation. New research has provided considerable insights regarding the initiation of dialysis. A Canadian cohort study identified significant variation in the estimated glomerular filtration rate level at dialysis initiation, and a survey of providers identified related knowledge gaps that might be amenable to knowledge translation interventions. A recent knowledge synthesis/guideline concluded that early dialysis initiation is costly and provides no measurable clinical benefits. A systematic knowledge translation intervention including a multifaceted approach may aid in reducing variation in practice and improving the quality of care. Utilizing the knowledge-to-action framework, we identified practice variation and key barriers to the optimal timing of dialysis initiation that may be amenable to knowledge translation strategies.
Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Dong, Peiyu; Hu, Caibo; Shi, Yaolin
2015-04-01
The India-Eurasia collision produces N-S compression and results in large thrust faults along the southern edge of the Tibetan Plateau. Differential eastward flow of the lower crust of the plateau leads to large strike-slip faults and normal faults within the plateau. From 1904 to 2014, more than 30 earthquakes of Mw > 6.5 occurred sequentially in this distinctive tectonic environment. How did the stresses evolve during the last 110 years, and how did the earthquakes interact with each other? Can this knowledge help us to forecast future seismic hazards? In this study, we simulated the evolution of the stress field and the earthquake sequence in the Tibetan Plateau over the last 110 years with a 2-D finite element model. Given an initial state of stress, the boundary condition was constrained by present-day GPS observations, assumed to apply at a constant rate during the 110 years. We calculated the stress evolution year by year, and an earthquake was deemed to occur wherever the stress exceeded the crustal strength. The stress change due to each large earthquake in the sequence was calculated and contributed to the stress evolution. A key issue is the choice of the initial stress state of the model, which is actually unknown. Usually, in studies of earthquake triggering, the initial stress is assumed to be zero, and only the stress changes caused by large earthquakes - the Coulomb failure stress changes (ΔCFS) - are calculated. To some extent, this simplified method is a powerful tool because it can reveal which fault, or which part of a fault, becomes relatively more risky or safer. Nonetheless, it does not utilize all the information available to us. The earthquake sequence reveals, though far from completely, some information about the stress state in the region. If the entire region is close to a self-organized critical or subcritical state, earthquake stress drops provide a lower-limit estimate of the initial stress. For locations where no earthquakes occurred during the period, the initial stress must be lower than a certain value. For locations where large earthquakes occurred during the 110 years, the initial stresses can be inverted if the strength is estimated and the tectonic loading is assumed constant. Therefore, although the initial stress state is unknown, we can estimate a plausible range for it. In this study, we estimated a reasonable range of initial stress and then, based on the Coulomb-Mohr criterion, regenerated the earthquake sequence, starting from the Daofu earthquake of 1904. We calculated the stress field evolution of the sequence, considering both the tectonic loading and the interaction between earthquakes. Ultimately we obtained a sketch of the present-day stress. Of course, a single model with a particular initial stress is just one possible model, so a potential seismic hazard distribution based on a single model is not convincing. We therefore tested hundreds of possible initial stress states, all of which reproduce the historical earthquake sequence, and summarized the calculated probabilities of future seismic activity. Although we cannot provide the exact future state, we can narrow the estimate of the regions that have a high probability of risk. Our primary results indicate that the Xianshuihe fault and the adjacent area form one such zone, with higher risk than other regions. During 2014, six earthquakes (M > 5.0) occurred in this region, which corresponds with our result to some degree.
We emphasized the importance of the initial stress field for the earthquake sequence and provided a probabilistic assessment of future seismic hazards. This study may bring new insights to the estimation of initial stress, earthquake triggering, and stress field evolution.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of source height estimation. Further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
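The two-stage structure (a coarse fix, here standing in for MUSIC, refined by Levenberg-Marquardt least squares) can be sketched with a toy free-field amplitude model. The paper's actual cost function also includes the ground-reflected path and the unknown impedance, which are omitted here; microphone positions and the starting point are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model: free-field 1/r amplitude at each microphone.
mics = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1], [0.5, 0.5, 1]])
true_src = np.array([2.0, 3.0, 1.5])
amp = 1.0 / np.linalg.norm(mics - true_src, axis=1)  # synthetic observations

def residuals(src):
    """Misfit between modeled and observed amplitudes."""
    return 1.0 / np.linalg.norm(mics - src, axis=1) - amp

coarse = np.array([1.5, 2.5, 1.0])   # stand-in for the MUSIC initial estimate
fit = least_squares(residuals, coarse, method="lm")  # Levenberg-Marquardt
print(fit.x)  # refined source position
```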
Henselmans, I; Smets, E M A; Han, P K J; de Haes, H C J C; Laarhoven, H W M van
2017-10-01
To examine how communication about life expectancy is initiated in consultations about palliative chemotherapy, and what prognostic information is presented. Patients with advanced cancer (n=41) with a median life expectancy <1 year and oncologists (n=6) and oncologists-in-training (n=7) meeting with them in consultations (n=62) to discuss palliative chemotherapy were included. Verbatim transcripts of audio-recorded consultations were analyzed using MAXqda10. Life expectancy was addressed in 19 of the 62 consultations. In all cases, patients took the initiative, most often through direct questions. Estimates were provided in 12 consultations in various formats: the likelihood of experiencing a significant event, point estimates, or general time scales of "months to years", often with an emphasis on the "years". The indeterminacy of estimates was consistently stressed. Their potential inadequacy was also regularly addressed, often by describing beneficial prognostic predictors for the specific patient. Oncologists did not address the reliability or precision of estimates. Oncologists did not initiate discussion of life expectancy; they used various formats, emphasized the positive, and stressed the unpredictability, but not the ambiguity, of estimates. Prognostic communication should be part of the medical curriculum. Further research should address the effect of different formats of information provision. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Scharnagl, B.; Vrugt, J. A.; Vereecken, H.; Herbst, M.
2010-02-01
A major drawback of current soil organic carbon (SOC) models is that their conceptually defined pools do not necessarily correspond to measurable SOC fractions in practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states of the individual carbon pools. In this study, we tested the feasibility of inverse modelling for estimating pools in the Rothamsted carbon model (ROTHC) using mineralization rates observed during incubation experiments. This inverse approach may provide an alternative to existing SOC fractionation methods. To illustrate our approach, we used a time series of mineralization rates generated synthetically with the ROTHC model. We adopted a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to infer probability density functions of the various carbon pools at the start of incubation. The Kullback-Leibler divergence was used to quantify the information content of the mineralization rate data. Our results indicate that measured mineralization rates generally provided sufficient information to reliably estimate all carbon pools in the ROTHC model. The incubation time necessary to appropriately constrain all pools was about 900 days. The use of prior information on microbial biomass carbon significantly reduced the uncertainty of the initial carbon pools, decreasing the required incubation time to about 600 days. Simultaneous estimation of initial carbon pools and decomposition rate constants significantly increased the uncertainty of the carbon pools. This effect was most pronounced for the intermediate and slow pools. Altogether, our results demonstrate that it is particularly difficult to derive reasonable estimates of the humified organic matter pool and the inert organic matter pool from inverse modelling of mineralization rates observed during incubation experiments.
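As a rough illustration of the inverse approach, the sketch below fits the initial sizes of a two-pool, first-order decay model to synthetic mineralization rates with a random-walk Metropolis sampler (a simpler stand-in for DREAM). Pool sizes, rate constants, and noise level are hypothetical, and the real ROTHC model has more pools and rate modifiers.

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.array([0.05, 0.002])          # assumed known decay constants (1/day)
true_C = np.array([120.0, 800.0])    # "true" initial pool sizes (arbitrary units)
t = np.arange(1, 901, 7.0)           # ~900-day incubation, weekly sampling

def rate(C):
    """Mineralization rate of a two-pool model: sum_i k_i C_i exp(-k_i t)."""
    return (k * C * np.exp(-np.outer(t, k))).sum(axis=1)

sigma = 0.05
obs = rate(true_C) + rng.normal(0, sigma, t.size)   # synthetic noisy data

def log_like(C):
    if np.any(C <= 0):
        return -np.inf
    return -0.5 * np.sum(((obs - rate(C)) / sigma) ** 2)

# Random-walk Metropolis over the two initial pool sizes.
C, ll, samples = np.array([50.0, 500.0]), -np.inf, []
for _ in range(20000):
    prop = C + rng.normal(0, [2.0, 10.0])
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        C, ll = prop, ll_prop
    samples.append(C.copy())
post = np.array(samples[5000:])                     # discard burn-in
print(post.mean(axis=0), post.std(axis=0))          # posterior mean and spread
```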
Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics
2016-09-15
...sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor... parameters. A single vector measurement will provide two independent parameters, as a unit vector constraint removes a degree of freedom, making the problem underdetermined
Functional Linear Model with Zero-value Coefficient Function at Sub-regions.
Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin
2013-01-01
We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our approach has certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, a large number of knots is needed to precisely identify the zero-coefficient region; however, variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to underestimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
Landis, Kathryn; Bednarczyk, Robert A; Gaydos, Laura M
2018-05-08
Vaccination is a safe and effective way to prevent Human Papillomavirus (HPV) infection and related cancers; however, HPV vaccine uptake remains low in the US. After the 2011 Advisory Committee on Immunization Practices (ACIP) recommendation for routine HPV vaccination of adolescent males, several studies have examined predictors for initiating the vaccine series in this population of interest, particularly with regard to provider recommendations. This study examined racial and ethnic differences for HPV vaccine initiation and provider recommendation in male adolescents. Based on prior HPV vaccine uptake estimates and healthcare utilization data, we hypothesized that minority adolescents would be more likely to initiate HPV vaccines, but less likely to receive a provider recommendation compared to white counterparts. We analyzed the 2014 National Immunization Survey-Teen (NIS-Teen), which included 10,753 male adolescents with provider-verified vaccination data in 50 US states, using multivariate logistic regression models to evaluate racial/ethnic differences in HPV vaccine initiation and provider recommendation. The odds of HPV vaccine initiation were 76 percent higher for Hispanic adolescents and 43 percent higher for non-Hispanic Other or Multiple race adolescents compared to white adolescents. Approximately half of parents reported receiving a provider recommendation for vaccination, with no significant difference in the odds of receiving a provider recommendation across racial/ethnic groups. Despite similar frequency of recommendations across racial and ethnic groups, male adolescents who are racial/ethnic minorities are more likely to initiate vaccination. Future research should focus on developing tailored interventions to increase HPV vaccine receipt among males of all racial/ethnic groups. Copyright © 2018 Elsevier Ltd. All rights reserved.
Development of the Plutonium-DTPA Biokinetic Model.
Konzen, Kevin; Brey, Richard
2015-06-01
Estimating radionuclide intakes from bioassays following chelation treatment presents a challenge to the dosimetrist due to the observed excretion enhancement of the particular radionuclide of concern, where no standard biokinetic model exists. This document provides a Pu-DTPA biokinetic model that may be used for making such determinations for plutonium intakes. The Pu-DTPA biokinetic model is intended to supplement the standard recommended biokinetic models. The model was used to evaluate several chelation strategies that resulted in providing recommendations for effective treatment. These recommendations supported early treatment for soluble particle inhalations and an initial 3-day series of DTPA treatments for wounds. Several late chelation strategies were also compared where reduced treatment frequencies proved to be as effective as multiple treatments. The Pu-DTPA biokinetic model can be used to assist in estimating initial intakes of transuranic radionuclides and for studying the effects of different treatment strategies.
Development of the Plutonium-DTPA biokinetic model
Konzen, Kevin; Brey, Richard
2015-06-01
Estimating radionuclide intakes from bioassays following chelation treatment presents a challenge to the dosimetrist due to the observed excretion enhancement of the particular radionuclide of concern, where no standard biokinetic model exists. This document provides a Pu-DTPA biokinetic model that may be used for making such determinations for plutonium intakes. The Pu-DTPA biokinetic model is intended to supplement the standard recommended biokinetic models. The model was used to evaluate several chelation strategies that resulted in providing recommendations for effective treatment. These recommendations supported early treatment for soluble particle inhalations and an initial 3-day series of DTPA treatments for wounds. Several late chelation strategies were also compared where reduced treatment frequencies proved to be as effective as multiple treatments. Furthermore, the Pu-DTPA biokinetic model can be used to assist in estimating initial intakes of transuranic radionuclides, and for studying the effects of different treatment strategies.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung; Liao, Liang; Jones, Jeffrey A.; Kwiatkowski, John M.
2015-01-01
It has long been recognized that path-integrated attenuation (PIA) can be used to improve precipitation estimates from high-frequency weather radar data. One approach that provides an estimate of this quantity from airborne or spaceborne radar data is the surface reference technique (SRT), which uses measurements of the surface cross section in the presence and absence of precipitation. Measurements from the dual-frequency precipitation radar (DPR) on the Global Precipitation Measurement (GPM) satellite afford the first opportunity to test the method for spaceborne radar data at Ka band as well as for the Ku-band-Ka-band combination. The study begins by reviewing the basis of the single- and dual-frequency SRT. As the performance of the method is closely tied to the behavior of the normalized radar cross section (NRCS or sigma(0)) of the surface, the statistics of sigma(0) derived from DPR measurements are given as a function of incidence angle and frequency for ocean and land backgrounds over a 1-month period. Several independent estimates of the PIA, formed by means of different surface reference datasets, can be used to test the consistency of the method since, in the absence of error, the estimates should be identical. Along with theoretical considerations, the comparisons provide an initial assessment of the performance of the single- and dual-frequency SRT for the DPR. The study finds that the dual-frequency SRT can provide improvement in the accuracy of path attenuation estimates relative to the single-frequency method, particularly at Ku band.
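The SRT logic reduces to simple differencing of surface cross sections, as the schematic example below shows; the sigma(0) values in dB are hypothetical, not DPR measurements.

```python
import numpy as np

# Hypothetical surface cross sections (dB) for one incidence angle and surface type.
sigma0_ku_clear = np.array([10.2, 10.5, 10.1, 10.4])  # rain-free reference, Ku
sigma0_ka_clear = np.array([8.9, 9.1, 8.8, 9.0])      # rain-free reference, Ka
sigma0_ku_rain, sigma0_ka_rain = 8.7, 3.2             # measured inside precipitation

# Single-frequency SRT: PIA is the apparent drop in the surface return.
pia_ku = sigma0_ku_clear.mean() - sigma0_ku_rain
pia_ka = sigma0_ka_clear.mean() - sigma0_ka_rain

# Dual-frequency SRT estimates the Ka-Ku PIA difference; differencing the two
# bands cancels surface variability that is common to both frequencies.
dpia_ka_ku = (sigma0_ka_clear.mean() - sigma0_ka_rain) \
           - (sigma0_ku_clear.mean() - sigma0_ku_rain)
print(pia_ku, pia_ka, dpia_ka_ku)
```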
Cost analysis for the implementation of a medication review with follow-up service in Spain.
Noain, Aranzazu; Garcia-Cardenas, Victoria; Gastelurrutia, Miguel Angel; Malet-Larrea, Amaia; Martinez-Martinez, Fernando; Sabater-Hernandez, Daniel; Benrimoj, Shalom I
2017-08-01
Background Medication review with follow-up (MRF) is a professional pharmacy service proven to be cost-effective. Its broader implementation is limited, mainly due to the lack of evidence-based implementation programs that include economic and financial analysis. Objective To analyse the costs and estimate the price of providing and implementing MRF. Setting Community pharmacy in Spain. Method Elderly patients using polypharmacy received a community pharmacist-led MRF for 6 months. The cost analysis was based on the time-driven activity-based costing model and included the provider costs, initial investment costs and maintenance expenses. The service price was estimated using the labour costs, costs associated with service provision, potential number of patients receiving the service and mark-up. Main outcome measures Costs and potential price of MRF. Results A mean time of 404.4 (SD 232.2) minutes was spent on service provision and was extrapolated to annual costs. Service provider cost per patient ranged from €196 (SD 90.5) to €310 (SD 164.4). The mean initial investment per pharmacy was €4,594 and the mean annual maintenance costs €3,068. The largest items contributing to cost were initial staff training, continuing education and renting of the patient counselling area. The potential service price ranged from €237 to €628 per patient a year. Conclusion Time spent by the service provider accounted for 75-95% of the final cost, followed by initial investment costs and maintenance costs. Remuneration for professional pharmacy services provision must cover service costs and appropriate profit, allowing for their long-term sustainability.
ERIC Educational Resources Information Center
Chingos, Matthew M.; Peterson, Paul E.
2015-01-01
We provide the first experimental estimates of the long-term impacts of a voucher to attend private school by linking data from a privately sponsored voucher initiative in New York City, which awarded the scholarships by lottery to low-income families, to administrative records on college enrollment and degree attainment. We find no significant…
Strain measurement based battery testing
Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk
2017-05-23
A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
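The capacity-degradation relation stated above is a one-line computation; a minimal sketch with hypothetical strain-gauge readings follows.

```python
def capacity_degradation(strain_initial: float, strain_aged: float) -> float:
    """Fractional capacity degradation per the relation in the abstract:
    (aged strain - initial strain) / initial strain, with both strains
    read at the same selected charge capacity."""
    return (strain_aged - strain_initial) / strain_initial

# Hypothetical microstrain readings at the same state of charge.
print(capacity_degradation(1500.0, 1725.0))  # -> 0.15, i.e., 15% degradation estimate
```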
Ben Hamida, Amen; Rafful, Claudia; Jain, Sonia; Sun, Shelly; Gonzalez-Zuniga, Patricia; Rangel, Gudelia; Strathdee, Steffanie A; Werb, Dan
2018-02-01
Although most people who inject drugs (PWID) report receiving assistance during injection initiation events, little research has focused on risk factors for PWID providing injection initiation assistance. We therefore sought to determine the influence of non-injection drug use among PWID on their risk of initiating others. We used generalized estimating equation (GEE) models on longitudinal data from a prospective cohort of PWID in Tijuana, Mexico (Proyecto El Cuete IV), while controlling for potential confounders. At baseline, 534 participants provided data on injection initiation assistance. Overall, 14% reported ever initiating others, with 4% reporting this behavior recently (i.e., in the past 6 months). In a multivariable GEE model, recent non-injection drug use was independently associated with providing injection initiation assistance (adjusted odds ratio [AOR] = 2.42, 95% confidence interval [CI] = 1.39-4.20). Further, in subanalyses examining specific drug types, recent non-injection use of cocaine (AOR = 9.31, 95% CI = 3.98-21.78), heroin (AOR = 4.00, 95% CI = 1.88-8.54), and methamphetamine (AOR = 2.03, 95% CI = 1.16-3.55) were all significantly associated with reporting providing injection initiation assistance. Our findings may have important implications for the development of interventional approaches to reduce injection initiation and related harms. Further research is needed to validate findings and inform future approaches to preventing entry into drug injecting.
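A hedged sketch of the kind of GEE model used here, via statsmodels; the data frame, column names, and covariates are hypothetical stand-ins for the Proyecto El Cuete IV variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_initiation_gee(df: pd.DataFrame):
    """Logistic GEE with an exchangeable working correlation, accounting
    for repeated observations on the same participant over follow-up."""
    model = smf.gee(
        "initiated ~ noninj_use + age + years_injecting",  # hypothetical columns
        groups="participant_id",
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    # Exponentiated coefficients and confidence bounds give AORs and 95% CIs.
    return np.exp(result.params), np.exp(result.conf_int())
```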
Study on communications costs for Columbus utilization
NASA Astrophysics Data System (ADS)
Nielsen, Svend Moller; Sorensen, Nicolaj
1988-09-01
On the basis of a hypothetical communications scenario established for cost calculations, the expected communications costs for Columbus utilization in the year 1995 and onwards to the year 2025, are estimated to provide initial considerations for a charging policy in relation to potential Columbus users. A hypothetical sample of five European countries is established, and current telecommunications tariffs for the data, voice, and video communications required for the Columbus utilization in and between these five countries and the USA are identified. Technological, political, and commercial development trends are analyzed as to their likely influences on future telecommunications tariff development. Communications costs for the study period are estimated, assuming telecommunications administrations to be providers of service and considering estimated equipment and operations costs. Alternative communications solutions are indicated.
Asquith, William H.; Roussel, Meghan C.
2007-01-01
Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is limited to a previously described, watershed-specific, gamma distribution model of the unit hydrograph. In particular, the initial-abstraction, constant-loss model is tuned to the gamma distribution model of the unit hydrograph. A complex computational analysis of observed rainfall and runoff for the 92 watersheds was done to determine, by storm, optimal values of initial abstraction and constant loss. Optimal parameter values for a given storm were defined as those values that produced a modeled runoff hydrograph with volume equal to the observed runoff hydrograph and also minimized the residual sum of squares of the two hydrographs. Subsequently, the means of the optimal parameters were computed on a watershed-specific basis. These means for each watershed are considered the most representative, are tabulated, and are used in further statistical analyses. Statistical analyses of watershed-specific, initial abstraction and constant loss include documentation of the distribution of each parameter using the generalized lambda distribution. The analyses show that watershed development has substantial influence on initial abstraction and limited influence on constant loss. The means and medians of the 92 watershed-specific parameters are tabulated with respect to watershed development; although they have considerable uncertainty, these parameters can be used for parameter prediction for ungaged watersheds.
The statistical analyses of watershed-specific, initial abstraction and constant loss also include development of predictive procedures for estimation of each parameter for ungaged watersheds. Both regression equations and regression trees for estimation of initial abstraction and constant loss are provided. The watershed characteristics included in the regression analyses are (1) main-channel length, (2) a binary factor representing watershed development, (3) a binary factor representing watersheds with an abundance of rocky and thin-soiled terrain, and (4) curve number.
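The initial-abstraction, constant-loss model itself is compact; below is a minimal sketch of the excess-rainfall computation under stated assumptions (a uniform hyetograph time step and hypothetical parameter values).

```python
import numpy as np

def excess_rainfall(rain, ia, cl, dt=1.0):
    """Initial-abstraction, constant-loss model.

    rain : rainfall depth per time step (e.g., inches)
    ia   : initial abstraction depth satisfied before any runoff occurs
    cl   : constant loss rate (depth per unit time) after ia is satisfied
    dt   : time step length
    Returns the excess (runoff-producing) rainfall per step.
    """
    excess = np.zeros_like(rain, dtype=float)
    abstracted = 0.0
    for i, r in enumerate(rain):
        avail = r
        if abstracted < ia:                      # fill the initial abstraction first
            take = min(avail, ia - abstracted)
            abstracted += take
            avail -= take
        excess[i] = max(0.0, avail - cl * dt)    # then apply the constant loss
    return excess

storm = np.array([0.1, 0.4, 0.8, 0.6, 0.2])      # hypothetical hyetograph, dt = 1 hr
print(excess_rainfall(storm, ia=0.5, cl=0.2))    # -> [0. 0. 0.6 0.4 0.]
```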
Wang, Hexiang; Barton, Justin E.; Schuster, Eugenio
2015-09-01
The accuracy of the internal states of a tokamak, which usually cannot be measured directly, is of crucial importance for feedback control of the plasma dynamics. A first-principles-driven plasma response model could provide an estimation of the internal states given the boundary conditions on the magnetic axis and at the plasma boundary. However, the estimation would highly depend on initial conditions, which may not always be known, disturbances, and non-modeled dynamics. In this work, a closed-loop state observer for the poloidal magnetic flux is proposed based on a very limited set of real-time measurements by following an Extended Kalman Filtering (EKF) approach. Comparisons between estimated and measured magnetic flux profiles are carried out for several discharges in the DIII-D tokamak. The experimental results illustrate the capability of the proposed observer in dealing with incorrect initial conditions and measurement noise.
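A generic EKF predict/update cycle of the kind underlying such an observer is sketched below; the state-transition and measurement functions stand in for the first-principles plasma-response model and the real-time diagnostics, which are not specified here.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One extended Kalman filter cycle: propagate, then correct.
    f, h are the nonlinear model and measurement maps; F, H return their
    Jacobians; Q, R are process and measurement noise covariances."""
    # Predict: propagate the state with the model (plasma response here).
    x_pred = f(x, u)
    Fj = F(x, u)
    P_pred = Fj @ P @ Fj.T + Q
    # Update: correct with the limited set of real-time measurements z.
    Hj = H(x_pred)
    innovation = z - h(x_pred)
    S = Hj @ P_pred @ Hj.T + R
    K = P_pred @ Hj.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ Hj) @ P_pred
    return x_new, P_new
```

The observer's robustness to incorrect initial conditions comes from this correction step: the gain K continually pulls the model state toward the measurements, so an initialization error decays over successive cycles.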
Mechanisms of Pulsed Laser Induced Damage to Optical Coatings
1986-07-01
photoionization of absorption centers... Electron densities achieved at... from color center initiation due... lends validity to this model. It also provides an order-of-magnitude estimate of the range of the otherwise unknown optical absorption coefficient and... very high temperatures can be reached in the center of the film while the boundaries remain nearly at their initial temperature. In this case a
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Vrugt, Jasper A.; Vereecken, Harry; Herbst, Michael
2010-05-01
Turnover of soil organic matter is usually described with multi-compartment models. However, a major drawback of these models is that the conceptually defined compartments (or pools) do not necessarily correspond to measurable soil organic carbon (SOC) fractions in real practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states. In this study, we tested the usefulness and applicability of inverse modeling to derive the various carbon pool sizes in the Rothamsted carbon model (ROTHC) using a synthetic time series of mineralization rates from laboratory incubation. To appropriately account for data and model uncertainty we considered a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. This Markov chain Monte Carlo scheme derives the posterior probability density distribution of the initial pool sizes at the start of incubation from observed mineralization rates. We used the Kullback-Leibler divergence to quantify the information contained in the data and to illustrate the effect of increasing incubation times on the reliability of the pool size estimates. Our results show that measured mineralization rates generally provide sufficient information to reliably estimate the sizes of all active pools in the ROTHC model. However, with about 900 days of incubation, these experiments are excessively long. The use of prior information on microbial biomass provided a way forward to significantly reduce uncertainty and required duration of incubation to about 600 days. Explicit consideration of model parameter uncertainty in the estimation process further impaired the identifiability of initial pools, especially for the more slowly decomposing pools. Our illustrative case studies show how Bayesian inverse modeling can be used to provide important insights into the information content of incubation experiments. Moreover, the outcome of this virtual experiment helps to explain the results of related real-world studies on SOC dynamics.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
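For reference, this comparison can be reproduced in miniature with scikit-learn, whose GaussianMixture exposes both the initialization strategy and the number of EM restarts; the data here are synthetic, and only the two initialization options available across sklearn versions are shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic two-component data in 2-D.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])

# n_init re-runs EM from several starting values and keeps the solution
# with the best observed log-likelihood, mitigating local optima.
for init in ["kmeans", "random"]:
    gm = GaussianMixture(n_components=2, init_params=init, n_init=10,
                         random_state=0).fit(X)
    print(init, gm.lower_bound_)   # per-sample log-likelihood at convergence
```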
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project is analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
A first look at lightning energy determined from GLM
NASA Astrophysics Data System (ADS)
Bitzer, P. M.; Burchfield, J. C.; Brunner, K. N.
2017-12-01
The Geostationary Lightning Mapper (GLM), launched in November 2016 onboard GOES-16, has been undergoing post-launch and product post-launch testing. While these tests have typically focused on lightning metrics such as detection efficiency, false alarm rate, and location accuracy, GLM data provide other attributes of the lightning discharge as well. Namely, the optical energy radiated by lightning may provide information useful for lightning physics and for relating lightning energy to severe weather development. This work presents initial estimates of the lightning optical energy detected by GLM during this initial testing, with a focus on observations from a field campaign in Huntsville during spring 2017. This region is advantageous for the comparison due to the proliferation of ground-based lightning instrumentation, including a lightning mapping array, an interferometer, HAMMA (an array of electric field change meters), high-speed video cameras, and several long-range VLF networks. In addition, the field campaign included airborne observations of the optical emission and electric field changes. The initial estimates will be compared with previous observations from TRMM-LIS. A comparison between the operational and scientific GLM data sets will also be discussed.
Effective force control by muscle synergies.
Berger, Denise J; d'Avella, Andrea
2014-01-01
Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands, and they have been shown to explain a large fraction of the variation in muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4-5 muscle synergies. While trajectories were similar, when feedback was provided according to force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control) we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control, and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination.
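A minimal sketch of the synergy extraction step via non-negative matrix factorization; the EMG matrix here is random stand-in data, with dimensions chosen to match the 13 recorded muscles and 4 synergies.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# EMG: (n_samples x n_muscles) matrix of rectified, non-negative envelopes.
emg = np.abs(rng.normal(size=(1000, 13)))

nmf = NMF(n_components=4, init="nndsvd", max_iter=500)
activations = nmf.fit_transform(emg)    # synergy activation coefficients over time
synergies = nmf.components_             # 4 x 13 muscle weighting vectors

# Reconstruction in synergy space, the projection used for synergy-control feedback.
emg_hat = activations @ synergies
print(np.linalg.norm(emg - emg_hat) / np.linalg.norm(emg))  # relative residual
```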
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average mu-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR derived μ-maps was also evaluated using computed tomography μ-maps as the gold-standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from MR derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient database, and it is feasible to obtain accurate average μ-value using MR derived μ-map with corrections as demonstrated in this work.
Zielinski, R.A.; Otton, J.K.; Budahn, J.R.
2001-01-01
Radium-bearing barite (radiobarite) is a common constituent of scale and sludge deposits that form in oil-field production equipment. The barite forms as a precipitate from radium-bearing, saline formation water that is pumped to the surface along with oil. Radioactivity levels in some oil-field equipment and in soils contaminated by scale and sludge can be sufficiently high to pose a potential health threat. Accurate determinations of radium isotopes (226Ra+228Ra) in soils are required to establish the level of soil contamination and the volume of soil that may exceed regulatory limits for total radium content. In this study the radium isotopic data are used to provide estimates of the age of formation of the radiobarite contaminant. Age estimates require that highly insoluble radiobarite approximates a chemically closed system from the time of its formation. Age estimates are based on the decay of short-lived 228Ra (half-life=5.76 years) compared to 226Ra (half-life=1600 years). Present activity ratios of 228Ra/226Ra in radiobarite-rich scale or highly contaminated soil are compared to initial ratios at the time of radiobarite precipitation. Initial ratios are estimated by measurements of saline water or recent barite precipitates at the site or by considering a range of probable initial ratios based on reported values in modern oil-field brines. At sites that contain two distinct radiobarite sources of different age, the soils containing mixtures of sources can be identified, and mixing proportions quantified using radium concentration and isotopic data. These uses of radium isotope data provide more description of contamination history and can possibly address liability issues. Copyright © 2000.
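The age estimate follows from the differential decay of the two radium isotopes: in a closed system the activity ratio evolves as R(t) = R0 exp(-(lambda228 - lambda226) t). A sketch using the half-lives quoted above, with hypothetical activity ratios:

```python
import numpy as np

T12_RA228, T12_RA226 = 5.76, 1600.0              # half-lives in years (from text)
lam228 = np.log(2) / T12_RA228
lam226 = np.log(2) / T12_RA226

def barite_age(ratio_now: float, ratio_initial: float) -> float:
    """Years since radiobarite precipitation, assuming a closed system:
    t = ln(R0 / R(t)) / (lam228 - lam226)."""
    return np.log(ratio_initial / ratio_now) / (lam228 - lam226)

# Hypothetical ratios: initial from modern brine, present from scale sample.
print(barite_age(ratio_now=0.05, ratio_initial=0.4))  # ~17 years
```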
Leachate concentrations from water leach and column leach tests on fly ash-stabilized soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bin-Shafique, S.; Benson, C.H.; Edil, T.B.
2006-01-15
Batch water leaching tests (WLTs) and column leaching tests (CLTs) were conducted on coal-combustion fly ashes, soil, and soil-fly ash mixtures to characterize leaching of Cd, Cr, Se, and Ag. The concentrations of these metals were also measured in the field at two sites where soft fine-grained soils were mechanically stabilized with fly ash. Concentrations in leachate from the WLTs on soil-fly ash mixtures are different from those on fly ash alone and cannot be accurately estimated based on linear dilution calculations using concentrations from WLTs on fly ash alone. The concentration varies nonlinearly with fly ash content due to the variation in pH with fly ash content. Leachate concentrations are low when the pH of the leachate or the cation exchange capacity (CEC) of the soil is high. Initial concentrations from CLTs are higher than concentrations from WLTs due to differences in solid-liquid ratio, pH, and solid-liquid contact. However, both exhibit similar trends with fly ash content, leachate pH, and soil properties. Scaling factors can be applied to WLT concentrations (50 for Ag and Cd, 10 for Cr and Se) to estimate initial concentrations for CLTs. Concentrations in leachate collected from the field sites were generally similar or slightly lower than concentrations measured in CLTs on the same materials. Thus, CLTs appear to provide a good indication of conditions that occur in the field provided that the test conditions mimic the field conditions. In addition, initial concentrations in the field can be conservatively estimated from WLT concentrations using the aforementioned scaling factors provided that the pH of the infiltrating water is near neutral.
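Applying the reported scaling factors is simple arithmetic; in the sketch below the WLT concentrations are hypothetical.

```python
# Scaling factors from the study: initial CLT concentration ~= factor x WLT.
SCALE = {"Ag": 50, "Cd": 50, "Cr": 10, "Se": 10}

def estimate_clt_initial(wlt_mg_per_l: dict) -> dict:
    """Estimate initial column-leach concentrations (mg/L) from batch WLT results."""
    return {metal: conc * SCALE[metal] for metal, conc in wlt_mg_per_l.items()}

# Hypothetical WLT concentrations (mg/L) for a soil-fly ash mixture.
print(estimate_clt_initial({"Cd": 0.002, "Cr": 0.015, "Se": 0.008, "Ag": 0.001}))
```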
Hsu, HE; Rydzak, CE; Cotich, KL; Wang, B; Sax, PE; Losina, E; Freedberg, KA; Goldie, SJ; Lu, Z; Walensky, RP
2010-01-01
Objectives We quantified the benefits (life expectancy gains) and harms (efavirenz-related teratogenicity) associated with using efavirenz in HIV-infected women of childbearing age in the United States. Methods We used data from the Women’s Interagency HIV Study in an HIV disease simulation model to estimate life expectancy in women who receive an efavirenz-based initial antiretroviral regimen compared with those who delay efavirenz use and receive a boosted protease inhibitor-based initial regimen. To estimate excess risk of teratogenic events with and without efavirenz exposure per 100,000 women, we incorporated literature-based rates of pregnancy, live births, and teratogenic events into a decision analytic model. We assumed a teratogenicity risk of 2.90 events/100 live births in women exposed to efavirenz during pregnancy and 2.68/100 live births in unexposed women. Results Survival for HIV-infected women who received an efavirenz-based initial antiretroviral therapy regimen was 0.89 years greater than for women receiving non-efavirenz-based initial therapy (28.91 vs. 28.02 years). The rate of teratogenic events was 77.26/100,000 exposed women, compared with 72.46/100,000 unexposed women. Survival estimates were sensitive to variations in treatment efficacy and AIDS-related mortality. Estimates of excess teratogenic events were most sensitive to pregnancy rates and number of teratogenic events/100 live births in efavirenz-exposed women. Conclusions Use of non-efavirenz-based initial antiretroviral therapy in HIV-infected women of childbearing age may reduce life expectancy gains from antiretroviral treatment, but may also prevent teratogenic events. Decision-making regarding efavirenz use presents a tradeoff between these two risks; this study can inform discussions between patients and health care providers. PMID:20561082
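The excess-risk arithmetic in this abstract reduces to multiplying a live-birth rate by the per-live-birth teratogenicity risk. A minimal Python sketch follows; the live-birth rate is a placeholder chosen only to illustrate the calculation, not a value from the study.

```python
def teratogenic_events_per_100k(live_births_per_100k_women: float,
                                events_per_100_live_births: float) -> float:
    """Teratogenic events per 100,000 women, given the live-birth rate and
    the per-live-birth teratogenicity risk."""
    return live_births_per_100k_women * events_per_100_live_births / 100.0

# Risks quoted in the abstract; the live-birth rate is hypothetical.
births = 2700.0  # live births per 100,000 women over the model horizon
exposed = teratogenic_events_per_100k(births, 2.90)
unexposed = teratogenic_events_per_100k(births, 2.68)
print(f"excess events per 100,000 women: {exposed - unexposed:.2f}")
```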
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
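A breeding cycle of this kind is simple to sketch. The toy Python code below uses the Lorenz-63 system as a stand-in for the mesoscale model and a single perturbation (no ensemble-transform orthogonalization); all parameter values are illustrative.

```python
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy stand-in for the model)."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def self_breed(x0, pert, n_cycles=50, n_steps=20, norm_size=0.1):
    """Minimal self-breeding cycle: integrate control and perturbed runs for a
    short period, rescale the difference to a fixed norm, and add it back."""
    for _ in range(n_cycles):
        xc, xp = x0.copy(), x0 + pert
        for _ in range(n_steps):                      # short forward integration
            xc, xp = lorenz_step(xc), lorenz_step(xp)
        diff = xp - xc
        pert = norm_size * diff / np.linalg.norm(diff)  # rescale perturbation
        x0 = xc                                         # restart from control run
    return pert  # estimate of the leading local Lyapunov vector

rng = np.random.default_rng(0)
print(self_breed(np.array([1.0, 1.0, 1.0]), 0.1 * rng.standard_normal(3)))
```

For an ensemble of perturbations, a QR decomposition of the perturbation matrix after each cycle is one way to realize the orthogonalization step mentioned above.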
Butterworth, S J
2014-01-01
Super-Typhoon Haiyan struck the Philippines on 7 November 2013. The initial reports estimated 10 000 fatalities and four million displaced persons. As the United Kingdom's initial response to this disaster, HMS DARING was diverted from her deployment to take part in humanitarian aid, named Operation PATWIN. This article will outline the medical aspects of the relief effort undertaken and aim to identify any lessons that may inform future operations.
1980-11-01
Item 19 Continued: system design; design handbooks; maintenance manpower; simulation; decision options; cost estimating relationships; prediction... determine the extent to which human resources data (HRD) are used in early system design. The third was to assess the availability and adequacy of... relationships, regression analysis, comparability analysis, expected value techniques) to provide initial data values in the very early stages of weapon system
The influence of children's pain memories on subsequent pain experience.
Noel, Melanie; Chambers, Christine T; McGrath, Patrick J; Klein, Raymond M; Stewart, Sherry H
2012-08-01
Healthy children are often required to repeatedly undergo painful medical procedures (e.g., immunizations). Although memory is often implicated in children's reactions to future pain, there is a dearth of research directly examining the relationship between the two. The current study investigated the influence of children's memories for a novel pain stimulus on their subsequent pain experience. One hundred ten healthy children (60 boys) between the ages of 8 and 12 years completed a laboratory pain task and provided pain ratings. Two weeks later, children provided pain ratings based on their memories as well as their expectancies about future pain. One month following the initial laboratory visit, children again completed the pain task and provided pain ratings. Results showed that children's memory of pain intensity was a better predictor of subsequent pain reporting than their actual initial reporting of pain intensity, and mediated the relationship between initial and subsequent pain reporting. Children who had negatively estimated pain memories developed expectations of greater pain prior to a subsequent pain experience and showed greater increases in pain ratings over time than children who had accurate or positively estimated pain memories. These findings highlight the influence of pain memories on healthy children's expectations of future pain and subsequent pain experiences and extend predictive models of subsequent pain reporting. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1991-01-01
Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.
78 FR 52808 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-26
.... Please direct your written comments to Thomas Bayer, Chief Information Officer, Securities and Exchange....) (``Securities Act''). The primary purpose of the registration process is to provide disclosure of financial and... policy-making roles. The Commission estimates that there are 162 initial registration statements and 29...
Prospects for genomic selection in cassava breeding
USDA-ARS?s Scientific Manuscript database
Cassava (Manihot esculenta Crantz) is a clonally propagated staple food crop in the tropics. Genomic selection (GS) has been implemented at three breeding institutions in Africa in order to reduce cycle times. Initial studies provided promising estimates of predictive abilities. Here, we expand on p...
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method consists of three parts. First, two image features, i.e., gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process reduces the noise in the initial weight maps and preserves more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight map refinement, which provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
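As a sketch of this pipeline, the Python code below computes per-image weights from a contrast cue and a well-exposedness cue, refines them with a classic guided filter (He et al.), and forms the weighted sum. It assumes grayscale images in [0, 255]; the filter radius, regularization, and the 0.2 exposedness width are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(I, p, r=8, eps=1e-3):
    """Classic guided filter: edge-preserving smoothing of the weight map p
    using the source image I as the guidance image."""
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mI, mp = box(I), box(p)
    cov, var = box(I * p) - mI * mp, box(I * I) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a) * I + box(b)

def fuse(images):
    """Weighted-sum exposure fusion with guided-filter-refined weight maps."""
    imgs = [im.astype(np.float64) / 255.0 for im in images]
    weights = []
    for im in imgs:
        grad = np.abs(laplace(im))                    # gradient/contrast cue
        wexp = np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness cue
        weights.append(guided_filter(im, grad * wexp + 1e-12))
    W = np.clip(np.array(weights), 1e-12, None)
    W /= W.sum(axis=0)                                # normalize per pixel
    return np.clip(sum(w * im for w, im in zip(W, imgs)), 0.0, 1.0)
```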
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to distinguish under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
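The tangent-space idea exploits the fact that, once the magnitude is known, gravity retains only two degrees of freedom. The Python sketch below shows one common parameterization of such an update; the 2D correction would be produced by the visual-inertial optimizer, which is not reproduced here.

```python
import numpy as np

G_MAG = 9.81  # known gravity magnitude (m/s^2)

def tangent_basis(g_hat):
    """Two unit vectors spanning the tangent plane of the gravity-magnitude
    sphere at the current direction estimate g_hat."""
    tmp = np.array([0.0, 1.0, 0.0]) if abs(g_hat[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    b1 = np.cross(g_hat, tmp)
    b1 /= np.linalg.norm(b1)
    return b1, np.cross(g_hat, b1)

def refine_gravity(g_init, delta):
    """Apply a 2D correction delta = (d1, d2), as produced by the outer
    optimization (not shown), while keeping ||g|| fixed at G_MAG."""
    g_hat = g_init / np.linalg.norm(g_init)
    b1, b2 = tangent_basis(g_hat)
    g_new = g_hat + delta[0] * b1 + delta[1] * b2
    return G_MAG * g_new / np.linalg.norm(g_new)

print(refine_gravity(np.array([0.3, 0.1, -9.6]), np.array([0.02, -0.01])))
```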
Estimation of the phase response curve from Parkinsonian tremor.
Saifee, Tabish A; Edwards, Mark J; Kassavetis, Panagiotis; Gilbertson, Tom
2016-01-01
Phase response curves (PRCs), characterizing the response of an oscillator to weak external perturbation, have been estimated from a broad range of biological oscillators, including single neurons in vivo. PRC estimates, in turn, provide an intuitive insight into how oscillatory systems become entrained and how they can be desynchronized. Here, we explore the application of PRC theory to the case of Parkinsonian tremor. Initial attempts to establish a causal effect of subthreshold transcranial magnetic stimulation applied to primary motor cortex on the filtered tremor phase were unsuccessful. We explored the possible explanations of this and demonstrate that assumptions made when estimating the PRC in a traditional setting, such as a single neuron, are not arbitrary when applied to the case of tremor PRC estimation. We go on to extract the PRC of Parkinsonian tremor using an iterative method that requires varying the definition of the tremor cycle and estimating the PRC at multiple peristimulus time samples. Justification for this method is supported by estimates of PRC from simulated single neuron data. We provide an approach to estimating confidence limits for tremor PRC and discuss the interpretational caveats introduced by tremor harmonics and the intrinsic variability of the tremor's period. Copyright © 2016 the American Physiological Society.
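As a rough illustration of the kind of estimate discussed here, the Python sketch below extracts an instantaneous tremor phase with the Hilbert transform and pairs stimulus-induced phase shifts with the phase at stimulation. It is a simplified single-pass version, not the authors' iterative multi-sample procedure, and it ignores the harmonics and period-variability caveats they raise.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_prc(tremor, stim_idx, fs, period_s):
    """Crude PRC estimate: for each stimulus, compare the Hilbert phase one
    mean period later with the phase expected from unperturbed rotation
    (which equals the phase at stimulation, mod 2*pi)."""
    phase = np.angle(hilbert(tremor - np.mean(tremor)))
    lag = int(round(period_s * fs))
    stim_phase, shift = [], []
    for i in stim_idx:
        if i + lag >= len(phase):
            continue
        stim_phase.append(np.mod(phase[i], 2 * np.pi))
        # Wrap the observed-minus-expected difference to (-pi, pi].
        shift.append(np.angle(np.exp(1j * (phase[i + lag] - phase[i]))))
    return np.array(stim_phase), np.array(shift)

# Sanity check on a clean 5 Hz oscillation: shifts should be near zero.
fs, f0 = 1000, 5.0
t = np.arange(0, 10, 1 / fs)
ph, dph = estimate_prc(np.sin(2 * np.pi * f0 * t),
                       np.arange(500, 9000, 400), fs, 1 / f0)
print(dph[:3])
```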
Support to LANL: Cost estimation. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report summarizes the activities and progress by ICF Kaiser Engineers conducted on behalf of Los Alamos National Laboratories (LANL) for the US Department of Energy, Office of Waste Management (EM-33) in the area of improving methods for Cost Estimation. This work was conducted between October 1, 1992 and September 30, 1993. ICF Kaiser Engineers supported LANL in providing the Office of Waste Management with planning and document preparation services for a Cost and Schedule Estimating Guide (Guide). The intent of the Guide was to use Activity-Based Cost (ABC) estimation as a basic method in preparing cost estimates for DOE planning and budgeting documents, including Activity Data Sheets (ADSs), which form the basis for the Five Year Plan document. Prior to the initiation of the present contract with LANL, ICF Kaiser Engineers was tasked to initiate planning efforts directed toward a Guide. This work, accomplished from June to September, 1992, included visits to eight DOE field offices and consultation with DOE Headquarters staff to determine the need for a Guide, the desired contents of a Guide, and the types of ABC estimation methods and documentation requirements that would be compatible with current or potential practices and expertise in existence at DOE field offices and their contractors.
In order to predict the margin between the dose needed for adverse chemical effects and actual human exposure rates, data on hazard, exposure, and toxicokinetics are needed. In vitro methods, biomonitoring, and mathematical modeling have provided initial estimates for many extant...
78 FR 65717 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... sending an email to: [email protected] ; and (ii) Thomas Bayer, Chief Information Officer....) (``Securities Act''). The primary purpose of the registration process is to provide disclosure of financial and... policy-making roles. The Commission estimates that there are 162 initial registration statements and 29...
NASA Astrophysics Data System (ADS)
Heinlein, S. N.
2013-12-01
Remote sensing data sets are widely used for evaluation of surface manifestations of active tectonics. This study utilizes ASTER GDEM and Landsat ETM+ data sets with Google Earth images draped over terrain models. It evaluates 1) the surface geomorphology surrounding the study area with these data sets and 2) the morphology of the Kumroch Fault, using diffusion modeling of ground profiles measured across fault scarps by Kozhurin et al. (2006) to estimate a constant diffusivity (κ) and derive slip rates. Models of the evolution of fault scarp morphology provide the time elapsed since slip initiated on a fault's surface and may therefore provide more accurate estimates of slip rate than the rate calculated by dividing scarp offset by the age of the ruptured surface. Profiles of scarps collected by Kozhurin et al. (2006), formed by several events distributed through time, were evaluated using a constant slip rate (CSR) solution, which yields the value A/κ (half the slip rate divided by diffusivity). The time elapsed since slip initiated on the fault is determined by establishing a value for κ and measuring total scarp offset. CSR nonlinear modeling estimates of κ range from 8 m2/ka to 14 m2/ka on the Kumroch Fault, which indicates slip rates of 0.6 mm/yr to 1.0 mm/yr since 3.4-3.7 ka. This method provides a quick and inexpensive way to gather data for a regional tectonic study and establish estimated rates of tectonic activity. Analyses of the remote sensing data are providing new insight into the role of active tectonics within the region. Fault scarp diffusion rates were calibrated against the fault scarp diffusion models of Mattson and Bruhn (2001) and DuRoss and Bruhn (2004) and against trench profiles of the Kumroch Fault from Kozhurin et al. (2006), Kozhurin (2007), Kozhurin et al. (2008) and Pinegina et al. (2012).
Akazawa, Manabu; Stearns, Sally C; Biddle, Andrea K
2008-01-01
Objective To assess costs, effectiveness, and cost-effectiveness of inhaled corticosteroids (ICS) augmenting bronchodilator treatment for chronic obstructive pulmonary disease (COPD). Data Sources Claims between 1997 and 2005 from a large managed care database. Study Design Individual-level, fixed-effects regression models estimated the effects of initiating ICS on medical expenses and likelihood of severe exacerbation. Bootstrapping provided estimates of the incremental cost per severe exacerbation avoided. Data Extraction Methods COPD patients aged 40 or older with ≥15 months of continuous eligibility were identified. Monthly observations for 1 year before and up to 2 years following initiation of bronchodilators were constructed. Principal Findings ICS treatment reduced monthly risk of severe exacerbation by 25 percent. Total costs with ICS increased for 16 months, but declined thereafter. ICS use was cost saving 46 percent of the time, with an incremental cost-effectiveness ratio of $2,973 per exacerbation avoided; for patients ≥50 years old, ICS was cost saving 57 percent of time. Conclusions ICS treatment reduces exacerbations, with an increase in total costs initially for the full sample. Compared with younger patients with COPD, patients aged 50 or older have reduced costs and improved outcomes. The estimated cost per severe exacerbation avoided, however, may be high for either group because of uncertainty as reflected by the large standard errors of the parameter estimates. PMID:18671750
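The bootstrap step in this design is easy to sketch: resample patients, recompute the mean incremental cost and incremental exacerbation risk, and form the ratio. The Python sketch below is a stylized stand-in for the study's regression-based estimates; the input arrays are hypothetical per-patient differences.

```python
import numpy as np

def bootstrap_icer(delta_cost, delta_exac, n_boot=5000, seed=0):
    """Bootstrap the incremental cost per severe exacerbation avoided from
    per-patient differences (ICS minus comparator). Returns the median ICER
    and the fraction of resamples in which ICS saves cost."""
    dc_all = np.asarray(delta_cost, float)
    de_all = np.asarray(delta_exac, float)
    rng = np.random.default_rng(seed)
    n = len(dc_all)
    icers, cost_saving = [], 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        dc, de = dc_all[idx].mean(), de_all[idx].mean()
        if dc < 0:
            cost_saving += 1          # ICS dominates on cost in this resample
        if de < 0:                    # exacerbation risk reduced
            icers.append(dc / -de)
    return np.median(icers), cost_saving / n_boot

# Hypothetical per-patient differences in cost ($) and exacerbation risk.
rng = np.random.default_rng(1)
print(bootstrap_icer(rng.normal(50, 2000, 300), rng.normal(-0.02, 0.1, 300)))
```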
Desai, Kamal; Gupta, Swati B; Dubberke, Erik R; Prabhu, Vimalanand S; Browne, Chantelle; Mast, T Christopher
2016-06-18
Despite a large increase in Clostridium difficile infection (CDI) severity, morbidity and mortality in the US since the early 2000s, CDI burden estimates have had limited generalizability and comparability due to widely varying clinical settings, populations, or study designs. A decision-analytic model incorporating key input parameters important in CDI epidemiology was developed to estimate the annual number of initial and recurrent CDI cases, attributable and all-cause deaths, economic burden in the general population, and specific number of high-risk patients in different healthcare settings and the community in the US. Economic burden was calculated adopting a societal perspective using a bottom-up approach that identified healthcare resources consumed in the management of CDI. Annually, a total of 606,058 (439,237 initial and 166,821 recurrent) episodes of CDI were predicted in 2014: 34.3 % arose from community exposure. Over 44,500 CDI-attributable deaths in 2014 were estimated to occur. High-risk susceptible individuals representing 5 % of the total hospital population accounted for 23 % of hospitalized CDI patients. The economic cost of CDI was $5.4 billion ($4.7 billion (86.7 %) in healthcare settings; $725 million (13.3 %) in the community), mostly due to hospitalization. A modeling framework provides more comprehensive and detailed national-level estimates of CDI cases, recurrences, deaths and cost in different patient groups than currently available from separate individual studies. As new treatments for CDI are developed, this model can provide reliable estimates to better focus healthcare resources to those specific age-groups, risk-groups, and care settings in the US where they are most needed. (Trial Identifier ClinicalTrials.gov: NCT01241552).
1983-05-01
observed end-of-course scores for tasks trained to criterion. The MGA software was calibrated to provide retention estimates at two levels of... exceed the MGA estimates. Thirty-five out of forty, or 87.5%, of the tasks met this expectation. For these first trial data, MGA software predicts... Objective: The objective of this effort was to perform an operational test of the capability of MGA Skill Training and Retention (STAR©) software to
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs slows the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, and therefore delays its vanishing. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.
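For intuition about the QFI as a precision measure, the Python sketch below evaluates it numerically for a pure-state probe family, using the standard pure-state formula F_Q = 4(⟨∂ψ|∂ψ⟩ − |⟨ψ|∂ψ⟩|²) and the Cramér-Rao bound var(β) ≥ 1/F_Q. The plasmonic problem above involves open-system (mixed-state) dynamics, so this is only the pure-state special case, and the example state is illustrative.

```python
import numpy as np

def qfi_pure(psi, beta, dbeta=1e-6):
    """Numerical quantum Fisher information for a pure-state family psi(beta):
    F_Q = 4*(<dpsi|dpsi> - |<psi|dpsi>|^2), with a finite-difference derivative."""
    p0 = psi(beta)
    dpsi = (psi(beta + dbeta) - p0) / dbeta
    return 4.0 * np.real(np.vdot(dpsi, dpsi) - abs(np.vdot(p0, dpsi)) ** 2)

# Example: a single-qubit probe whose relative phase encodes beta.
psi = lambda b: np.array([1.0, np.exp(1j * b)]) / np.sqrt(2)
print(qfi_pure(psi, 0.3))  # ~1.0, so var(beta) >= 1/F_Q = 1 per measurement
```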
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
Experimental Estimation of Mutation Rates in a Wheat Population With a Gene Genealogy Approach
Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle
2008-01-01
Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 × 10−3 per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues. PMID:18689900
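For intuition, the simplest point estimate behind numbers like these divides the observed new alleles by the cumulated meioses. The Python sketch below shows that naive estimator with hypothetical inputs; the paper's four genealogy-based estimators additionally model the known initial state and the genetic effective population size.

```python
def mutation_rate_per_locus(new_alleles: int, n_plants: int,
                            generations: int) -> float:
    """Naive per-locus mutation rate: observed new alleles divided by the
    cumulated meioses (two gene copies per plant per generation)."""
    cumulated_meioses = 2 * n_plants * generations
    return new_alleles / cumulated_meioses

# Hypothetical numbers for illustration only.
print(mutation_rate_per_locus(new_alleles=2, n_plants=100, generations=15))
```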
Xu, Nan; Spreng, R Nathan; Doerschuk, Peter C
2017-01-01
Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by correlation analysis, but also performs well in the estimation of causal information flow in the brain.
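The core idea can be sketched with a simple causal system. In the Python code below, the "causal system driven by another BOLD signal" is a small FIR filter fit by least squares; the paper's formulation is more general, and the signals here are synthetic.

```python
import numpy as np

def prediction_correlation(x, y, order=3):
    """Correlation between x and a causal prediction of x driven by y,
    using a least-squares FIR model x_hat[t] = sum_k h[k] * y[t-k]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    Y = np.column_stack([np.concatenate([np.zeros(k), y[:n - k]])
                         for k in range(order)])
    h, *_ = np.linalg.lstsq(Y, x, rcond=None)
    return np.corrcoef(x, Y @ h)[0, 1]

# Synthetic example: y drives x with a one-sample lag, not vice versa.
rng = np.random.default_rng(1)
y = rng.standard_normal(500)
x = 0.8 * np.roll(y, 1) + 0.1 * rng.standard_normal(500)
x[0] = 0.0
print(prediction_correlation(x, y))  # high: y causally predicts x
print(prediction_correlation(y, x))  # low: x does not causally predict y
```

The asymmetry between the two printed values is what lets the measure indicate the direction of information flow, which plain correlation cannot.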
48 CFR 52.216-10 - Incentive Fee.
Code of Federal Regulations, 2010 CFR
2010-10-01
... determined as provided in this contract. (b) Target cost and target fee. The target cost and target fee... (d) below. (1) Target cost, as used in this contract, means the estimated cost of this contract as initially negotiated, adjusted in accordance with paragraph (d) below. (2) Target fee, as used in this...
48 CFR 52.216-10 - Incentive Fee.
Code of Federal Regulations, 2011 CFR
2011-10-01
... determined as provided in this contract. (b) Target cost and target fee. The target cost and target fee... (d) below. (1) Target cost, as used in this contract, means the estimated cost of this contract as initially negotiated, adjusted in accordance with paragraph (d) below. (2) Target fee, as used in this...
Electrically heated particulate filter restart strategy
Gonze, Eugene V [Pinckney, MI; Ament, Frank [Troy, MI
2011-07-12
A control system that controls regeneration of a particulate filter is provided. The system generally includes a propagation module that estimates a propagation status of combustion of particulate matter in the particulate filter. A regeneration module controls current to the particulate filter to re-initiate regeneration based on the propagation status.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Select Committee on Aging.
The purpose of this Congressional study is to underscore the continuing contribution of the family in providing care to the frail and disabled elderly. This study has been developed to distill information that currently exists, to provide new data based on national estimates and to highlight both public and private sector initiatives targeted at…
NASA Technical Reports Server (NTRS)
Finley, Tom D.; Wong, Douglas T.; Tripp, John S.
1993-01-01
A newly developed technique for enhanced data reduction provides an improved procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
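A minimal version of this fit is linear least squares: doubly integrate the acceleration, interpolate the result to the sparser Hall-effect sample times, and solve for the unknown initial conditions. The Python sketch below assumes a constant calibration term c absorbing accelerometer bias; the actual flight data reduction may have used a richer parameterization.

```python
import numpy as np

def estimate_initial_conditions(t_acc, acc, t_hall, s_hall):
    """Least-squares estimate of initial displacement x0, initial velocity v0
    and a constant calibration term c, by fitting the doubly integrated
    accelerometer track to Hall-effect displacement samples:
    s(t) ~ x0 + v0*t + 0.5*c*t**2 + d(t)."""
    t_acc, acc = np.asarray(t_acc, float), np.asarray(acc, float)
    t_hall, s_hall = np.asarray(t_hall, float), np.asarray(s_hall, float)
    dt = np.diff(t_acc)
    v = np.concatenate([[0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)])
    d = np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)])
    d_i = np.interp(t_hall, t_acc, d)   # integrated track at Hall sample times
    A = np.column_stack([np.ones_like(t_hall), t_hall, 0.5 * t_hall ** 2])
    x0, v0, c = np.linalg.lstsq(A, s_hall - d_i, rcond=None)[0]
    return x0, v0, c
```

The instantaneous velocity profile then follows as v(t) = v0 + c·t plus the integrated acceleration, which is how the estimated parameters yield more than average velocities.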
Variable Selection for Support Vector Machines in Moderately High Dimensions
Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze
2015-01-01
Summary The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer to the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also involved for performance evaluation.
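The nonlinearity in question is the arctangent in the bearing measurement. The Python sketch below shows a minimal bootstrap particle filter for 2D bearings-only tracking with a near-constant-velocity model and a maneuvering observer (needed for observability); the paper's full model additionally handles target existence and clutter association, which are omitted here, and all noise parameters are illustrative.

```python
import numpy as np

def bot_particle_filter(bearings, obs_xy, dt=1.0, n=2000, sigma_b=0.02, seed=0):
    """Bootstrap PF for bearings-only tracking; state = [x, y, vx, vy]."""
    rng = np.random.default_rng(seed)
    p = np.column_stack([rng.uniform(0, 10, n), rng.uniform(0, 15, n),
                         rng.normal(0, 0.3, (n, 2))])      # diffuse prior
    for z, o in zip(bearings, obs_xy):
        p[:, :2] += p[:, 2:] * dt                          # predict
        p[:, 2:] += rng.normal(0, 0.05, (n, 2))            # process noise
        pred = np.arctan2(p[:, 1] - o[1], p[:, 0] - o[0])  # predicted bearing
        err = np.angle(np.exp(1j * (pred - z)))            # wrapped innovation
        w = np.exp(-0.5 * (err / sigma_b) ** 2) + 1e-12
        w /= w.sum()
        p = p[rng.choice(n, n, p=w)]                       # resample
    return p[:, :2].mean(axis=0)                           # posterior mean

# Synthetic scenario: maneuvering ownship, constant-velocity target.
rng = np.random.default_rng(2)
t = np.arange(40)
obs = np.column_stack([0.2 * t, np.sin(0.2 * t)])
tgt = np.column_stack([2.0 + 0.1 * t, 8.0 - 0.05 * t])
z = np.arctan2(tgt[:, 1] - obs[:, 1], tgt[:, 0] - obs[:, 0]) \
    + 0.02 * rng.standard_normal(40)
print(bot_particle_filter(z, obs))
```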
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, James M.; Prescott, Ryan; Dawson, Jericah M.
2014-11-01
Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer: Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
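The initial-rates idea reduces to fitting the early degradant-growth data at the intended storage condition and extrapolating to the specification limit. The Python sketch below assumes zero-order (linear) kinetics and a hypothetical 0.5% specification limit; the study's treatment of confidence intervals is not reproduced.

```python
import numpy as np

def shelf_life_from_initial_rate(t_days, degradant_pct, spec_limit_pct=0.5):
    """Shelf-life estimate (days) from the initial degradant-formation rate:
    fit a zero-order line to early degradant data and extrapolate to the
    specification limit."""
    slope, intercept = np.polyfit(t_days, degradant_pct, 1)
    if slope <= 0:
        return float("inf")  # no measurable growth over the study window
    return (spec_limit_pct - intercept) / slope

# Hypothetical early-time LC/MS degradant levels (% of label claim).
print(shelf_life_from_initial_rate([0, 14, 28, 56], [0.01, 0.02, 0.035, 0.06]))
```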
The Mirror Illusion’s Effects on Body State Estimation
Soliman, Tamer M.; Buxbaum, Laurel J.; Jax, Steven A.
2016-01-01
The mirror illusion uses a standard mirror to create a compelling illusion in which movements of one limb seem to be made by the other hidden limb. In this paper we adapt a motor control framework to examine which estimates of the body’s configuration are affected by the illusion. We propose that the illusion primarily alters estimates related to upcoming states of the body (the desired state and the predicted state), with smaller effects on the estimate of the body state prior to movement initiation. Support for this proposal is provided both by behavioral effects of the illusion as well as neuroimaging evidence from one neural region, V6A, that is critically involved in the mirror illusion and limb state estimation more generally. PMID:27390062
Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.
Chen, Jing; Zhang, Yi; Xue, Wei
2018-04-28
In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, in contrast with conventional fingerprint-based methods, the UILoc system can build a fingerprint database automatically, without any site survey, and the database is then applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc provides the basic location estimation through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module: a weighted fusion algorithm that combines a k-nearest neighbors (KNN) algorithm with a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc can provide accurate positioning; the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
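One plausible form of such a weighted fusion is an inverse-variance combination of the two fixes. The Python sketch below is an assumption about the weighting scheme, not UILoc's published formula; the error terms stand in for whatever quality measures the system uses.

```python
import numpy as np

def fuse_initial_fix(knn_xy, knn_err, lsq_xy, lsq_err):
    """Inverse-variance weighted fusion of a KNN fingerprint fix and a
    least-squares (e.g., iBeacon ranging) fix for the initial position."""
    w_knn, w_lsq = 1.0 / knn_err ** 2, 1.0 / lsq_err ** 2
    return (w_knn * np.asarray(knn_xy) + w_lsq * np.asarray(lsq_xy)) / (w_knn + w_lsq)

# Hypothetical fixes (meters) with their estimated 1-sigma errors.
print(fuse_initial_fix((3.0, 4.2), 1.5, (2.6, 4.8), 1.0))
```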
Nishiura, Hiroshi; Chowell, Gerardo; Safan, Muntaser; Castillo-Chavez, Carlos
2010-01-07
In many parts of the world, the exponential growth rate of infections during the initial epidemic phase has been used to make statistical inferences on the reproduction number, R, a summary measure of the transmission potential for the novel influenza A (H1N1) 2009. The growth rate at the initial stage of the epidemic in Japan led to estimates for R in the range 2.0 to 2.6, capturing the intensity of the initial outbreak among school-age children in May 2009. An updated estimate of R that takes into account the epidemic data from 29 May to 14 July is provided. An age-structured renewal process is employed to capture the age-dependent transmission dynamics, jointly estimating the reproduction number, the age-dependent susceptibility and the relative contribution of imported cases to secondary transmission. Pitfalls in estimating epidemic growth rates are identified and used for scrutinizing and re-assessing the results of our earlier estimate of R. Maximum likelihood estimates of R using the data from 29 May to 14 July ranged from 1.21 to 1.35. The next-generation matrix, based on our age-structured model, predicts that only 17.5% of the population will experience infection by the end of the first pandemic wave. Our earlier estimate of R did not fully capture the population-wide epidemic in quantifying the next-generation matrix from the estimated growth rate during the initial stage of the pandemic in Japan. In order to quantify R from the growth rate of cases, it is essential that the selected model captures the underlying transmission dynamics embedded in the data. Exploring additional epidemiological information will be useful for assessing the temporal dynamics. Although the simple concept of R is more easily grasped by the general public than that of the next-generation matrix, the matrix incorporating detailed information (e.g., age-specificity) is essential for reducing the levels of uncertainty in predictions and for assisting public health policymaking. Model-based prediction and policymaking are best described by sharing fundamental notions of heterogeneous risks of infection and death with non-experts to avoid potential confusion and/or possible misuse of modelling results.
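The link between an observed exponential growth rate and R used in analyses like this one can be made concrete with the moment-generating-function relation of Wallinga & Lipsitch (2007). The Python sketch below assumes a gamma-distributed generation interval; the parameter values are illustrative and not those of the study, which uses an age-structured renewal process.

```python
def reproduction_number(r: float, mean_gi: float, shape: float = 2.0) -> float:
    """Estimate R from the exponential growth rate r (per day), assuming a
    gamma-distributed generation interval with the given mean (days) and
    shape: R = (1 + r*mean_gi/shape)**shape (Wallinga & Lipsitch 2007)."""
    return (1.0 + r * mean_gi / shape) ** shape

# Illustrative inputs: growth rate 0.1/day, 2.8-day mean generation interval.
print(reproduction_number(0.1, 2.8))  # ~1.3
```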
Pre- and postprocessing techniques for determining goodness of computational meshes
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Westermann, T.; Bass, J. M.
1993-01-01
Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.
BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John
2000-01-01
BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2009-01-01
A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process beginning with iteratively performing a phase retrieval on the intensity data using the updated estimate of the phase of the optical system in place of the initial estimate of the phase of the optical system, and using the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
Trends in provider-initiated versus spontaneous preterm deliveries, 2004–2013
Ada, Melissa L.; Hacker, Michele R.; Golen, Toni H.; Haviland, Miriam J.; Shainker, Scott A.; Burris, Heather H.
2017-01-01
Objectives 1) To estimate the proportion of preterm deliveries at a tertiary perinatal center that were provider-initiated vs. spontaneous before and after a 2009 policy to reduce elective early-term deliveries. 2) To evaluate if shifts in type of preterm delivery varied by race/ethnicity. Methods We performed a retrospective cohort study of preterm deliveries over a 10-year period, 2004–2013, including detailed review of 929 of 5,566 preterm deliveries to designate each delivery as provider-initiated or spontaneous. We dichotomized the time period into early (2004–2009) and late (2010–2013). We used log-binomial regression to calculate adjusted risk ratios. Results Of the 46,981 deliveries, 5,566 (11.8%) were preterm, with a significant reduction in the overall incidence of preterm delivery from 12.3% to 11.2% (P=0.0003). Among the 929 preterm deliveries analyzed, there was a reduction in the proportion of provider-initiated deliveries from 48.3% to 41.8% that was not statistically significant. The proportion of provider-initiated preterm deliveries among black, but not white, women declined from 50.8% to 39.7% (adjusted RR: 0.66; 95%CI: 0.45–0.97). This coincided with a larger reduction in overall preterm deliveries among black women (16.2% to 12.8%) vs. white women (12.3% to 11.2%) (P interaction=0.038). By 2013, the incidence of preterm deliveries had decreased for both black (12.1%) and white women (11.4%) and the difference was no longer statistically significant (P=0.7). Conclusion We found a reduction in preterm deliveries after a policy targeted at reducing elective early-term deliveries in 2009 that coincided with reductions in the proportion of provider-initiated preterm deliveries, especially among black women. PMID:28749488
Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations
NASA Technical Reports Server (NTRS)
Kaiser, N.; Davis, M.
1985-01-01
It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to find a conflict between the density fluctuations derived from the Corona cloud and independent constraints. A comparison is conducted of the estimate and the density fluctuations predicted to arise in a universe dominated by cold dark matter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodkin, J.L.; Udevitz, M.S.
The authors developed an analytical model (intersection model) to estimate the exposure of sea otters (Enhydra lutris) to oil from the Exxon Valdez oil spill. The authors applied estimated and assumed exposure-dependent mortality rates to the Kenai Peninsula sea otter population to provide examples of the application of the model in estimating sea otter mortality. The intersection model requires three distinct types of data: (1) distribution, abundance, and movements of oil, (2) abundance and distribution of sea otters, and (3) sea otter mortality rates relative to oil exposure. Initial output of the model is an estimate of exposure of otters to oil. Exposure is measured in amount and duration of oil near an otter's observed location (intersections). The authors provide two examples of the model using different assumptions about the relation between exposure and mortality. Because of an apparent non-linear relation between the degree of oiling and survival of otters from rehabilitation, output from the authors' examples are likely biased.
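The final step of such a model is mechanical: apply an exposure-dependent mortality rate to each otter's exposure score. The Python sketch below uses an invented step-function mortality curve purely for illustration; the paper explores two different exposure-mortality assumptions, neither reproduced here.

```python
def expected_mortality(intersections, mortality_curve):
    """Sum an exposure-dependent mortality probability over per-otter
    exposure scores (amount x duration of oil near each observed location)."""
    return sum(mortality_curve(e) for e in intersections)

# Hypothetical step function: heavy exposure kills with probability 0.6,
# light exposure 0.1, no exposure 0.
curve = lambda e: 0.6 if e > 10 else (0.1 if e > 0 else 0.0)
print(expected_mortality([0.0, 4.0, 12.0, 25.0], curve))  # expected deaths
```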
Rodosta, T.D.; Litynski, J.T.; Plasynski, S.I.; Hickman, S.; Frailey, S.; Myer, L.
2011-01-01
The U.S. Department of Energy (DOE) is the lead Federal agency for the development and deployment of carbon sequestration technologies. As part of its mission to facilitate technology transfer and develop guidelines from lessons learned, DOE is developing a series of best practice manuals (BPMs) for carbon capture and storage (CCS). The "Site Screening, Site Selection, and Initial Characterization for Storage of CO2 in Deep Geological Formations" BPM is a compilation of best practices and includes flowchart diagrams illustrating the general decision making process for Site Screening, Site Selection, and Initial Characterization. The BPM integrates the knowledge gained from various programmatic efforts, with particular emphasis on the Characterization Phase through pilot-scale CO2 injection testing of the Validation Phase of the Regional Carbon Sequestration Partnership (RCSP) Initiative. Key geologic and surface elements that suitable candidate storage sites should possess are identified, along with example Site Screening, Site Selection, and Initial Characterization protocols for large-scale geologic storage projects located across diverse geologic and regional settings. This manual has been written as a working document, establishing a framework and methodology for proper site selection for CO2 geologic storage. This will be useful for future CO2 emitters, transporters, and storage providers. It will also be of use in informing local, regional, state, and national governmental agencies of best practices in proper sequestration site selection. Furthermore, it will educate the inquisitive general public on options and processes for geologic CO2 storage. In addition to providing best practices, the manual presents a geologic storage resource and capacity classification system. The system provides a "standard" to communicate storage and capacity estimates, uncertainty and project development risk, data guidelines and analyses for adequate site characterization, and guidelines for reporting estimates within the classification based on each project's status.
Reusable Reentry Satellite (RRS) system design study: System cost estimates document
NASA Technical Reports Server (NTRS)
1991-01-01
The Reusable Reentry Satellite (RRS) program was initiated to provide life science investigators relatively inexpensive, frequent access to space for extended periods of time with eventual satellite recovery on earth. The RRS will provide an on-orbit laboratory for research on biological and material processes, be launched from a number of expendable launch vehicles, and operate in Low-Altitude Earth Orbit (LEO) as a free-flying unmanned laboratory. SAIC's design will provide independent atmospheric reentry and soft landing in the continental U.S., orbit for a maximum of 60 days, and will sustain three flights per year for 10 years. The Reusable Reentry Vehicle (RRV) will be 3-axis stabilized with artificial gravity up to 1.5g's, be rugged and easily maintainable, and have a modular design to accommodate a satellite bus and separate modular payloads (e.g., rodent module, general biological module, ESA microgravity botany facility, general botany module). The purpose of this System Cost Estimate Document is to provide a Life Cycle Cost Estimate (LCCE) for a NASA RRS Program using SAIC's RRS design. The estimate includes development, procurement, and 10 years of operations and support (O&S) costs for NASA's RRS program. The estimate does not include costs for other agencies which may track or interface with the RRS program (e.g., Air Force tracking agencies or individual RRS experimenters involved with special payload modules (PM's)). The life cycle cost estimate extends over the 10 year operation and support period FY99-2008.
NASA Astrophysics Data System (ADS)
Goldar, A.; Arneodo, A.; Audit, B.; Argoul, F.; Rappailles, A.; Guilbaud, G.; Petryk, N.; Kahli, M.; Hyrien, O.
2016-03-01
We propose a non-local model of DNA replication that takes into account the observed uncertainty on the position and time of replication initiation in eukaryote cell populations. By picturing replication initiation as a two-state system, considering all possible transition configurations, and taking into account the chromatin’s fractal dimension, we derive an analytical expression for the rate of replication initiation. With no free parameter, this model predicts the temporal profiles of initiation rate, replication fork density, and fraction of replicated DNA, in quantitative agreement with corresponding experimental data from both S. cerevisiae and human cells, and provides a quantitative estimate of initiation site redundancy. This study shows that, to a large extent, the program that regulates the dynamics of eukaryotic DNA replication is a collective phenomenon that emerges from the stochastic nature of replication origin initiation.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D`Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
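Such a coarse initial pose can be obtained by solving the Perspective-n-Point (PnP) problem from a handful of 2D-3D feature correspondences. The sketch below is illustrative only (it is not one of the three algorithms assessed in the paper); the model points, image detections, and camera intrinsics are hypothetical placeholders.

```python
# Minimal PnP sketch with OpenCV: coarse attitude and position of a target
# from matched model/image features. All numbers are placeholders.
import numpy as np
import cv2

# 3D feature locations from the target's computer model (meters).
model_pts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [1.0, 1.0, 0.5]], dtype=np.float64)

# Matched 2D feature detections in the single image (pixels).
image_pts = np.array([[320.0, 240.0],
                      [420.0, 238.0],
                      [318.0, 140.0],
                      [322.0, 260.0],
                      [415.0, 150.0]], dtype=np.float64)

# Pinhole intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix: coarse attitude
    print("attitude (R):\n", R)
    print("position (t):", tvec.ravel())  # coarse relative position
```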
Agriculture-driven deforestation in the tropics from 1990-2015: emissions, trends and uncertainties
NASA Astrophysics Data System (ADS)
Carter, Sarah; Herold, Martin; Avitabile, Valerio; de Bruin, Sytze; De Sy, Veronique; Kooistra, Lammert; Rufino, Mariana C.
2018-01-01
Limited data exist on emissions from agriculture-driven deforestation, and available data are typically uncertain. In this paper, we provide comparable estimates of emissions from both all deforestation and agriculture-driven deforestation, with uncertainties, for 91 countries across the tropics between 1990 and 2015. Uncertainties associated with input datasets (activity data and emissions factors) were used to combine the datasets, so that the most certain datasets contribute the most. This method utilizes all the input data while minimizing the uncertainty of the emissions estimate. The uncertainty of input datasets was influenced by the quality of the data, the sample size (for sample-based datasets), and the extent to which the timeframe of the data matches the period of interest. The area of deforestation and the agriculture-driver factor (the extent to which agriculture drives deforestation) were the most uncertain components of the emissions estimates, so improving the uncertainties related to these estimates will provide the greatest reductions in the uncertainties of the emissions estimates. Over the period of the study, Latin America had the highest proportion of deforestation driven by agriculture (78%), and Africa had the lowest (62%). Latin America had the highest emissions from agriculture-driven deforestation, and these peaked at 974 ± 148 Mt CO2 yr-1 in 2000-2005. Africa saw a continuous increase in emissions between 1990 and 2015 (from 154 ± 21 to 412 ± 75 Mt CO2 yr-1), so mitigation initiatives could be prioritized there. Uncertainties for emissions from agriculture-driven deforestation are ± 62.4% (average over 1990-2015), and uncertainties were highest in Asia and lowest in Latin America. Uncertainty information is crucial for transparency when reporting, and gives credibility to related mitigation initiatives. We demonstrate that uncertainty data can also be useful when combining multiple open datasets, so we recommend that new data providers include this information.
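The combination rule described here, in which the most certain datasets contribute most and the combined uncertainty is minimized, corresponds to inverse-variance weighting under an assumption of independent errors; the paper does not spell out its exact weighting formula, so the sketch below is a generic illustration with placeholder numbers.

```python
# Hedged sketch: combining independent emissions estimates so that the most
# certain dataset contributes the most (inverse-variance weighting).
import numpy as np

estimates = np.array([950.0, 1020.0, 890.0])   # Mt CO2 yr^-1, hypothetical
sigmas    = np.array([150.0, 90.0, 200.0])     # 1-sigma uncertainties

w = 1.0 / sigmas**2                            # weight = inverse variance
combined = np.sum(w * estimates) / np.sum(w)
combined_sigma = np.sqrt(1.0 / np.sum(w))      # minimized combined uncertainty

print(f"combined: {combined:.0f} +/- {combined_sigma:.0f} Mt CO2 yr^-1")
```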
Initial Verification of GEOS-4 Aerosols Using CALIPSO and MODIS: Scene Classification
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Colarco, Peter R.; Hlavka, Dennis; Levy, Robert C.; Vaughan, Mark A.; daSilva, Arlindo
2007-01-01
A-train sensors such as MODIS and MISR provide column aerosol properties, and in the process a means of estimating aerosol type (e.g. smoke vs. dust). Correct classification of aerosol type is important because retrievals are often dependent upon selection of the right aerosol model. In addition, aerosol scene classification helps place the retrieved products in context for comparisons and analysis with aerosol transport models. The recent addition of CALIPSO to the A-train now provides a means of classifying aerosol distribution with altitude. CALIPSO level 1 products include profiles of attenuated backscatter at 532 and 1064 nm, and depolarization at 532 nm. Backscatter intensity, wavelength ratio, and depolarization provide information on the vertical profile of aerosol concentration, size, and shape. Thus similar estimates of aerosol type using MODIS or MISR are possible with CALIPSO, and the combination of data from all sensors provides a means of 3D aerosol scene classification. The NASA Goddard Earth Observing System general circulation model and data assimilation system (GEOS-4) provides global 3D aerosol mass for sulfate, sea salt, dust, and black and organic carbon. A GEOS-4 aerosol scene classification algorithm has been developed to provide estimates of aerosol mixtures along the flight track for NASA's Geoscience Laser Altimeter System (GLAS) satellite lidar. GLAS was launched in 2003 and did not have the benefit of depolarization measurements or other sensors from the A-train. Aerosol typing from GLAS data alone was not possible, and the GEOS-4 aerosol classifier has been used to identify aerosol type and improve the retrieval of GLAS products. Here we compare 3D aerosol scene classification using CALIPSO and MODIS with the GEOS-4 aerosol classifier. Dust, smoke, and pollution examples will be discussed in the context of providing an initial verification of the 3D GEOS-4 aerosol products. Prior model verification has only been attempted with surface mass comparisons and column optical depth from AERONET and MODIS.
Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations
NASA Astrophysics Data System (ADS)
Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.
2015-08-01
This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (2013, Bulletin of the Seismological Society of America, vol. 103), derived using events with moment magnitude (M) ≥ 5.0 recorded in 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using borehole recordings had the smallest standard deviation among the estimated magnitudes and produced more stable and robust magnitude estimates. This suggests that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes can provide robust and accurate magnitude estimation.
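The inversion at the core of this approach can be illustrated in a few lines: given a GMPE of the generic form log10(PGV) = c1 + c2·M - c3·log10(R), each station's observed PGV and distance yield a magnitude estimate, and averaging over stations gives the event magnitude. The coefficients and records below are hypothetical, not those of the study.

```python
# Hedged sketch of GMPE-based magnitude estimation: invert a generic GMPE
# for magnitude at each station and average. Numbers are placeholders.
import numpy as np

c1, c2, c3 = -1.5, 0.8, 1.6                 # hypothetical GMPE coefficients
# assumed form: log10(PGV) = c1 + c2*M - c3*log10(R)

R   = np.array([80.0, 120.0, 200.0])        # station distances (km)
pgv = np.array([12.0, 6.5, 2.8])            # observed PGV (cm/s)

M_station = (np.log10(pgv) - c1 + c3 * np.log10(R)) / c2
print("per-station M:", np.round(M_station, 2))
print("event estimate:", round(float(M_station.mean()), 2))
```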
USDA-ARS's Scientific Manuscript database
The USDA initiated the Conservation Effects Assessment Project (CEAP) to quantify the environmental benefits of conservation practices at regional and national scales. For this assessment, a sampling and modeling approach is used. This paper provides a technical overview of the modeling approach use...
DOT National Transportation Integrated Search
2000-10-01
This report demonstrates a unique solution to the challenge of providing accurate, timely estimates of arterial travel times to the motoring public. In particular, it discusses the lessons learned in deploying the Vehicle Tag Project in San Antonio, ...
Initial evaluation of floor cooling on lactating sows under severe acute heat stress
USDA-ARS's Scientific Manuscript database
The objectives were to evaluate an acute heat stress protocol for lactating sows and evaluate preliminary estimates of water flow rates required to cool sows. Twelve multiparous sows were provided with a cooling pad built with an aluminum plate surface, high-density polyethylene base and copper pipe...
Cross-Validation of the Computerized Adaptive Screening Test (CAST).
ERIC Educational Resources Information Center
Pliske, Rebecca M.; And Others
The Computerized Adaptive Screening Test (CAST) was developed to provide an estimate at recruiting stations of prospects' Armed Forces Qualification Test (AFQT) scores. The CAST was designed to replace the paper-and-pencil Enlistment Screening Test (EST). The initial validation study of CAST indicated that CAST predicts AFQT at least as accurately…
A screening procedure to evaluate air pollution effects on Class I wilderness areas
Douglas G. Fox; Ann M. Bartuska; James G. Byrne; Ellis Cowling; Richard Fisher; Gene E. Likens; Steven E. Lindberg; Rick A. Linthurst; Jay Messer; Dale S. Nichols
1989-01-01
This screening procedure is intended to help wilderness managers conduct "adverse impact determinations" as part of Prevention of Significant Deterioration (PSD) applications for sources that emit air pollutants that might impact Class I wildernesses. The process provides an initial estimate of susceptibility to critical loadings for sulfur, nitrogen, and...
Volt-VAR Optimization on American Electric Power Feeders in Northeast Columbus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Kevin P.; Weaver, T. F.
2012-05-10
In 2007 American Electric Power launched the gridSMART® initiative with the goals of increasing efficiency of the electricity delivery system and improving service to the end-use customers. As part of the initiative, a coordinated Volt-VAR system was deployed on eleven distribution feeders at five substations in the Northeast Columbus, Ohio, area. The goal of the coordinated Volt-VAR system was to decrease the amount of energy necessary to provide end-use customers with the same quality of service. The evaluation of the Volt-VAR system performance was conducted in two stages. The first stage was composed of simulation, analysis, and estimation, while the second stage was composed of analyzing collected field data. This panel paper will examine the analysis conducted in both stages and present the estimated improvements in system efficiency.
Journal: Efficient Hydrologic Tracer-Test Design for Tracer ...
Hydrological tracer testing is the most reliable diagnostic technique available for the determination of basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed to facilitate the design of tracer tests by root determination of the one-dimensional advection-dispersion equation (ADE) using a preset average tracer concentration, which provides a theoretical basis for an estimate of necessary tracer mass. The method uses basic measured field parameters (e.g., discharge, distance, cross-sectional area) that are combined in functional relationships that describe solute-transport processes related to flow velocity and time of travel. These initial estimates for time of travel and velocity are then applied to a hypothetical continuous stirred tank reactor (CSTR) as an analog for the hydrological-flow system to develop initial estimates for tracer concentration, tracer mass, and axial dispersion. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be necessary for descri
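A minimal sketch of the root-determination step, assuming the instantaneous-injection solution of the 1-D ADE and illustrative field parameters: solve for the tracer mass whose predicted time-averaged concentration at the sampling station equals the preset average.

```python
# Sketch: solve the 1-D ADE (instantaneous injection) for the tracer mass
# whose time-averaged concentration equals a preset average. The field
# parameters below are illustrative only.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

Q, A, x = 0.5, 2.0, 500.0        # discharge (m^3/s), area (m^2), distance (m)
v = Q / A                        # mean flow velocity (m/s)
D = 5.0                          # axial dispersion coefficient (m^2/s), assumed
C_avg_target = 10e-6             # preset average concentration (kg/m^3)

def c_ade(M, t):
    """Concentration at distance x for an instantaneous release of mass M."""
    return (M / (A * np.sqrt(4.0 * np.pi * D * t))) * \
           np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

t_peak = x / v                                        # nominal time of travel
times = np.linspace(0.5 * t_peak, 1.5 * t_peak, 200)  # tracer-passage window

def avg_minus_target(M):
    c_avg = trapezoid(c_ade(M, times), times) / (times[-1] - times[0])
    return c_avg - C_avg_target

M_needed = brentq(avg_minus_target, 1e-6, 1e3)        # root: required tracer mass
print(f"estimated tracer mass: {M_needed:.3g} kg")
```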
Tracer-Test Planning Using the Efficient Hydrologic Tracer ...
Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be
EFFICIENT HYDROLOGICAL TRACER-TEST DESIGN (EHTD ...
Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to
Prentice, Ross L.; Chlebowski, Rowan T.; Stefanick, Marcia L.; Manson, JoAnn E.; Langer, Robert D.; Pettinger, Mary; Hendrix, Susan L.; Hubbell, F. Allan; Kooperberg, Charles; Kuller, Lewis H.; Lane, Dorothy S.; McTiernan, Anne; O’Sullivan, Mary Jo; Rossouw, Jacques E.; Anderson, Garnet L.
2009-01-01
The Women’s Health Initiative randomized controlled trial found a trend (p = 0.09) toward a lower breast cancer risk among women assigned to daily 0.625-mg conjugated equine estrogens (CEEs) compared with placebo, in contrast to an observational literature that mostly reports a moderate increase in risk with estrogen-alone preparations. In 1993–2004 at 40 US clinical centers, breast cancer hazard ratio estimates for this CEE regimen were compared between the Women’s Health Initiative clinical trial and observational study toward understanding this apparent discrepancy and refining hazard ratio estimates. After control for prior use of postmenopausal hormone therapy and for confounding factors, CEE hazard ratio estimates were higher from the observational study compared with the clinical trial by 43% (p = 0.12). However, after additional control for time from menopause to first use of postmenopausal hormone therapy, the hazard ratios agreed closely between the two cohorts (p = 0.82). For women who begin use soon after menopause, combined analyses of clinical trial and observational study data do not provide clear evidence of either an overall reduction or an increase in breast cancer risk with CEEs, although hazard ratios appeared to be relatively higher among women having certain breast cancer risk factors or a low body mass index. PMID:18448442
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
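A minimal sketch of the memory mechanism, assuming an AR(1) form for the time correlation (one simple choice; the paper estimates the fluctuation statistics from high-resolution near-initial-time information):

```python
# Sketch: local truncation errors as time-correlated random variables via a
# first-order autoregressive (AR(1)) process; rho and sigma are assumed values.
import numpy as np

rng = np.random.default_rng(0)
n_steps, rho, sigma = 500, 0.9, 1e-4   # memory enters through correlation rho

eps = np.zeros(n_steps)                # local truncation error series
for k in range(1, n_steps):
    eps[k] = rho * eps[k - 1] + np.sqrt(1.0 - rho**2) * sigma * rng.normal()

cumulative = np.cumsum(eps)            # accumulated contribution to the goal error
print("std of accumulated error:", cumulative.std())
```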
Effective force control by muscle synergies
Berger, Denise J.; d'Avella, Andrea
2014-01-01
Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands and they have been shown to explain a large fraction of the variation in the muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4–5 muscle synergies. While trajectories were similar, when feedback was provided according to force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control) we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination. PMID:24860489
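The synergy-identification step maps to a few lines of code: non-negative matrix factorization of the muscle-by-time EMG matrix into 4-5 synergies. The sketch below uses scikit-learn's NMF on placeholder data rather than the study's recordings.

```python
# Hedged sketch of synergy extraction: factor a 13-muscle EMG envelope matrix
# into 4 synergies. The EMG matrix here is random placeholder data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
emg = rng.random((13, 2000))          # rectified, non-negative EMG envelopes

model = NMF(n_components=4, init='nndsvda', max_iter=500)
W = model.fit_transform(emg)          # 13 x 4: muscle weights per synergy
H = model.components_                 # 4 x 2000: synergy activations over time

reconstruction = W @ H
r2 = 1 - np.sum((emg - reconstruction)**2) / np.sum((emg - emg.mean())**2)
print(f"variance explained by 4 synergies: {r2:.2f}")
```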
Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment
NASA Astrophysics Data System (ADS)
Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.
2015-01-01
In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal here is to provide initial estimates of major nitrogen inputs of NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed in areas of limited data availability. Despite data limitations, we believe that it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.
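A minimal sketch of the NANI accounting, assuming the standard component sum (fertilizer N, agricultural N fixation, atmospheric oxidized-N deposition, and net N imports in food and feed); the values are placeholders, not the paper's estimates for India.

```python
# Hedged sketch of a country-scale NANI budget; all values hypothetical (Tg N/yr).
fertilizer_n   = 16.0   # synthetic fertilizer N applied
crop_fixation  = 4.0    # N fixation by rice, legumes, sugar cane
noy_deposition = 1.5    # atmospheric oxidized-N deposition
net_food_feed  = -0.8   # negative: net exporter of N in food/feed

nani = fertilizer_n + crop_fixation + noy_deposition + net_food_feed
print(f"NANI: {nani:.1f} Tg N/yr")
```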
DOE Office of Scientific and Technical Information (OSTI.GOV)
C.P.C. Wong; B. Merrill
2014-10-01
ITER is under construction and will begin operation in 2020. This is the first 500 MW fusion-class DT device, and since it is not going to breed tritium, it will consume most of the limited supply of tritium resources in the world. Yet, in parallel, DT fusion nuclear component testing machines will be needed to provide technical data for the design of DEMO. It becomes necessary to estimate the tritium burn-up fraction and corresponding initial tritium inventory and the doubling time of these machines for the planning of future supply and utilization of tritium. With the use of a system code, tritium burn-up fraction and initial tritium inventory for steady state DT machines can be estimated. Estimated tritium burn-up fractions of FNSF-AT, CFETR-R and ARIES-AT are in the range of 1–2.8%. Corresponding total equilibrium tritium inventories of the plasma flow and tritium processing system, and with the DCLL blanket option, are 7.6 kg, 6.1 kg, and 5.2 kg for ARIES-AT, CFETR-R and FNSF-AT, respectively.
James, Eric P.; Benjamin, Stanley G.; Marquis, Melinda
2016-10-28
A new gridded dataset for wind and solar resource estimation over the contiguous United States has been derived from hourly updated 1-h forecasts from the National Oceanic and Atmospheric Administration High-Resolution Rapid Refresh (HRRR) 3-km model composited over a three-year period (approximately 22 000 forecast model runs). The unique dataset features hourly data assimilation, and provides physically consistent wind and solar estimates for the renewable energy industry. The wind resource dataset shows strong similarity to that previously provided by a Department of Energy-funded study, and it includes estimates in southern Canada and northern Mexico. The solar resource dataset represents an initial step toward application-specific fields such as global horizontal and direct normal irradiance. This combined dataset will continue to be augmented with new forecast data from the advanced HRRR atmospheric/land-surface model.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
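For context, the conventional MR fingerprinting reconstruction that the ML framework is compared against is, at its core, dictionary matching: each voxel's signal evolution is assigned the parameter combination of the dictionary atom with maximal normalized inner product. The sketch below uses random placeholder data.

```python
# Hedged sketch of conventional MRF dictionary matching on placeholder data.
import numpy as np

rng = np.random.default_rng(2)
n_t, n_atoms, n_vox = 1000, 2000, 64     # time points, dictionary atoms, voxels

D = rng.normal(size=(n_atoms, n_t)) + 1j * rng.normal(size=(n_atoms, n_t))
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm dictionary atoms
X = rng.normal(size=(n_vox, n_t)) + 1j * rng.normal(size=(n_vox, n_t))

corr = np.abs(X.conj() @ D.T)            # |inner product|, voxels x atoms
best = corr.argmax(axis=1)               # index of best-matching atom per voxel
# each index maps to a (T1, T2, ...) combination used to build the dictionary
print("matched atoms for first 5 voxels:", best[:5])
```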
Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A
2018-06-01
Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment method can provide a robust and accurate battery model and on-line parameter estimation.
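A minimal sketch of the EKF core under a one-RC equivalent circuit model; the OCV curve, circuit parameters, noise covariances, and current profile are assumptions, and the paper's on-line parameter adaptation and proportional-integral error correction are omitted.

```python
# Hedged EKF-SOC sketch: state is [SOC, RC-branch voltage]; measurement is
# terminal voltage V = OCV(SOC) - V_rc - R0*i. All parameters are assumed.
import numpy as np

dt, Q = 1.0, 2.0 * 3600.0                 # step (s), capacity (As)
R0, R1, C1 = 0.05, 0.015, 2400.0          # assumed circuit parameters
a = np.exp(-dt / (R1 * C1))

def ocv(soc):            # crude assumed OCV(SOC) curve
    return 3.2 + 1.0 * soc

def docv(soc):           # its slope, used in the EKF Jacobian
    return 1.0

x = np.array([0.5, 0.0])                  # initial state estimate
P = np.diag([0.1, 0.01])
Qn = np.diag([1e-7, 1e-6]); Rn = 1e-3     # process / measurement noise

def ekf_step(x, P, i_k, v_meas):
    # predict: coulomb counting + RC relaxation
    F = np.array([[1.0, 0.0], [0.0, a]])
    x = np.array([x[0] - i_k * dt / Q, a * x[1] + R1 * (1 - a) * i_k])
    P = F @ P @ F.T + Qn
    # update with the terminal-voltage measurement
    H = np.array([docv(x[0]), -1.0])
    v_pred = ocv(x[0]) - x[1] - R0 * i_k
    S = H @ P @ H + Rn
    K = P @ H / S
    x = x + K * (v_meas - v_pred)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

for k in range(600):                      # constant 1 A discharge, synthetic data
    i_k = 1.0
    v_meas = ocv(0.5 - i_k * (k + 1) * dt / Q) - R0 * i_k
    x, P = ekf_step(x, P, i_k, v_meas)
print(f"estimated SOC: {x[0]:.3f}")
```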
MacLaren, David; Redman-MacLaren, Michelle; Clough, Alan
2010-07-01
To describe and discuss challenges and opportunities encountered when estimating tobacco consumption in six remote Aboriginal communities using tobacco sales data from retail outlets. We consider tobacco sales data collected from retail outlets selling tobacco to six Aboriginal communities in two similar but separate studies. Despite challenges, including that not all outlets provided data, that data were not uniform across outlets (sales and invoice data), changes in the format of data, personnel changes or management restructures, anomalies in data, and changes in community populations, tobacco consumption was estimated and returned through project newsletters and community feedback sessions. Amounts of tobacco sold were reported using graphs in newsletters and pictures of items common to the community in community feedback sessions. Despite the inherent limitations of estimating tobacco consumption using tobacco sales data, returning the amount of tobacco sold to communities provided an opportunity to discuss tobacco consumption and a focal point for individual and community action. Using this method, however, may require that large and sustained changes be observed over time to evaluate whether initiatives to reduce tobacco consumption have been effective. Estimating tobacco consumption in remote Aboriginal communities using tobacco sales data from retail outlets requires careful consideration of many logistical, social, cultural and geographic challenges.
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the eigenstate corresponding to the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
MIDURA (Minefield Detection Using Reconnaissance Assets) 1982-1983 Experimental Test Plan.
1982-04-01
3.2.4.2 Subjective Validation at the Salem ONG; 3.2.4.3 Objective Validation at Fort Huachuca; 4. Test Flights at Arrays IIa, IIb, IIIa, and IIIb. ... (2) subjective validation at the Salem ONG; (3) objective validation at Fort Huachuca. 3.2.4.1 Subjective Image Interpretation at ERIM: The initial phase... The ERIM image interpreters (II's) will provide for each image estimates of PD, PC, and PFA on a 0.00 to 1.00 scale. PD is defined as the subjective probability estimate that...
Radar investigation of asteroids
NASA Astrophysics Data System (ADS)
Ostro, S. J.
1984-07-01
The initial radar observations of the mainbelt asteroids 9 Metis, 27 Euterpe, and 60 Echo are examined. For each target, data are taken simultaneously in the same sense of circular polarization as transmitted as well as in the opposite (OC) sense. Estimates of the radar cross sections provide estimates of the circular polarization ratio and the normalized OC radar cross section. The circular polarization ratio is comparable to values measured for other large S-type asteroids and for a few much smaller, Earth-approaching objects; most of the echo is due to single-reflection backscattering from smooth surface elements.
Radar investigation of asteroids
NASA Technical Reports Server (NTRS)
Ostro, S. J.
1984-01-01
The initial radar observations of the mainbelt asteroids 9 Metis, 27 Euterpe, and 60 Echo are examined. For each target, data are taken simultaneously in the same sense of circular polarization as transmitted as well as in the opposite (OC) sense. Estimates of the radar cross sections provide estimates of the circular polarization ratio and the normalized OC radar cross section. The circular polarization ratio is comparable to values measured for other large S-type asteroids and for a few much smaller, Earth-approaching objects; most of the echo is due to single-reflection backscattering from smooth surface elements.
NASA Astrophysics Data System (ADS)
Edmonds, Larry D.; Irom, Farokh; Allen, Gregory R.
2017-08-01
A recent model provides risk estimates for the deprogramming of initially programmed floating gates via prompt charge loss produced by an ionizing radiation environment. The environment can be a mixture of electrons, protons, and heavy ions. The model requires several input parameters. This paper extends the model to include TID effects in the control circuitry by including one additional parameter. Parameters intended to produce conservative risk estimates for the Samsung 8 Gb SLC NAND flash memory are given, subject to some qualifications.
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summery, D. C.; Johnson, W. D.
1972-01-01
Techniques reported in the literature for extracting stability derivative information from flight test records are reviewed. A recent technique developed at NASA's Langley Research Center was judged the most productive to date. Results of tests of the sensitivity of this procedure to various types of data noise and to the accuracy of the initial estimates of the derivatives are reported. Computer programs for providing these initial estimates are given. The literature review also includes a discussion of flight test measuring techniques, instrumentation, and piloting techniques.
An empirical method for estimating travel times for wet volcanic mass flows
Pierson, Thomas C.
1998-01-01
Travel times for wet volcanic mass flows (debris avalanches and lahars) can be forecast as a function of distance from source when the approximate flow rate (peak discharge near the source) can be estimated beforehand. The near-source flow rate is primarily a function of initial flow volume, which should be possible to estimate to an order of magnitude on the basis of geologic, geomorphic, and hydrologic factors at a particular volcano. Least-squares best fits to plots of flow-front travel time as a function of distance from source provide predictive second-degree polynomial equations with high coefficients of determination for four broad size classes of flow based on near-source flow rate: extremely large flows (>1 000 000 m3/s), very large flows (10 000–1 000 000 m3/s), large flows (1000–10 000 m3/s), and moderate flows (100–1000 m3/s). A strong nonlinear correlation that exists between initial total flow volume and flow rate for "instantaneously" generated debris flows can be used to estimate near-source flow rates in advance. Differences in geomorphic controlling factors among different flows in the data sets have relatively little effect on the strong nonlinear correlations between travel time and distance from source. Differences in flow type may be important, especially for extremely large flows, but this could not be evaluated here. At a given distance away from a volcano, travel times can vary by approximately an order of magnitude depending on flow rate. The method can provide emergency-management officials a means for estimating time windows for evacuation of communities located in hazard zones downstream from potentially hazardous volcanoes.
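The predictive form is simple to reproduce: a least-squares second-degree polynomial of flow-front travel time versus distance from source. The lahar data points below are hypothetical placeholders, not values from the study's data sets.

```python
# Hedged sketch: second-degree polynomial fit of travel time vs. distance.
import numpy as np

distance_km = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])   # hypothetical
travel_min  = np.array([6.0, 14.0, 32.0, 75.0, 125.0, 180.0])

b2, b1, b0 = np.polyfit(distance_km, travel_min, deg=2)
print(f"t(x) = {b2:.4f} x^2 + {b1:.3f} x + {b0:.2f}  (minutes)")

# e.g., evacuation time window for a community 50 km downstream:
x = 50.0
print(f"predicted travel time at {x:.0f} km: {b2*x**2 + b1*x + b0:.0f} min")
```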
Xu, Nan; Spreng, R. Nathan; Doerschuk, Peter C.
2017-01-01
Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the “common driver” problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of the human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between ROIs based on the BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain. PMID:28559793
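A minimal sketch of the idea, assuming the causal system is a lagged linear filter fit by least squares (the paper's estimator is more elaborate): predict one signal from past values of another, then correlate the signal with its prediction.

```python
# Hedged sketch of prediction correlation on synthetic "BOLD-like" series:
# corr(y, y_hat), where y_hat is a causal prediction of y driven by x.
import numpy as np

rng = np.random.default_rng(3)
n, lags = 300, 5
x = rng.normal(size=n)
# y depends on x at lags 1..3 plus noise (a known causal ground truth)
y = np.convolve(x, [0.0, 0.5, 0.3, 0.1], mode='full')[:n] + 0.3 * rng.normal(size=n)

# Causal design matrix from lagged copies of x (lags 1..lags).
X = np.column_stack([np.concatenate([np.zeros(k), x[:-k]])
                     for k in range(1, lags + 1)])
h, *_ = np.linalg.lstsq(X, y, rcond=None)   # estimated causal filter
y_hat = X @ h

pred_corr = np.corrcoef(y, y_hat)[0, 1]     # prediction correlation, x -> y
print(f"prediction correlation (x driving y): {pred_corr:.2f}")
```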
Multitarget mixture reduction algorithm with incorporated target existence recursions
NASA Astrophysics Data System (ADS)
Ristic, Branko; Arulampalam, Sanjeev
2000-07-01
The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
Protective Effectiveness of Porous Shields Under the Influence of High-Speed Impact Loading
NASA Astrophysics Data System (ADS)
Kramshonkov, E. N.; Krainov, A. V.; Shorohov, P. V.
2016-02-01
The results of numerical simulations of a compact steel impactor striking aluminum porous shields under high-speed shock loading are presented. The porosity of the barrier is varied over a wide range while its mass stays the same, and the impactor always has the same mass. A final assessment of the barrier perforation speed as a function of its porosity and the initial impact speed is presented. The initial impact speed ranges from 1 to 10 km/s. Physical phenomena such as destruction, melting, and vaporization of the interacting objects are taken into account. The analysis of the shield porosity estimates showed that the protective effectiveness of a porous shield becomes apparent at initial impact speeds greater than 1.5 km/s and increases as the initial impact speed grows.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003
Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador
Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew
2017-01-01
The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.
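A minimal sketch of the encounter model that underlies spatial capture-recapture, assuming the commonly used half-normal detection function; the g0, sigma, and distance values are illustrative, not estimates from this study.

```python
# Hedged SCR sketch: detection probability declines with distance between a
# camera trap and an individual's activity center, p(d) = g0*exp(-d^2/(2*s^2)).
import numpy as np

g0, sigma = 0.05, 1.2                       # baseline detection, spatial scale (km)
d = np.array([0.0, 0.5, 1.0, 2.0, 4.0])     # trap-to-activity-center distances (km)
p = g0 * np.exp(-d**2 / (2 * sigma**2))
print(np.round(p, 4))                       # detection probability per occasion
```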
Marshall, John K; Bessette, Louis; Thorne, Carter; Shear, Neil H; Lebovic, Gerald; Gerega, Sebastien K; Millson, Brad; Oraichi, Driss; Gaetano, Tania; Gazel, Sandra; Latour, Martin G; Laliberté, Marie-Claude
2018-03-01
Adalimumab (ADA) is a tumor necrosis factor-α inhibitor indicated for use in various immune-mediated inflammatory diseases. Patients receiving ADA in Canada are eligible to enroll in AbbVie Care's Patient Support Program (PSP), which provides personalized services, including tailored interventions in the form of nurse-provided care coach calls (CCCs), with the goal of improving patients' experiences and outcomes. The primary objective of this study was to evaluate the impact of PSP services, including CCCs and patient characteristics, on persistence with and adherence to ADA for those patients enrolled in the PSP. A secondary objective was to estimate the effect of initial CCCs on treatment-initiation abandonment (ie, failure to initiate therapy after enrollment in the PSP). An observational retrospective cohort study was conducted. A patient linkage algorithm based on probabilistic matching was developed to link the AbbVie Care PSP database to the QuintilesIMS longitudinal pharmacy transaction database. Patients who started ADA therapy between July 2010 and August 2014 were selected, and their prescriptions were evaluated for 12 months after the date of ADA start to calculate days until drug discontinuation, that is, the end of persistence, defined as >90 days without therapy. Cox proportional hazards modeling was used for estimating hazard ratios for the association between persistence and patient characteristics and each PSP service. Adherence, measured by medication possession ratio, was calculated, and multivariate logistic regression provided adjusted odds ratios for the relationship between being adherent (medication possession ratio ≥80%) and patient characteristics and each PSP service. Treatment-initiation abandonment among patients who received an initial CCC compared with those who did not was analyzed using the χ2 test. Analysis of 10,857 linked patients yielded statistically significant differences in the hazard ratio of discontinuation and the likelihood of being adherent across multiple variables between patients who received CCCs in comparison to patients who did not. Patients receiving CCCs were found to have a 72% decreased risk for therapy discontinuation (hazard ratio = 0.282; P < 0.0001), and a greater likelihood of being adherent (odds ratio = 1.483; P < 0.0001), when compared with those patients who did not receive CCCs. The rate of treatment-initiation abandonment was significantly higher in patients who did not receive initial CCCs (P < 0.0001). Ongoing CCCs, provided by AbbVie Care PSP, were associated with greater patient persistence and adherence over the first 12 months of treatment, while initial CCCs were associated with a lower rate of treatment-initiation abandonment. Results may inform the planning of interventions aimed at improving treatment adherence and patient outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
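As one concrete piece of the analysis, the medication possession ratio can be computed from dispensing records as days supplied divided by days in the observation period; the records below are hypothetical.

```python
# Hedged sketch of the adherence metric: medication possession ratio (MPR)
# over 12 months, with adherence defined as MPR >= 80%.
days_supplied = [28, 28, 28, 28, 28, 28, 28, 28, 28, 28]  # per fill, hypothetical
period_days = 365

mpr = sum(days_supplied) / period_days
print(f"MPR = {mpr:.2f} -> adherent: {mpr >= 0.80}")
```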
Analysis of Shuttle Orbiter Reliability and Maintainability Data for Conceptual Studies
NASA Technical Reports Server (NTRS)
Morris, W. D.; White, N. H.; Ebeling, C. E.
1996-01-01
In order to provide a basis for estimating the expected support required of new systems during their conceptual design phase, Langley Research Center has recently collected Shuttle Orbiter reliability and maintainability data from the various data base sources at Kennedy Space Center. This information was analyzed to provide benchmarks, trends, and distributions to aid in the analysis of new designs. This paper presents a summation of those results and an initial interpretation of the findings.
Level 1 Tornado PRA for the High Flux Beam Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bozoki, G.E.; Conrad, C.S.
This report describes a risk analysis primarily directed at providing an estimate for the frequency of tornado induced damage to the core of the High Flux Beam Reactor (HFBR), and thus it constitutes a Level 1 Probabilistic Risk Assessment (PRA) covering tornado induced accident sequences. The basic methodology of the risk analysis was to develop a "tornado specific" plant logic model that integrates the internal random hardware failures with failures caused externally by the tornado strike and includes operator errors worsened by the tornado modified environment. The tornado hazard frequency, as well as earlier prepared structural and equipment fragility data, were used as input data to the model. To keep modeling/calculational complexity as simple as reasonable, a "bounding" type, slightly conservative, approach was applied. By a thorough screening process a single dominant initiating event was selected as a representative initiator, defined as "Tornado Induced Loss of Offsite Power." The frequency of this initiator was determined to be 6.37E-5/year. The safety response of the HFBR facility resulted in a total Conditional Core Damage Probability of 0.621. Thus, the point estimate of the HFBR's Tornado Induced Core Damage Frequency (CDF) was found to be (CDF)_Tornado = 3.96E-5/year. This value represents only 7.8% of the internal CDF and thus is considered to be a small contribution to the overall facility risk expressed in terms of total Core Damage Frequency. In addition to providing the estimate of (CDF)_Tornado, the report documents the relative importance of various tornado induced system, component, and operator failures that contribute most to (CDF)_Tornado.
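The point estimate follows from the standard PRA decomposition of core damage frequency into initiator frequency times conditional core damage probability, as a quick check:

```latex
% Assumed standard PRA decomposition, reproducing the reported point estimate:
\[
\mathrm{CDF}_{\text{tornado}}
  = f_{\text{IE}} \times \mathrm{CCDP}
  = \left(6.37\times10^{-5}/\text{yr}\right) \times 0.621
  \approx 3.96\times10^{-5}/\text{yr}.
\]
```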
Estimating ecosystem carbon stocks at Redwood National and State Parks
van Mantgem, Phillip J.; Madej, Mary Ann; Seney, Joseph; Deshais, Janelle
2013-01-01
Accounting for ecosystem carbon is increasingly important for park managers. In this case study we present our efforts to estimate carbon stocks and the effects of management on carbon stocks for Redwood National and State Parks in northern California. Using currently available information, we estimate that on average these parks’ soils contain approximately 89 tons of carbon per acre (200 Mg C per ha), while vegetation contains about 130 tons C per acre (300 Mg C per ha). Restoration activities at the parks (logging-road removal, second-growth forest management) were shown to initially reduce ecosystem carbon, but may provide for enhanced ecosystem carbon storage over the long term. We highlight currently available tools that could be used to estimate ecosystem carbon at other units of the National Park System.
Implications of Pulser Voltage Ripple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, J J
In a recent set of measurements obtained by G. Kamin, W. Manning, A. Molvik, and J. Sullivan, the voltage waveform of the diode pulser had a ripple of approximately ±1.3% of the 65 kV flattop voltage, and the beam current had a larger corresponding ripple of approximately ±8.4% of the 1.5 mA average current at the location of the second Faraday cup, approximately 1.9 m downstream from the ion source. The period of the ripple was about 1 μs. It was initially unclear whether this large current ripple was in fact a true measurement of the current or a spurious measurement of noise produced by the pulser electronics. The purpose of this note is to provide simulations which closely match the experimental results and thereby corroborate the physical nature of those measurements, and to provide predictions of the amplitude of the current ripples as they propagate to the end of the linear transport section. Additionally, analytic estimates are obtained which lend some insight into the nature of the current fluctuations, provide an estimate of the expected maximum amplitude of the current fluctuations, and conversely indicate what initial ripple in the voltage source is allowed, given a smaller acceptable tolerance on the line charge density.
Forensic individual age estimation with DNA: From initial approaches to methylation tests.
Freire-Aradas, A; Phillips, C; Lareu, M V
2017-07-01
Individual age estimation is a key factor in forensic science analysis that can provide very useful information applicable to criminal, legal, and anthropological investigations. Forensic age inference was initially based on morphological inspection or radiography and only later began to adopt molecular approaches. However, a lack of accuracy or technical problems hampered the introduction of these DNA-based methodologies in casework analysis. A turning point occurred when the epigenetic signature of DNA methylation was observed to gradually change during an individual's lifespan. In the last four years, the number of publications reporting DNA methylation age-correlated changes has gradually risen and the forensic community now has a range of age methylation tests applicable to forensic casework. Most forensic age predictor models have been developed based on blood DNA samples, but additional tissues are now also being explored. This review assesses the most widely adopted genes harboring methylation sites, detection technologies, statistical age-predictive analyses, and potential causes of variation in age estimates. Despite the need for further work to improve predictive accuracy and to establish, for a broader range of tissues, tests that analyze the most appropriate methylation sites, several forensic age predictors have now been reported that provide consistency in their prediction accuracies (predictive error of ±4 years); this makes them compelling tools with the potential to contribute key information to help guide criminal investigations. Copyright © 2017 Central Police University.
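A minimal sketch of the statistical core of such age predictors, assuming a simple multivariate linear regression of chronological age on CpG beta values (published tests use similar regression models, though with validated markers, such as sites in ELOVL2, and proper calibration); the data below are simulated.

```python
# Hedged sketch: linear regression of age on methylation beta values at a
# small set of age-correlated CpG sites. Simulated placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, n_cpg = 120, 5
age = rng.uniform(18, 80, size=n)
beta = 0.2 + 0.008 * age[:, None] + 0.03 * rng.normal(size=(n, n_cpg))
beta = beta.clip(0, 1)                    # methylation beta values lie in [0, 1]

model = LinearRegression().fit(beta, age)
pred = model.predict(beta)
mae = np.mean(np.abs(pred - age))
print(f"training MAE: {mae:.1f} years")   # reported tests reach ~+/-4 years
```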
A national evaluation of Safe Schools/Healthy Students: outcomes and influences.
Derzon, James H; Yu, Ping; Ellis, Bruce; Xiong, Sharon; Arroyo, Carmen; Mannix, Danyelle; Wells, Michael E; Hill, Gary; Rollison, Julia
2012-05-01
The Safe Schools/Healthy Students (SS/HS) Initiative has awarded over $2 billion in grants to more than 350 school districts in partnership with local mental health, law enforcement, and juvenile justice agencies. To estimate the impact of grantee characteristics, grant operations, and near-term outcomes in reducing violence and substance use, promoting mental health, and enhancing school safety, logged odds ratios (LORs) were calculated contrasting Year 3 with Baseline performance from grantee-provided data on seven outcome measures. After comparing grantee performance across outcomes and outcomes across grantees, the LORs were entered as dependent variables in a series of meta-regressions in which grantee characteristics, grant operations, and near-term outcomes were tested after controlling for pre-grant characteristics. Findings indicate that the SS/HS Initiative significantly improved most outcomes, that within-grantee performance varied greatly by outcome, and that random-effects meta-regression appreciably decreased the variance available for modeling. The approach demonstrates that the SS/HS Initiative is effective and that locally collected performance data can be used to estimate grantee success in improving youth outcomes. Copyright © 2011 Elsevier Ltd. All rights reserved.
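For illustration, a logged odds ratio contrasting Year 3 with baseline can be computed directly from outcome counts; the counts below are hypothetical, not SS/HS data.

```python
# Hedged sketch of the outcome contrast used above: a logged odds ratio (LOR)
# comparing Year 3 with baseline prevalence of an outcome.
import math

# baseline: 300 of 2000 students report the outcome; Year 3: 220 of 2000
lor = math.log((220 / (2000 - 220)) / (300 / (2000 - 300)))
print(f"logged odds ratio: {lor:.3f}")    # negative = improvement
```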
USDA-ARS?s Scientific Manuscript database
The Dietary Supplement Ingredient Database (DSID) is a federal initiative to provide analytical validation of ingredients in dietary supplements. The first release on vitamins and minerals in adult MVMs is now available. Multiple lots of >100 representative adult MVMs were chemically analyzed for ...
Paul C. Van Deusen
2002-01-01
The annual inventory system was designed under the assumption that a fixed percentage of plots would be measured annually in each State. The initial plan was to assign plots to panels to provide systematic coverage of a State. One panel would be measured each year to allow for annual updates of each State using simple estimation procedures. The reality is that...
ERIC Educational Resources Information Center
Laux, John M.; Perera-Diltz, Dilani; Smirnoff, Jennifer B.; Salyers, Kathleen M.
2005-01-01
The authors investigated the psychometric capabilities of the Face Valid Other Drugs (FVOD) scale of the Substance Abuse Subtle Screening Inventory-3 (SASSI-3; G. A. Miller, 1999). Internal consistency reliability estimates and construct validity factor analysis for 230 college students provided initial support for the psychometric properties of…
The estimation of growth dynamics for Pomacea maculata from hatchling to adult
Sutton, Karyn L.; Zhao, Lihong; Carter, Jacoby
2017-01-01
Pomacea maculata is a relatively new invasive species to the Gulf Coast region and potentially threatens local agriculture (rice) and ecosystems (aquatic vegetation). The population dynamics of P. maculata have largely been unquantified, and therefore, scientists and field-workers are ill-equipped to accurately project population sizes and the resulting impact of this species. We studied the growth of P. maculata ranging in weights from 6 to 105 g, identifying the sex of the animals when possible. Our studied population had a 4:9 male:female sex ratio. We present the findings from initial analysis of the individual growth data of males and females, from which it was apparent that females were generally larger than males and that small snails grew faster than larger snails. Since efforts to characterize the male and female growth rates from individual data do not yield statistically supported estimates, we present the estimation of several parameterized growth rate functions within a population-level mathematical model. We provide a comparison of the results using these various growth functions and discuss which best characterizes the dynamics of our observed population. We conclude that both males and females exhibit biphasic growth rates, and thus, their growth is size-dependent. Further, our results suggest that there are notable differences between males and females that are important to take into consideration in order to accurately model this species' population dynamics. Lastly, we include preliminary analyses of ongoing experiments to provide initial estimates of growth in the earliest life stages (hatchling to ≈6 g).
Application of biological simulation models in estimating feed efficiency of finishing steers.
Williams, C B
2010-07-01
Data on individual daily feed intake, BW at 28-d intervals, and carcass composition were obtained on 1,212 crossbred steers. Within-animal regressions of cumulative feed intake and BW on linear and quadratic days on feed were used to quantify initial and ending BW, average daily observed feed intake (OFI), and ADG over a 120-d finishing period. Feed intake was predicted (PFI) with 3 biological simulation models (BSM): a) Decision Evaluator for the Cattle Industry, b) Cornell Value Discovery System, and c) NRC update 2000, using observed growth and carcass data as input. Residual feed intake (RFI) was estimated using OFI (RFI(EL)) in a linear statistical model (LSM), and feed conversion ratio (FCR) was estimated as OFI/ADG (FCR(E)). Output from the BSM was used to estimate RFI by using PFI in place of OFI with the same LSM, and FCR was estimated as PFI/ADG. These estimates were evaluated against RFI(EL) and FCR(E). In a second analysis, estimates of RFI were obtained for the 3 BSM as the difference between OFI and PFI, and these estimates were evaluated against RFI(EL). The residual variation was extremely small when PFI was used in the LSM to estimate RFI, and this was mainly due to the fact that the same input variables (initial BW, days on feed, and ADG) were used in the BSM and LSM. Hence, the use of PFI obtained with BSM as a replacement for OFI in a LSM to characterize individual animals for RFI was not feasible. This conclusion was also supported by weak correlations (<0.4) between RFI(EL) and RFI obtained with PFI in the LSM, and very weak correlations (<0.13) between RFI(EL) and FCR obtained with PFI. In the second analysis, correlations (>0.89) for RFI(EL) with the other RFI estimates suggest little difference between RFI(EL) and any of these RFI estimates. In addition, results suggest that the RFI estimates calculated with PFI would be better able to identify animals with low OFI and small ADG as inefficient compared with RFI(EL). These results may be due to the fact that computer models predict performance on an individual-animal basis in contrast to a LSM, which estimates a fixed relationship for all animals; hence, the BSM may provide RFI estimates that are closer to the true biological efficiency of animals. In addition, BSM may facilitate comparisons across different data sets and provide more accurate estimates of efficiency in small data sets where errors would be greater with a LSM.
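For readers unfamiliar with the linear-statistical-model definition used above, RFI is simply the residual from regressing observed intake on performance traits. A minimal sketch of that computation (variable names and the exact regressors are illustrative assumptions, not the paper's code):

```python
import numpy as np

def residual_feed_intake(ofi, initial_bw, adg, days_on_feed):
    """RFI as the residual from regressing observed feed intake (OFI) on
    initial body weight, average daily gain, and days on feed."""
    ofi = np.asarray(ofi, dtype=float)
    X = np.column_stack([np.ones_like(ofi),
                         np.asarray(initial_bw, dtype=float),
                         np.asarray(adg, dtype=float),
                         np.asarray(days_on_feed, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, ofi, rcond=None)  # ordinary least squares
    return ofi - X @ beta   # > 0: eats more than predicted (less efficient)
```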
Conservation laws with coinciding smooth solutions but different conserved variables
NASA Astrophysics Data System (ADS)
Colombo, Rinaldo M.; Guerra, Graziano
2018-04-01
Consider two hyperbolic systems of conservation laws in one space dimension with the same eigenvalues and (right) eigenvectors. We prove that solutions to Cauchy problems with the same initial data differ at third order in the total variation of the initial datum. As a first application, relying on the classical Glimm-Lax result (Glimm and Lax in Decay of solutions of systems of nonlinear hyperbolic conservation laws. Memoirs of the American Mathematical Society, No. 101. American Mathematical Society, Providence, 1970), we obtain estimates improving those in Saint-Raymond (Arch Ration Mech Anal 155(3):171-199, 2000) on the distance between solutions to the isentropic and non-isentropic inviscid compressible Euler equations, under general equations of state. Further applications are to the general scalar case, where rather precise estimates are obtained, to an approximation by DiPerna of the p-system and to a traffic model.
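Schematically, the comparison estimate has the following flavor, writing u and v for the solutions of the two systems issuing from the same initial datum ū (constants and precise norms as in the paper; this display is our paraphrase of "third order in the total variation"):

```latex
\| u(t,\cdot) - v(t,\cdot) \|_{L^{1}} \;\le\; C \, t \, \bigl(\mathrm{TV}(\bar u)\bigr)^{3}
```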
NASA Astrophysics Data System (ADS)
Francis, G. L.; Cady-Pereira, K.; Worden, H. M.; Shephard, M.; Fu, D.
2016-12-01
A prototype optimal estimation CO retrieval framework using CrIS thermal-IR spectra is being developed and undergoing initial testing and evaluation. The goal is construction of a multi-decadal climate-quality data record, consistent with MOPITT, extending into the post-EOS/Terra era, given the planned JPSS mission schedule. The EOS/MOPITT instrument has an ongoing and unprecedented record of CO retrievals since early 2000. CrIS CO offers the potential to significantly extend the MOPITT thermal-IR retrieval record, as well as providing expanded spatial coverage. We describe the prototype CrIS CO optimal estimation retrieval system. Test CO retrievals include data for the California Central Valley and the fires near Fort McMurray, Canada. We compare our results to other satellite datasets as well as available in-situ data. Directions for future work will be discussed.
Ultimate Longitudinal Strength of Composite Ship Hulls
NASA Astrophysics Data System (ADS)
Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen
2017-01-01
A simple analytical model to estimate the longitudinal strength of composite ship hulls under buckling, material failure, and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, each idealized as a group of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination under buckling or material failure is predicted with composite beam-column theory. The effects of initial imperfection of the ship hull and eccentricity of load are included. The corresponding longitudinal strengths of the ship hull are then derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical predictions agree well with FEM results. Initial deflection of the ship hull and load eccentricity can dramatically reduce its bending capacity. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.
Assessing concentration uncertainty estimates from passive microwave sea ice products
NASA Astrophysics Data System (ADS)
Meier, W.; Brucker, L.; Miller, J. A.
2017-12-01
Sea ice concentration is an essential climate variable and passive microwave derived estimates of concentration are one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but the studies have found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate from each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, as well as the spatial and temporal variability of the passive microwave and VIIRS products.
Approaches and Data Quality for Global Precipitation Estimation
NASA Astrophysics Data System (ADS)
Huffman, G. J.; Bolvin, D. T.; Nelkin, E. J.
2015-12-01
The space and time scales on which precipitation varies are small compared to the satellite coverage that we have, so it is necessary to merge "all" of the available satellite estimates. Differing retrieval capabilities from the various satellites require inter-calibration for the satellite estimates, while "morphing", i.e., Lagrangian time interpolation, is used to lengthen the period over which time interpolation is valid. Additionally, estimates from geostationary-Earth-orbit infrared data are plentiful, but of sufficiently lower quality compared to low-Earth-orbit passive microwave estimates that they are only used when needed. Finally, monthly surface precipitation gauge data can be used to reduce bias and improve patterns of occurrence for monthly satellite data, and short-interval satellite estimates can be improved with a simple scaling such that they sum to the monthly satellite-gauge combination. The presentation will briefly consider some of the design decisions for practical computation of the Global Precipitation Measurement (GPM) mission product Integrated Multi-satellitE Retrievals for GPM (IMERG), then examine design choices that maximize value for end users. For example, data fields are provided in the output file that provide insight into the basis for the estimated precipitation, including error, sensor providing the estimate, precipitation phase (solid/liquid), and intermediate precipitation estimates. Another important initiative is successive computations for the same data date/time at longer latencies as additional data are received, which for IMERG is currently done at 6 hours, 16 hours, and 3 months after observation time. Importantly, users require long records for each latency, which runs counter to the data archiving practices at most archive sites. As well, the assignment of Digital Object Identifiers (DOIs) for near-real-time data sets (at 6 and 16 hours for IMERG) is not a settled issue.
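The "morphing" step mentioned above is, at heart, advect-then-blend: the bracketing microwave fields are displaced along cloud-motion vectors and combined with time weights. A deliberately simplified one-dimensional sketch (integer shifts and periodic boundaries are toy assumptions, not how IMERG is implemented):

```python
import numpy as np

def morph(field_t0, field_t1, shift_per_step, step, n_steps):
    """Toy Lagrangian time interpolation between two precipitation fields.
    Each field is advected along a constant, integer motion vector, and the
    two advected fields are blended with linear time weights."""
    w = step / n_steps                                  # 0 at t0, 1 at t1
    fwd = np.roll(field_t0, shift_per_step * step)      # t0 field moved forward
    bwd = np.roll(field_t1, -shift_per_step * (n_steps - step))  # t1 field moved back
    return (1.0 - w) * fwd + w * bwd
```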
Yiannoutsos, Constantin Theodore; Johnson, Leigh Francis; Boulle, Andrew; Musick, Beverly Sue; Gsponer, Thomas; Balestre, Eric; Law, Matthew; Shepherd, Bryan E; Egger, Matthias
2012-01-01
Objective To provide estimates of mortality among HIV-infected patients starting combination antiretroviral therapy. Methods We report on the death rates from 122 925 adult HIV-infected patients aged 15 years or older from East, Southern and West Africa, Asia Pacific and Latin America. We use two methods to adjust for biases in mortality estimation resulting from loss from follow-up, based on double-sampling methods applied to patient outreach (Kenya) and linkage with vital registries (South Africa), and apply these to mortality estimates in the other three regions. Age, gender and CD4 count at the initiation of therapy were the factors considered as predictors of mortality at 6, 12, 24 and >24 months after the start of treatment. Results Patient mortality was high during the first 6 months after therapy for all patient subgroups and exceeded 40 per 100 patient years among patients who started treatment at low CD4 count. This trend was seen regardless of region, demographic or disease-related risk factor. Mortality was under-reported by up to 100% or more when comparing estimates obtained from passive monitoring of patient vital status. Conclusions Despite advances in antiretroviral treatment coverage many patients start treatment at very low CD4 counts and experience significant mortality during the first 6 months after treatment initiation. Active patient tracing and linkage with vital registries are critical in adjusting estimates of mortality, particularly in low- and middle-income settings. PMID:23172344
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
Implementing microbicides in low income countries
Gengiah, Tanuja; Karim, Quarraisha Abdool
2012-01-01
The magnitude of the global HIV epidemic is driven by women in lower-income countries, specifically sub-Saharan Africa. Microbicides offer women who are unable to negotiate safe sex practices a self-initiated HIV prevention method. Of note is their potential to yield significant public health benefits even with relatively conservative efficacy, coverage, and user-adherence estimates, making microbicides an effective intervention in which to invest scarce health care resources. Existing health care delivery systems provide an excellent opportunity to identify women at highest risk for infection and also provide an access point to initiate microbicide use. Innovative quality improvement approaches, which strengthen existing sexual reproductive health services and include HIV testing and linkages to care and treatment services, provide an opportunity to lay the foundations for wide-scale provision of microbicides. The potential to enhance health outcomes in women and infants and to impact rates of new HIV infection may soon be realised. PMID:22498040
Dillon, C R; Borasi, G; Payne, A
2016-01-01
For thermal modeling to play a significant role in treatment planning, monitoring, and control of magnetic resonance-guided focused ultrasound (MRgFUS) thermal therapies, accurate knowledge of ultrasound and thermal properties is essential. This study develops a new analytical solution for the temperature change observed in MRgFUS which can be used with experimental MR temperature data to provide estimates of the ultrasound initial heating rate, Gaussian beam variance, tissue thermal diffusivity, and Pennes perfusion parameter. Simulations demonstrate that this technique provides accurate and robust property estimates that are independent of the beam size, thermal diffusivity, and perfusion levels in the presence of realistic MR noise. The technique is also demonstrated in vivo using MRgFUS heating data in rabbit back muscle. Errors in property estimates are kept less than 5% by applying a third order Taylor series approximation of the perfusion term and ensuring the ratio of the fitting time (the duration of experimental data utilized for optimization) to the perfusion time constant remains less than one. PMID:26741344
Effect of patient selection method on provider group performance estimates.
Thorpe, Carolyn T; Flood, Grace E; Kraft, Sally A; Everett, Christine M; Smith, Maureen A
2011-08-01
Performance measurement at the provider group level is increasingly advocated, but different methods for selecting patients when calculating provider group performance have received little evaluation. We compared 2 currently used methods according to characteristics of the patients selected and impact on performance estimates. We analyzed Medicare claims data for fee-for-service beneficiaries with diabetes ever seen at an academic multispeciality physician group in 2003 to 2004. We examined sample size, sociodemographics, clinical characteristics, and receipt of recommended diabetes monitoring in 2004 for the groups of patients selected using 2 methods implemented in large-scale performance initiatives: the Plurality Provider Algorithm and the Diabetes Care Home method. We examined differences among discordantly assigned patients to determine evidence for differential selection regarding these measures. Fewer patients were selected under the Diabetes Care Home method (n=3558) than the Plurality Provider Algorithm (n=4859). Compared with the Plurality Provider Algorithm, the Diabetes Care Home method preferentially selected patients who were female, not entitled because of disability, older, more likely to have hypertension, and less likely to have kidney disease and peripheral vascular disease, and had lower levels of predicted utilization. Diabetes performance was higher under the Diabetes Care Home method, with 67% versus 58% receiving ≥1 A1c test, 70% versus 65% receiving ≥1 low-density lipoprotein (LDL) test, and 38% versus 37% receiving an eye examination. The method used to select patients when calculating provider group performance may affect patient case mix and estimated performance levels, and warrants careful consideration when comparing performance estimates.
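As a concrete illustration of the first rule, plurality attribution assigns each patient to whichever group provided the largest share of qualifying visits. A minimal sketch (field names and tie-breaking are our simplifications; the implemented algorithms carry additional eligibility rules):

```python
from collections import Counter

def plurality_provider(visit_groups):
    """Attribute a patient to the provider group with the plurality of their
    qualifying visits; `visit_groups` lists one group identifier per visit.
    Ties are broken arbitrarily in this sketch."""
    group, _count = Counter(visit_groups).most_common(1)[0]
    return group

# A patient seen three times at group "A" and twice at group "B":
print(plurality_provider(["A", "B", "A", "B", "A"]))   # -> A
```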
Cherng, Sarah T; Tam, Jamie; Christine, Paul J; Meza, Rafael
2016-11-01
Electronic cigarette (e-cigarette) use has increased rapidly in recent years. Given the unknown effects of e-cigarette use on cigarette smoking behaviors, e-cigarette regulation has become the subject of considerable controversy. In the absence of longitudinal data documenting the long-term effects of e-cigarette use on smoking behavior and population smoking outcomes, computational models can guide future empirical research and provide insights into the possible effects of e-cigarette use on smoking prevalence over time. Agent-based model examining hypothetical scenarios of e-cigarette use by smoking status and e-cigarette effects on smoking initiation and smoking cessation. If e-cigarettes increase individual-level smoking cessation probabilities by 20%, the model estimates a 6% reduction in smoking prevalence by 2060 compared with baseline model (no effects) outcomes. In contrast, e-cigarette use prevalence among never smokers would have to rise dramatically from current estimates, with e-cigarettes increasing smoking initiation by more than 200% relative to baseline model estimates to achieve a corresponding 6% increase in smoking prevalence by 2060. Based on current knowledge of the patterns of e-cigarette use by smoking status and the heavy concentration of e-cigarette use among current smokers, the simulated effects of e-cigarettes on smoking cessation generate substantially larger changes to smoking prevalence compared with their effects on smoking initiation.
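A stripped-down sketch of the kind of agent-based bookkeeping described above may help make the scenarios concrete (all transition probabilities here are placeholders, not the paper's calibrated values):

```python
import random

def smoking_prevalence(n_agents=10_000, years=45, p_init=0.01, p_quit=0.02,
                       cessation_boost=1.0, initiation_boost=1.0, seed=1):
    """Toy agent-based smoking model: 'never' agents may initiate each year,
    'current' agents may quit; the boost factors mimic hypothesized
    e-cigarette effects (e.g., cessation_boost=1.2 for a +20% effect)."""
    rng = random.Random(seed)
    states = ["never"] * n_agents
    for _ in range(years):
        for i, s in enumerate(states):
            if s == "never" and rng.random() < p_init * initiation_boost:
                states[i] = "current"
            elif s == "current" and rng.random() < p_quit * cessation_boost:
                states[i] = "former"
    return states.count("current") / n_agents

print(smoking_prevalence())                      # baseline (no e-cig effects)
print(smoking_prevalence(cessation_boost=1.2))   # cessation scenario
```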
Potential costs of breast augmentation mammaplasty.
Schmitt, William P; Eichhorn, Mitchell G; Ford, Ronald D
2016-01-01
Augmentation mammaplasty is one of the most common surgical procedures performed by plastic surgeons. The aim of this study was to estimate the cost of the initial procedure and its subsequent complications, as well as project the cost of Food and Drug Administration (FDA)-recommended surveillance imaging. The potential costs to the individual patient and society were calculated. Local plastic surgeons provided billing data for the initial primary silicone augmentation and reoperative procedures. Complication rates used for the cost analysis were obtained from the Allergan Core study on silicone implants. Imaging surveillance costs were considered in the estimations. The average baseline initial cost of silicone augmentation mammaplasty was calculated at $6335. The average total cost of primary breast augmentation over the first decade for an individual patient, including complications requiring reoperation and other ancillary costs, was calculated at $8226. Each decade thereafter cost an additional $1891. Costs may exceed $15,000 over an average lifetime, and the recommended implant surveillance could cost an additional $33,750. The potential cost of a breast augmentation, which includes the costs of complications and imaging, is significantly higher than the initial cost of the procedure. Level III, economic and decision analysis study. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Beckley, B. D.; Zelensky, N. P.; Holmes, S. A.; Lemoine, F. G.; Ray, R. D.; Mitchum, G. T.; Desai, S. D.; Brown, S. T.
2010-01-01
The Jason-2 (OSTM) follow-on mission to Jason-1 provides for the continuation of global and regional mean sea level estimates along the ground-track of the initial phase of the TOPEX/Poseidon mission. During the first several months, Jason-1 and Jason-2 flew in formation separated by only 55 seconds, enabling the isolation of intermission instrument biases through direct collinear differencing of near simultaneous observations. The Jason-2 Ku-band range bias with respect to Jason-1 is estimated to be -84 +/- 9 mm, based on the orbit altitudes provided on the Geophysical Data Records. Modest improved agreement is achieved with the GSFC replacement orbits, which further enables the isolation of subtle (~1 cm) instrument-dependent range correction biases. Inter-mission bias estimates are confirmed with an independent assessment from comparisons to a 64-station tide-gauge network, also providing an estimate of the stability of the 17-year time series to be less than 0.1 mm/yr +/- 0.4 mm/yr. The global mean sea level derived from the multi-mission altimeter sea-surface height record from January 1993 through September 2009 is 3.3 +/- 0.4 mm/yr. Recent trends over the period from 2004 through 2008 are smaller and estimated to be 2.0 +/- 0.4 mm/yr.
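The formation-flying geometry makes the inter-mission bias essentially a robust average of paired, near-simultaneous sea-surface-height differences. A minimal sketch of that collinear differencing step (array names are illustrative; the paper's analysis includes editing and corrections not shown):

```python
import numpy as np

def intermission_bias(ssh_jason2_mm, ssh_jason1_mm):
    """Inter-mission bias from paired, near-simultaneous collinear
    sea-surface-height observations along the shared ground track (mm)."""
    d = np.asarray(ssh_jason2_mm, float) - np.asarray(ssh_jason1_mm, float)
    bias = np.nanmedian(d)                            # robust central value
    spread = 1.4826 * np.nanmedian(np.abs(d - bias))  # MAD-based scatter
    return bias, spread
```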
Perkins, Rebecca B; Brogly, Susan B; Adams, William G; Freund, Karen M
2012-08-01
Low rates of human papillomavirus (HPV) vaccination in low-income, minority adolescents may exacerbate racial disparities in cervical cancer incidence. Using electronic medical record data and chart abstraction, we examined correlates of HPV vaccine series initiation and completion among 7702 low-income and minority adolescents aged 11-21 receiving primary care at one of seven medical centers between May 1, 2007, and June 30, 2009. Our population included 61% African Americans, 13% Caucasians, 15% Latinas, and 11% other races; 90% receive public insurance (e.g., Medicaid). We used logistic regression to estimate the associations between vaccine initiation and completion and age, race/ethnicity, number of contacts with the healthcare system, provider documentation, and clinical site of care. Of the 41% of adolescent girls who initiated HPV vaccination, 20% completed the series. A higher proportion of girls aged 11-<13 (46%) and 13-<18 (47%) initiated vaccination than those aged 18-21 (28%). In adjusted analyses, receipt of other recommended adolescent vaccines was associated with vaccine initiation, and increased contact with the medical system was associated with both initiation and completion of the series. Conversely, provider failure to document risky health behaviors predicted nonvaccination. Manual review of a subset of unvaccinated patients' charts revealed no documentation of vaccine discussions in 67% of cases. Fewer than half of low-income and minority adolescents receiving health maintenance services initiated HPV vaccination, and only 20% completed the series. Provider failure to discuss vaccination with their patients appears to be an important contributor to nonvaccination. Future research should focus on improving both initiation and completion of HPV vaccination in high-risk adolescents.
Katriel, G.; Yaari, R.; Huppert, A.; Roll, U.; Stone, L.
2011-01-01
This paper presents new computational and modelling tools for studying the dynamics of an epidemic in its initial stages that use both available incidence time series and data describing the population's infection network structure. The work is motivated by data collected at the beginning of the H1N1 pandemic outbreak in Israel in the summer of 2009. We formulated a new discrete-time stochastic epidemic SIR (susceptible-infected-recovered) model that explicitly takes into account the disease's specific generation-time distribution and the intrinsic demographic stochasticity inherent to the infection process. Moreover, in contrast with many other modelling approaches, the model allows direct analytical derivation of estimates for the effective reproductive number (Re) and of their credible intervals, by maximum likelihood and Bayesian methods. The basic model can be extended to include age–class structure, and a maximum likelihood methodology allows us to estimate the model's next-generation matrix by combining two types of data: (i) the incidence series of each age group, and (ii) infection network data that provide partial information of ‘who-infected-who’. Unlike other approaches for estimating the next-generation matrix, the method developed here does not require making a priori assumptions about the structure of the next-generation matrix. We show, using a simulation study, that even a relatively small amount of information about the infection network greatly improves the accuracy of estimation of the next-generation matrix. The method is applied in practice to estimate the next-generation matrix from the Israeli H1N1 pandemic data. The tools developed here should be of practical importance for future investigations of epidemics during their initial stages. However, they require the availability of data which represent a random sample of the real epidemic process. We discuss the conditions under which reporting rates may or may not influence our estimated quantities and the effects of bias. PMID:21247949
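The scalar backbone of such a model is a renewal equation with Poisson demographic noise, for which the maximum-likelihood estimate of Re has a closed form. A minimal sketch (without the age-class structure or credible intervals developed in the paper):

```python
import numpy as np

def estimate_Re(incidence, gen_dist):
    """Closed-form ML estimate of Re for the discrete renewal model
    I_t ~ Poisson(Re * sum_k g_k * I_{t-k}), with g the generation-time
    distribution (g[k-1] = probability of a k-step generation interval)."""
    I = np.asarray(incidence, dtype=float)
    g = np.asarray(gen_dist, dtype=float)
    lam = np.zeros(len(I))
    for t in range(1, len(I)):
        k = min(len(g), t)
        lam[t] = np.dot(g[:k], I[t - 1::-1][:k])  # infection pressure at t
    mask = lam > 0
    return I[mask].sum() / lam[mask].sum()        # maximizes the Poisson likelihood

# Toy example: growing incidence with a 2-3 step generation interval
print(estimate_Re([2, 3, 5, 8, 13, 21], gen_dist=[0.0, 0.6, 0.4]))
```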
2014-01-01
Background Of the estimated 800,000 adults living with HIV in Zambia in 2011, roughly half were receiving antiretroviral therapy (ART). As treatment scale up continues, information on the care provided to patients after initiating ART can help guide decision-making. We estimated retention in care, the quantity of resources utilized, and costs for a retrospective cohort of adults initiating ART under routine clinical conditions in Zambia. Methods Data on resource utilization (antiretroviral [ARV] and non-ARV drugs, laboratory tests, outpatient clinic visits, and fixed resources) and retention in care were extracted from medical records for 846 patients who initiated ART at ≥15 years of age at six treatment sites between July 2007 and October 2008. Unit costs were estimated from the provider’s perspective using site- and country-level data and are reported in 2011 USD. Results Patients initiated ART at a median CD4 cell count of 145 cells/μL. Fifty-nine percent of patients initiated on a tenofovir-containing regimen, ranging from 15% to 86% depending on site. One year after ART initiation, 75% of patients were retained in care. The average cost per patient retained in care one year after ART initiation was $243 (95% CI, $194-$293), ranging from $184 (95% CI, $172-$195) to $304 (95% CI, $290-$319) depending on site. Patients retained in care one year after ART initiation received, on average, 11.4 months’ worth of ARV drugs, 1.5 CD4 tests, 1.3 blood chemistry tests, 1.4 full blood count tests, and 6.5 clinic visits with a doctor or clinical officer. At all sites, ARV drugs were the largest cost component, ranging from 38% to 84% of total costs, depending on site. Conclusions Patients initiate ART late in the course of disease progression and a large proportion drop out of care after initiation. The quantity of resources utilized and costs vary widely by site, and patients utilize a different mix of resources under routine clinical conditions than if they were receiving fully guideline-concordant care. Improving retention in care and guideline concordance, including increasing the use of tenofovir in first-line ART regimens, may lead to increases in overall treatment costs. PMID:24684772
Space transfer vehicle concepts and requirements. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study has been an eighteen-month study effort to develop and analyze concepts for a family of vehicles to evolve from an initial STV system into a Lunar Transportation System (LTS) for use with the Heavy Lift Launch Vehicle (HLLV). The study defined vehicle configurations, facility concepts, and ground and flight operations concepts. This volume reports the program cost estimates results for this portion of the study. The STV Reference Concept described within this document provides a complete LTS system that performs both cargo and piloted Lunar missions.
Detection of water vapor on Jupiter
NASA Technical Reports Server (NTRS)
Larson, H. P.; Fink, U.; Treffers, R.; Gautier, T. N., III
1975-01-01
High-altitude (12.4 km) spectroscopic observations of Jupiter at 5 microns from the NASA 91.5 cm airborne infrared telescope have revealed 14 absorptions assigned to the rotation-vibration spectrum of water vapor. Preliminary analysis indicates a mixing ratio of about one part per million for the vapor phase of water. Estimates of temperature (greater than about 300 K) and pressure (less than 20 atm) suggest observation of water deep in Jupiter's hot spots responsible for its 5 micron flux. Model-atmosphere calculations based on radiative-transfer theory may change these initial estimates and provide a better physical picture of Jupiter's atmosphere below the visible cloud tops.
CH-47F Improved Cargo Helicopter (CH-47F)
2015-12-01
Confidence Level of cost estimate for current APB: 50%. The Confidence Level of the CH-47F APB cost estimate, which was approved on April... Changes from Initial PAUC Development Estimate to PAUC Production Estimate ($M), by category (Econ, Qty, Sch, Eng, Est, Oth, Spt, Total): 10.316, -0.491, 3.003, -0.164, 2.273, 7.378... Changes from Initial APUC Development Estimate to APUC Production Estimate, SAR Baseline to Current SAR Baseline (TY $M), by the same categories.
Power Management and Distribution (PMAD) Model Development: Final Report
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
2011-01-01
Power management and distribution (PMAD) models were developed in the early 1990s to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early-1990s component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa 2005 components. The models are about ten years old now and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post 2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration-algorithm error and imaging errors in the ASL and structural scans. Therefore, estimating the mixture percentages directly from the ASL data is greatly needed. Under the assumptions that the ASL signal follows a Gaussian distribution and that tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM patterns across the perfusion image can be sufficiently visualized from the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
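As a simplified illustration of the E- and M-steps involved, the sketch below estimates tissue mixing fractions from a small neighborhood of voxel values with fixed per-tissue Gaussian parameters; the paper's MAP-EM adds a prior and the fuzzy c-means initialization, which this toy omits (the perfusion numbers in the example are hypothetical):

```python
import numpy as np

def em_tissue_fractions(values, means, variances, n_iter=50):
    """EM update of tissue mixing fractions for a neighborhood of ASL voxel
    values, holding the per-tissue Gaussian parameters fixed."""
    x = np.asarray(values, dtype=float)[:, None]     # (N, 1) observations
    mu = np.asarray(means, dtype=float)[None, :]     # (1, K) tissue means
    var = np.asarray(variances, dtype=float)[None, :]
    frac = np.full(mu.shape[1], 1.0 / mu.shape[1])   # start from uniform fractions
    for _ in range(n_iter):
        lik = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = frac * lik
        resp /= resp.sum(axis=1, keepdims=True)      # E-step: responsibilities
        frac = resp.mean(axis=0)                     # M-step: updated fractions
    return frac

# Hypothetical GM/WM perfusion means of 60 and 20 ml/100g/min:
print(em_tissue_fractions([55, 62, 30, 58, 25], means=[60, 20], variances=[100, 64]))
```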
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasi-optimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The performance characteristics of the synthesized algorithms are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses in the estimates of the radio-signal arrival time and duration due to a priori ignorance of the amplitude and initial phase are determined.
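For signals of unknown amplitude and initial phase, a standard numerical analogue of such quasi-optimal reception is to correlate against the analytic (complex) template and pick the envelope peak, which removes the phase dependence. A minimal sketch (ours, not the authors' hardware or software realization):

```python
import numpy as np
from scipy.signal import correlate, hilbert

def arrival_time(received, template, fs):
    """Quasi-optimal arrival-time estimate for a pulse of unknown amplitude
    and initial phase: correlate against the analytic template and take the
    envelope (magnitude) peak, which is phase-invariant."""
    analytic = hilbert(template)   # template + j * (its Hilbert transform)
    env = np.abs(correlate(received, analytic, mode="valid"))
    return np.argmax(env) / fs     # seconds from the start of the record
```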
Thompson, Kirsten M J; Rocca, Corinne H; Kohn, Julia E; Goodman, Suzan; Stern, Lisa; Blum, Maya; Speidel, J Joseph; Darney, Philip D; Harper, Cynthia C
2016-03-01
We determined whether public funding for contraception was associated with long-acting reversible contraceptive (LARC) use when providers received training on these methods. We evaluated the impact of a clinic training intervention and public funding on LARC use in a cluster randomized trial at 40 randomly assigned clinics across the United States (2011-2013). Twenty intervention clinics received a 4-hour training. Women aged 18 to 25 were enrolled and followed for 1 year (n = 1500: 802 intervention, 698 control). We estimated the effects of the intervention and funding sources on LARC initiation with Cox proportional hazards models with shared frailty. Women at intervention sites had higher LARC initiation than those at control (22 vs 18 per 100 person-years; adjusted hazard ratio [AHR] = 1.43; 95% confidence interval [CI] = 1.04, 1.98). Participants receiving care at clinics with Medicaid family planning expansion programs had almost twice the initiation rate as those at clinics without (25 vs 13 per 100 person-years; AHR = 2.26; 95% CI = 1.59, 3.19). LARC initiation also increased among participants with public (AHR = 1.56; 95% CI = 1.09, 2.22) but not private health insurance. Public funding and provider training substantially improve LARC access.
Quantum limits to gravity estimation with optomechanics
NASA Astrophysics Data System (ADS)
Armata, F.; Latmiral, L.; Plato, A. D. K.; Kim, M. S.
2017-10-01
We present a table-top quantum estimation protocol to measure the gravitational acceleration g by using an optomechanical cavity. In particular, we exploit the nonlinear quantum light-matter interaction between an optical field and a massive mirror acting as mechanical oscillator. The gravitational field influences the system dynamics, affecting the phase of the cavity field during the interaction. Reading out such a phase carried by the radiation leaking from the cavity, we provide an estimate of the gravitational acceleration through interference measurements. Contrary to previous studies, having adopted a fully quantum description, we are able to propose a quantum analysis establishing the ultimate bound on the estimability of the gravitational acceleration and verifying the optimality of homodyne detection. Notably, thanks to the light-matter decoupling at the measurement time, no initial cooling of the mechanical oscillator is demanded in principle.
Effects of different definitions on forest area estimation in national forest inventories in Europe
Berthold Traub; Michael Kohl; Risto Paivinen; Olaf Kugler
2000-01-01
International forest statistics such as those provided by the UN/ECE-FAO Temperate and Boreal Forest Resource Assessment (TBFRA) are typically compiled from national surveys. However, the national systems of nomenclature as well as the definition of the attributes often vary considerably. The European Commission, DG VI, initiated a study to investigate the potential of...
Code of Federal Regulations, 2011 CFR
2011-04-01
... appropriate to the nature and phase of the work and sufficient to allow comparisons to the Indian tribe or... changes such as labor, material, and transportation costs. (c) The Secretary shall provide the initial... estimates based on changed or additional information such as the following: (1) Actual subcontract bids; (2...
Using Advice from Multiple Sources to Revise and Improve Judgments
ERIC Educational Resources Information Center
Yaniv, Ilan; Milyavsky, Maxim
2007-01-01
How might people revise their opinions on the basis of multiple pieces of advice? What sort of gains could be obtained from rules for using advice? In the present studies judges first provided their initial estimates for a series of questions; next they were presented with several (2, 4, or 8) opinions from an ecological pool of advisory estimates…
Managing watersheds to change water quality: lessons learned from the NIFA-CEAP watershed studies
Deanna Osmond; M. Arabi; D. Hoag; G. Jennings; D. Line; A. Luloff; M. McFarland; D. Meals; A. Sharpley
2016-01-01
The Conservation Effects Assessment Project (CEAP) is an USDA initiative that involves the Agricultural Research Service, the National Institute for Food and Agriculture (NIFA), and the Natural Resources Conservation Service. The overall goal of CEAP is to provide scientifically credible estimates of the environmental benefits obtained from USDA conservation programs...
The Children's Services Delivery System in California: Preliminary Report--Phase I.
ERIC Educational Resources Information Center
Commission on California State Government Organization and Economy, Sacramento.
Concerned because California now annually administers an estimated $5.9 billion in funding for children's services programs, the Little Hoover Commission initiated a study on the state's provision for children's services. This report, on Phase I of the study, identifies the extent of the problem in 23 findings and provides a plan of action in 15…
Proof of Concept for an Approach to a Finer Resolution Inventory
Chris J. Cieszewski; Kim Iles; Roger C. Lowe; Michal Zasada
2005-01-01
This report presents a proof of concept for a statistical framework to develop a timely, accurate, and unbiased fiber supply assessment in the State of Georgia, U.S.A. The proposed approach is based on using various data sources and modeling techniques to calibrate satellite image-based statewide stand lists, which provide initial estimates for a State inventory on a...
Rapid assessment of wildfire damage using Forest Inventory data: A case in Georgia
Richard A. Harper; John W. Coulsten; Jeffery A. Turner
2009-01-01
The rapid assessment of damage caused by natural disasters is essential for planning the appropriate amount of disaster relief funds and public communication. Annual Forest Inventory and Analysis (FIA) data provided initial estimates of damage to timberland in a timely manner to State leaders during the 2007 Georgia Bay Complex Wildfire in southeast Georgia. FIA plots...
Covert Channels in SIP for VoIP Signalling
NASA Astrophysics Data System (ADS)
Mazurczyk, Wojciech; Szczypiorski, Krzysztof
In this paper, we evaluate available steganographic techniques for SIP (Session Initiation Protocol) that can be used for creating covert channels during the signaling phase of a VoIP (Voice over IP) call. Apart from characterizing existing steganographic methods, we provide new insights by introducing new techniques. We also estimate the amount of data that can be transferred in signalling messages for a typical IP telephony call.
And the first one now will later be last: Time-reversal in cormack-jolly-seber models
Nichols, James D.
2016-01-01
The models of Cormack, Jolly and Seber (CJS) are remarkable in providing a rich set of inferences about population survival, recruitment, abundance and even sampling probabilities from a seemingly limited data source: a matrix of 1's and 0's reflecting animal captures and recaptures at multiple sampling occasions. Survival and sampling probabilities are estimated directly in CJS models, whereas estimators for recruitment and abundance were initially obtained as derived quantities. Various investigators have noted that just as standard modeling provides direct inferences about survival, reversing the time order of capture history data permits direct modeling and inference about recruitment. Here we review the development of reverse-time modeling efforts, emphasizing the kinds of inferences and questions to which they seem well suited.
Spectral estimates of intercepted solar radiation by corn and soybean canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.
1982-01-01
Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.
Real-Time PCR Quantification Using A Variable Reaction Efficiency Model
Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.
2008-01-01
Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology, the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored, with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this peak is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios, we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
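The identity underlying any such efficiency-based quantification is F_n = F_0 · E_1 · E_2 ⋯ E_n, so a per-cycle efficiency profile immediately yields the initial signal. A minimal sketch of that back-extrapolation (the paper's contribution, the regression producing a stable efficiency profile, is not reproduced here):

```python
import numpy as np

def back_extrapolate_F0(fluorescence, efficiencies):
    """Recover the initial signal from F_n = F_0 * E_1 * ... * E_n.
    `fluorescence` holds background-subtracted F_0..F_N; `efficiencies`
    holds the per-cycle profile E_1..E_N (len(fluorescence) - 1 values)."""
    f = np.asarray(fluorescence, dtype=float)
    E = np.asarray(efficiencies, dtype=float)
    estimates = f[1:] / np.cumprod(E)   # one F_0 estimate per cycle
    return np.median(estimates)         # median damps cycle-to-cycle noise
```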
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed method obtains an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients, and then obtains a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
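The initial estimate described above follows the classic robust rule that, for Gaussian noise, the finest-scale diagonal wavelet detail coefficients have median absolute value ≈ 0.6745σ. A minimal sketch of that first stage (the subsequent curve-fitting refinement is not reproduced):

```python
import numpy as np
import pywt

def initial_sigma(image):
    """First-stage noise estimate: sigma ~= median(|HH|) / 0.6745, where HH
    holds the finest-scale diagonal detail coefficients of a 2-D DWT."""
    _cA, (_cH, _cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), "db1")
    return np.median(np.abs(cD)) / 0.6745
```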
NASA Astrophysics Data System (ADS)
Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona
2015-04-01
Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance to extend our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. Earth Observation (EO) technology provides an economically feasible solution to derive continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET and SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards the development of a better understanding of EO technology's present ability to derive operational estimates of surface fluxes and SM. Moreover, the project aims at addressing knowledge gaps related to the operational estimation of such parameters, and thus contributes towards ongoing global efforts to enhance the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the one-year duration of the project's implementation. Subsequently, we present the initial results of the work carried out herein, in particular a comprehensive and robust evaluation of the accuracy of existing operational ET and SM products across different ecosystems globally. The research outcomes of this project, once completed, will provide an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments launched recently or planned for launch in the next 1-2 years. Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation
New EVSE Analytical Tools/Models: Electric Vehicle Infrastructure Projection Tool (EVI-Pro)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Eric W; Rames, Clement L; Muratori, Matteo
This presentation addresses the fundamental question of how much charging infrastructure is needed in the United States to support PEVs. It complements ongoing EVSE initiatives by providing a comprehensive analysis of national PEV charging infrastructure requirements. The result is a quantitative estimate for a U.S. network of non-residential (public and workplace) EVSE that would be needed to support broader PEV adoption. The analysis provides guidance to public and private stakeholders who are seeking to provide nationwide charging coverage, improve the EVSE business case by maximizing station utilization, and promote effective use of private/public infrastructure investments.
Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan
2015-01-01
To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico, an HIV transmission model was built to estimate the number of infections transmitted by HIV-infected MSM (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The numbers of MSM-HIV+ on treatment and suppressed were estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a 70% reduction in new infections over 20 years; with ART initiated at 500 cells/mm3 and increased HIV testing, the reduction would be 75% over 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Steinson, M.
2016-12-01
Accurate and reliable real-time monitoring and dissemination of observations of precipitation and surface weather conditions in general is critical for a variety of research studies and applications. Surface precipitation observations provide important reference information for evaluating satellite (e.g., GPM) precipitation estimates. High-quality surface observations of precipitation, temperature, moisture, and winds are important for applications such as agriculture, water resource monitoring, health, and hazardous weather early warning systems. In many regions of the world, surface weather station and precipitation gauge networks are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, not well maintained, and have limited communications established at the site for real-time monitoring. The University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR), with support from USAID, has started an initiative to develop and deploy low-cost weather instrumentation, including tipping bucket and weighing-type precipitation gauges, in sparsely observed regions of the world. The goal is to increase the number of observations (temporally and spatially) for the evaluation of satellite precipitation estimates in data-sparse regions and to improve the quality of applications for environmental monitoring and early warning alert systems on a regional to global scale. One important aspect of this initiative is to make the data open to the community. The weather station instrumentation has been developed using innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. An initial pilot project has been implemented in the country of Zambia. This effort could be expanded to other data-sparse regions around the globe. The presentation will provide an overview and demonstration of 3D-printed weather station development and initial evaluation of observed precipitation datasets.
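As a concrete illustration of the low-cost instrumentation described above, a tipping-bucket gauge reports rainfall as one pulse per bucket tip, which a Raspberry Pi can count on a GPIO pin. The sketch below is minimal and hypothetical: the pin number and the 0.2 mm-per-tip calibration are assumptions for illustration, not values from the UCAR/NCAR project.

    import time
    import RPi.GPIO as GPIO  # available on Raspberry Pi OS

    TIP_PIN = 17         # hypothetical GPIO pin wired to the gauge's reed switch
    MM_PER_TIP = 0.2     # assumed bucket calibration (mm of rain per tip)

    tip_count = 0

    def on_tip(channel):
        # each falling edge corresponds to one bucket tip
        global tip_count
        tip_count += 1

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TIP_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    # debounce to avoid double-counting switch bounce
    GPIO.add_event_detect(TIP_PIN, GPIO.FALLING, callback=on_tip, bouncetime=250)

    try:
        while True:
            time.sleep(60)
            print(f"rainfall last minute: {tip_count * MM_PER_TIP:.1f} mm")
            tip_count = 0
    finally:
        GPIO.cleanup()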
Liu, Frank Xiaoqing; Ghaffari, Arshia; Dhatt, Harman; Kumar, Vijay; Balsera, Cristina; Wallace, Eric; Khairullah, Quresh; Lesher, Beth; Gao, Xin; Henderson, Heather; LaFleur, Paula; Delgado, Edna M.; Alvarez, Melissa M.; Hartley, Janett; McClernon, Marilyn; Walton, Surrey; Guest, Steven
2014-01-01
Abstract Patients presenting late in the course of kidney disease who require urgent initiation of dialysis have traditionally received temporary vascular catheters followed by hemodialysis. Recent changes in Medicare payment policy for dialysis in the USA incentivized the use of peritoneal dialysis (PD). Consequently, the use of more expeditious PD for late-presenting patients (urgent-start PD) has received new attention. Urgent-start PD has been shown to be safe and effective, and offers a mechanism for increasing PD utilization. However, there has been no assessment of the dialysis-related costs over the first 90 days of care. The objective of this study was to characterize the costs associated with urgent-start PD, urgent-start hemodialysis (HD), or a dual approach (urgent-start HD followed by urgent-start PD) over the first 90 days of treatment from a provider perspective. A survey of practitioners from 5 clinics known to use urgent-start PD was conducted to provide inputs for a cost model representing typical patients. Model inputs were obtained from the survey, literature review, and available cost data. Sensitivity analyses were also conducted. The estimated per patient cost over the first 90 days for urgent-start PD was $16,398. Dialysis access represented 15% of total costs, dialysis services 48%, and initial hospitalization 37%. For urgent-start HD, total per patient costs were $19,352, and dialysis access accounted for 27%, dialysis services 42%, and initial hospitalization 31%. The estimated cost for dual patients was $19,400. Urgent-start PD may offer a cost saving approach for the initiation of dialysis in eligible patients requiring an urgent-start to dialysis. PMID:25526471
Zulu, Tryphine; Heap, Marion; Sinanovic, Edina
2017-01-01
The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential and such data is scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot-project that initiated the first South African Sign Language Interpreter (SASLI) service in health-care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the projects' registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008-2013. The cost per visit was US$189.38 in 2013 whilst the estimated costs of scaling up this service ranged from US$14.2million to US$76.5million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. In the presence of Sign Language Interpreters, Deaf Sign language users utilise health care service to a similar extent as the hearing population. However, this service requires significant capital investment by government to enable access to healthcare for the Deaf.
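For readers unfamiliar with the ingredients method referenced above, the unit cost is simply the sum of all annualized input costs divided by the annual service volume. A minimal sketch with entirely hypothetical figures (the study's actual inputs are not reproduced here):

    # Illustrative ingredients-method unit cost (all figures hypothetical)
    annual_salary = 30000.0   # interpreter salary per year (US$)
    overhead_rate = 0.25      # administration and training overhead
    equipment_cost = 2000.0   # annualized capital (equivalent annual cost)
    visits_per_year = 170     # SASLI-assisted visits recorded per year

    annual_cost = annual_salary * (1 + overhead_rate) + equipment_cost
    unit_cost_per_visit = annual_cost / visits_per_year
    print(f"cost per SASLI-assisted visit: US${unit_cost_per_visit:.2f}")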
Atkins, Michael; Coutinho, Anna D; Nunna, Sasikiran; Gupte-Singh, Komal; Eaddy, Michael
2018-02-01
The utilization of healthcare services and costs among patients with cancer is often estimated by the phase of care: initial, interim, or terminal. Although their durations are often set arbitrarily, we sought to establish data-driven phases of care using joinpoint regression in an advanced melanoma population as a case example. A retrospective claims database study was conducted to assess the costs of advanced melanoma from distant metastasis diagnosis to death during January 2010-September 2014. Joinpoint regression analysis was applied to identify the best-fitting points, where statistically significant changes in the trend of average monthly costs occurred. To identify the initial phase, average monthly costs were modeled from metastasis diagnosis to death; and were modeled backward from death to metastasis diagnosis for the terminal phase. Points of monthly cost trend inflection denoted ending and starting points. The months between represented the interim phase. A total of 1,671 patients with advanced melanoma who died met the eligibility criteria. Initial phase was identified as the 5-month period starting with diagnosis of metastasis, after which there was a sharp, significant decline in monthly cost trend (monthly percent change [MPC] = -13.0%; 95% CI = -16.9% to -8.8%). Terminal phase was defined as the 5-month period before death (MPC = -14.0%; 95% CI = -17.6% to -10.2%). The claims-based algorithm may under-estimate patients due to misclassifications, and may over-estimate terminal phase costs because hospital and emergency visits were used as a death proxy. Also, recently approved therapies were not included, which may under-estimate advanced melanoma costs. In this advanced melanoma population, optimal duration of the initial and terminal phases of care was 5 months immediately after diagnosis of metastasis and before death, respectively. Joinpoint regression can be used to provide data-supported phase of cancer care durations, but should be combined with clinical judgement.
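The core of the joinpoint idea can be illustrated with a continuous two-segment linear fit whose breakpoint is chosen by grid search; full joinpoint regression adds permutation testing and multiple joinpoints, which this sketch omits. The monthly cost series below is synthetic, with numbers chosen only to mimic a sharp early decline:

    import numpy as np

    def two_segment_sse(y, k):
        """SSE of a continuous two-segment linear fit with a joinpoint at index k."""
        t = np.arange(len(y), dtype=float)
        # basis: intercept, slope, and slope change after the joinpoint (hinge)
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    # hypothetical average monthly log-costs from metastasis diagnosis onward
    rng = np.random.default_rng(0)
    months = np.arange(24)
    y = np.where(months < 5, 10.5 - 0.14 * months, 9.9 - 0.01 * months)
    y = y + rng.normal(0, 0.05, size=months.size)

    # grid-search the joinpoint that minimizes SSE
    best_k = min(range(2, 22), key=lambda k: two_segment_sse(y, k))
    print("estimated end of initial phase: month", best_k)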
Reef fish communities are spooked by scuba surveys and may take hours to recover
Cheal, Alistair J.; Miller, Ian R.
2018-01-01
Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish's functional impact. However, diver surveys may be biased because fishes may either avoid or be attracted to divers, and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from 3 to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, L; Lambert, C; Nyiri, B
Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector, which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre, and this procedure was incorporated into the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.
Ries, Kernell G.; Crouse, Michele Y.
2002-01-01
For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a Statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all the USGS available regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The Program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.
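Schematically, regional regression equations of the kind compiled in NFF take a power-law form in watershed characteristics, and the weighting techniques combine a station estimate with a regression estimate in proportion to their years of record. The sketch below uses hypothetical coefficients and station values, not the actual NFF equations:

    def regression_peak(area_mi2, a=120.0, b=0.62):
        """Hypothetical regional regression: Q_T = a * A^b (cfs)."""
        return a * area_mi2 ** b

    def weighted_estimate(q_station, n_years, q_regression, eq_years):
        """Weight a gaging-station estimate against a regression estimate by
        years of record (weighting in log space is also common; the plain
        form is shown for simplicity)."""
        return (q_station * n_years + q_regression * eq_years) / (n_years + eq_years)

    q_reg = regression_peak(52.0)                   # ungaged watershed of 52 mi^2
    q_w = weighted_estimate(4300.0, 28, q_reg, 10)  # hypothetical station values
    print(round(q_reg), round(q_w))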
NASA Astrophysics Data System (ADS)
Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.
2011-08-01
Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. Data assimilation techniques synthesize model predictions with measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently widely used: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as a minimization problem of a cost functional that measures the model-observations mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ different approximations of the covariances in order to make the computations feasible with large models. Each family of methods has both merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006 based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20 % (below 200 hPa). Four-dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5 %. Two sequential assimilation approaches (three-dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10 % between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the influence of uncertainties in ozone precursors and their emissions on the distribution of tropospheric ozone. A novel technique is introduced that projects 3-D-Variational increments back to an equivalent initial condition, which facilitates comparison with 4-D variational techniques.
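For reference, the variational approach described above minimizes the standard cost functional, written here in its usual strong-constraint 4D-Var form with generic notation (not specific to this study):

    J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
           + \tfrac{1}{2} \sum_{k=0}^{K} \bigl(H_k(x_k) - y_k\bigr)^{\mathsf T}
             R_k^{-1} \bigl(H_k(x_k) - y_k\bigr),
    \qquad x_k = M_{0 \to k}(x_0),

where x_b is the background state, B and R_k are the background- and observation-error covariances, y_k are the observations, H_k the observation operators, and M_{0->k} the model propagation; 3D-Var is the special case with a single observation time and no model propagation inside the window.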
A Study of Flexible Composites for Expandable Space Structures
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2016-01-01
Payload volume for launch vehicles is a critical constraint that impacts spacecraft design. Deployment mechanisms, such as those used for solar arrays and antennas, are approaches that have successfully accommodated this constraint, however, providing pressurized volumes that can be packaged compactly at launch and expanded in space is still a challenge. One approach that has been under development for many years is to utilize softgoods - woven fabric for straps, cloth, and with appropriate coatings, bladders - to provide this expandable pressure vessel capability. The mechanics of woven structure is complicated by a response that is nonlinear and often nonrepeatable due to the discrete nature of the woven fiber architecture. This complexity reduces engineering confidence to reliably design and certify these structures, which increases costs due to increased requirements for system testing. The present study explores flexible composite materials systems as an alternative to the heritage softgoods approach. Materials were obtained from vendors who utilize flexible composites for non-aerospace products to determine some initial physical and mechanical properties of the materials. Uniaxial mechanical testing was performed to obtain the stress-strain response of the flexible composites and the failure behavior. A failure criterion was developed from the data, and a space habitat application was used to provide an estimate of the relative performance of flexible composites compared to the heritage softgoods approach. Initial results are promising with a 25% mass savings estimated for the flexible composite solution.
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI
2012-05-29
A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
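The claimed method is essentially a supervisory control loop. The sketch below illustrates the flow of the decision only; the decay model, thresholds, and constants are invented for illustration and are not from the patent:

    # Minimal sketch of the monitoring logic (all names and constants hypothetical)
    def estimate_efficiency(eta0, temp_c, dosing_rate, hours):
        """Select a decay model for NOx conversion efficiency based on the
        monitored temperature and fuel dosing rate, then project it forward
        from the monitored initial conversion efficiency eta0."""
        if temp_c > 350.0 and dosing_rate > 2.0:
            decay = 0.04   # faster efficiency loss at high temperature/dosing
        else:
            decay = 0.01
        return eta0 * (1.0 - decay) ** hours

    def needs_regeneration(eta0, temp_c, dosing_rate, hours, threshold=0.60):
        return estimate_efficiency(eta0, temp_c, dosing_rate, hours) < threshold

    if needs_regeneration(eta0=0.85, temp_c=380.0, dosing_rate=2.5, hours=20):
        print("initiate regeneration mode")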
Estimated prevalence of hearing loss and provision of hearing services in Pacific Island nations.
Sanders, Michael; Houghton, Natasha; Dewes, Ofa; McCool, Judith; Thorne, Peter R
2015-03-01
Hearing impairment (HI) affects an estimated 538 million people worldwide, with 80% of these living in developing countries. Untreated HI in childhood may lead to developmental delay, and in adults it results in social isolation, inability to find or maintain employment, and dependency. Early intervention and support programmes can significantly reduce the negative effects of HI. This study aimed to estimate HI prevalence and identify available hearing services in several Pacific countries: the Cook Islands, Fiji, Niue, Samoa, Tokelau, and Tonga. Data were collected through literature review and correspondence with service providers. Prevalence estimates were based on census data and previously published regional estimates. Estimates indicate 20-23% of the population may have at least a mild HI, with up to 11% having a moderate impairment or worse. The estimated incidence of chronic otitis media in Pacific Island nations is 3-5 times greater than in other Australasian countries in children under 10 years old. Permanent HI from otitis media is substantially more likely in children and adults in Pacific Island nations. Several organisations and individuals provide some limited hearing services in a few Pacific Island nations, but the majority of people with HI are largely underserved. Although accurate information on HI prevalence is lacking, prevalence estimates of HI and ear disease suggest they are significant health conditions in Pacific Island nations. There is relatively little support for people with HI or ear disease in the Pacific region. An investment in initiatives to both identify and support people with hearing loss in the Pacific is necessary.
Chaudhury, Sumona; Arlington, Lauren; Brenan, Shelby; Kairuki, Allan Kaijunga; Meda, Amunga Robson; Isangula, Kahabi G; Mponzi, Victor; Bishanga, Dunstan; Thomas, Erica; Msemo, Georgina; Azayo, Mary; Molinier, Alice; Nelson, Brett D
2016-12-01
Helping Babies Breathe (HBB) has become the gold standard globally for training birth attendants in neonatal resuscitation in low-resource settings, in efforts to reduce early newborn asphyxia and mortality. The purpose of this study was to conduct a first-ever activity-based cost analysis of at-scale HBB program implementation and initial follow-up in a large region of Tanzania, and to evaluate the costs of national scale-up, as one component of a multi-method external evaluation of the implementation of HBB at scale. We used activity-based costing to examine budget expense data during the two-month implementation and follow-up of HBB in one of the target regions. Activity-cost centers included administrative, initial training (including resuscitation equipment), and follow-up training expenses. Sensitivity analysis was utilized to project cost scenarios incurred to achieve countrywide expansion of the program across all mainland regions of Tanzania and to model costs of program maintenance over one and five years following initiation. Total costs for the Mbeya Region were $202,240, with the highest proportion due to initial training and equipment (45.2%), followed by central program administration (37.2%), and follow-up visits (17.6%). Within Mbeya, 49 training sessions were undertaken, involving the training of 1,341 health providers from 336 health facilities in eight districts. To similarly expand the HBB program across the 25 regions of mainland Tanzania, the total economic cost is projected to be around $4,000,000 (around $600 per facility). Following sensitivity analyses, the estimated total for initial rollout across all of Tanzania lies between $2,934,793 and $4,309,595. In order to maintain the program nationally under the current model, it is estimated it would cost $2,019,115 for a further one year and $5,640,794 for a further five years of ongoing program support. HBB implementation is a relatively low-cost intervention with potential for high impact on perinatal mortality in resource-poor settings. It is shown here that nationwide expansion of this program across the range of health provision levels and regions of Tanzania would be feasible. This study provides policymakers and investors with the relevant cost estimation for national rollout of this potentially life-saving neonatal intervention.
Effect of cigarette prices on smoking initiation and cessation in China: a duration analysis.
Kostova, Deliana; Husain, Muhammad J; Chaloupka, Frank J
2016-09-01
China is the world's largest producer and consumer of cigarettes. The status of tobacco as both a contributor to China's economy and a liability for the health of its population may complicate the use of taxes for addressing smoking in the country. Understanding how cigarette prices affect transitions in smoking behaviour in China can increase understanding of how China's high smoking rates can be influenced by tax policy. In order to estimate the effect of cigarette prices on smoking initiation and cessation in China, we construct pseudo-longitudinal samples for duration analysis using data from the Global Adult Tobacco Survey China 2010. We use the historical variation in prices representative of 4 China regions over a 20-year period to identify the average price effect on the hazards of initiation and cessation while controlling for unobserved fixed and time-varying region characteristics. We find that initiation rates fall in response to higher prices (with a price elasticity of initiation estimated at -0.95 for men and -1.07 overall). The effect of prices on smoking in China is likely to occur through averting initiation over time. At the population level, cessation behaviour may be less responsive to price increases, as the wide range of cigarette prices in China may provide relatively high opportunity for switching to lower-priced brands.
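Discrete-time duration analysis of initiation, as used here, can be approximated by a logit model on person-period data in which each person contributes one row per year at risk until initiation. A minimal sketch on synthetic data (the simulated price effect of -0.9 is an assumption for the demonstration, not the paper's estimate):

    import numpy as np
    import statsmodels.api as sm

    # synthetic person-period data: one row per person-year until initiation
    rng = np.random.default_rng(1)
    rows = []
    for person in range(2000):
        for age in range(12, 30):
            price = rng.uniform(0.5, 3.0)        # real cigarette price that year
            logit = -4.0 - 0.9 * np.log(price)   # assumed true price effect
            started = rng.random() < 1 / (1 + np.exp(-logit))
            rows.append((np.log(price), age, started))
            if started:
                break                            # exits the risk set once a smoker

    X = np.array([(1.0, lp, age) for lp, age, _ in rows])
    y = np.array([s for *_, s in rows], dtype=float)
    fit = sm.Logit(y, X).fit(disp=False)
    print("estimated log-price coefficient:", round(fit.params[1], 2))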
Thein, Hla-Hla; Jembere, Nathaniel; Thavorn, Kednapa; Chan, Kelvin K W; Coyte, Peter C; de Oliveira, Claire; Hur, Chin; Earle, Craig C
2018-06-27
Esophageal adenocarcinoma (EAC) incidence is increasing rapidly. Esophageal cancer has the second lowest 5-year survival rate of people diagnosed with cancer in Canada. Given the poor survival and the potential for further increases in incidence, phase-specific cost estimates constitute an important input for economic evaluation of prevention, screening, and treatment interventions. The study aims to estimate phase-specific net direct medical costs of care attributable to EAC, costs stratified by cancer stage and treatment, and predictors of total net costs of care for EAC. A population-based retrospective cohort study was conducted using Ontario Cancer Registry-linked administrative health data from 2003 to 2011. The mean net costs of EAC care per 30 patient-days (2016 CAD) were estimated from the payer perspective using phase of care approach and generalized estimating equations. Predictors of net cost by phase of care were based on a generalized estimating equations model with a logarithmic link and gamma distribution adjusting for sociodemographic and clinical factors. The mean net costs of EAC care per 30 patient-days were $1016 (95% CI, $955-$1078) in the initial phase, $669 (95% CI, $594-$743) in the continuing care phase, and $8678 (95% CI, $8217-$9139) in the terminal phase. Overall, stage IV at diagnosis and surgery plus radiotherapy for EAC incurred the highest cost, particularly in the terminal phase. Strong predictors of higher net costs were receipt of chemotherapy plus radiotherapy, surgery plus chemotherapy, radiotherapy alone, surgery alone, and chemotherapy alone in the initial and continuing care phases, stage III-IV disease and patients diagnosed with EAC later in a calendar year (2007-2011) in the initial and terminal phases, comorbidity in the continuing care phase, and older age at diagnosis (70-74 years), and geographic region in the terminal phase. Costs of care vary by phase of care, stage at diagnosis, and type of treatment for EAC. These cost estimates provide information to guide future resource allocation decisions, and clinical and policy interventions to reduce the burden of EAC.
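The paper's cost model, a GEE with a gamma distribution and logarithmic link clustered on patient, can be sketched with statsmodels on synthetic data. The data-generating numbers below are loosely inspired by the reported per-30-day phase costs but are otherwise hypothetical:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # hypothetical per-30-day cost records, repeated within patients
    rng = np.random.default_rng(2)
    n = 300
    df = pd.DataFrame({
        "patient": np.repeat(np.arange(n), 3),
        "phase": np.tile(["initial", "continuing", "terminal"], n),
        "stage4": np.repeat(rng.integers(0, 2, n), 3),
    })
    base = df["phase"].map({"initial": 1000, "continuing": 670, "terminal": 8700})
    df["cost"] = rng.gamma(shape=2.0, scale=(base * (1 + 0.3 * df["stage4"])) / 2.0)

    # GEE with gamma family and log link, clustering on patient
    model = smf.gee("cost ~ C(phase) + stage4", groups="patient", data=df,
                    family=sm.families.Gamma(link=sm.families.links.Log()))
    print(model.fit().summary())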
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
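Under a linear mixed model, past and future (log) fit factors are jointly normal, so the predicted future fit factor given initial tests follows the standard conditional-normal formula. A minimal numeric sketch (the mean vector and covariance matrix below are hypothetical, not the study's estimates):

    import numpy as np

    # joint distribution of log10 fit factors: two initial tests and one future test
    mu = np.array([2.2, 2.2, 2.2])
    Sigma = np.array([[0.30, 0.22, 0.15],
                      [0.22, 0.30, 0.15],
                      [0.15, 0.15, 0.30]])

    # condition the future test (index 2) on the two observed initial results
    x1 = np.array([2.4, 2.0])
    S11, S12 = Sigma[:2, :2], Sigma[:2, 2]
    S21, S22 = Sigma[2, :2], Sigma[2, 2]
    cond_mean = mu[2] + S21 @ np.linalg.solve(S11, x1 - mu[:2])
    cond_var = S22 - S21 @ np.linalg.solve(S11, S12)
    print(f"future log10 fit factor ~ N({cond_mean:.2f}, {cond_var:.3f})")

A passing criterion for the initial test can then be set so that the conditional probability of an adequate future fit factor exceeds a chosen level.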
Wang, Xin; Wu, Linhui; Yi, Xi; Zhang, Yanqi; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2015-01-01
Due to both the physiological and morphological differences in the vascularization between healthy and diseased tissues, pharmacokinetic diffuse fluorescence tomography (DFT) can provide contrast-enhanced and comprehensive information for tumor diagnosis and staging. In this regime, the extended Kalman filtering (EKF) based method shows numerous advantages, including accurate modeling, online estimation of multiple parameters, and universal applicability to any optical fluorophore. Nevertheless, the performance of the conventional EKF hinges on exact prior knowledge of the initial values, which is generally inaccessible. To address this issue, an adaptive-EKF scheme is proposed based on a two-compartmental model, which utilizes a variable forgetting-factor to compensate for the inaccuracy of the initial states and to emphasize the effect of the current data. It is demonstrated, using two-dimensional simulation investigations on a circular domain, that the proposed adaptive-EKF obtains estimates of the pharmacokinetic rates preferable to those of the conventional-EKF and the enhanced-EKF in terms of quantitativeness, noise robustness, and initialization independence. Further three-dimensional numerical experiments on a digital mouse model validate the efficacy of the method as applied in realistic biological systems.
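The role of the forgetting factor can be seen in a single EKF measurement update: dividing the predicted covariance by a factor lam <= 1 discounts older information, so inaccurate initial states are gradually forgotten. The generic sketch below uses a fixed lam for simplicity; the paper's scheme makes it variable, and the function names are illustrative only:

    import numpy as np

    def ekf_step_with_forgetting(x, P, z, h, H, R, lam=0.95):
        """One EKF measurement update with a forgetting factor.
        Inflating the prior covariance (P / lam) keeps the filter responsive
        to current data and tolerant of poor initial values."""
        P = P / lam                      # discount old information
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (z - h(x))           # state (e.g., pharmacokinetic rates) update
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # tiny demo with a linear observation of the sum of two rates
    x0, P0 = np.array([0.1, 0.05]), np.eye(2)
    H = np.array([[1.0, 1.0]])
    x1, P1 = ekf_step_with_forgetting(x0, P0, z=np.array([0.3]),
                                      h=lambda x: H @ x, H=H, R=np.eye(1) * 0.01)
    print(np.round(x1, 3))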
VizieR Online Data Catalog: Post-merger cluster A2255 membership (Tyler+, 2014)
NASA Astrophysics Data System (ADS)
Tyler, K. D.; Bai, L.; Rieke, G. H.
2017-04-01
A2255 was initially chosen from the Popesso et al. (2007, J/A+A/461/397) sample because it is a large cluster with complete SDSS photometric and spectroscopic coverage out to ~3 r200. It has incomplete areal spectroscopic coverage from 3 r200<~rproj<~5 r200 - about half of this region is covered. The SDSS photometric survey provides a uniform data set to study galaxy properties in the cluster. The model magnitudes are the linear combinations of best-fit exponential and de Vaucouleurs profiles and are recommended as the best estimates of magnitude by SDSS. As such, we use the model magnitudes (except where explicitly stated otherwise) and correct them for Galactic extinction (O'Donnell, 1994ApJ...422..158O). We used these photometric data to estimate galactic stellar masses with the SDSS_KCORRECT routine within KCORRECT (v. 4.2; Blanton & Roweis 2007AJ....133..734B). KCORRECT uses different cosmological values and initial mass function, so we corrected the original stellar mass output to the cosmology and initial mass function (Kroupa, 2001MNRAS.322..231K) adopted in this paper. (1 data file).
Wisconsin street tree assessment, 2002-2003
Anne Buckelew Cumming; Daniel B. Twardus; Robert Hoehn; David J. Nowak; Manfred Mielke; Richard Rideout; Helen Butalla; Patricia Lebow
2008-01-01
A pilot study to assess the structure, function, and health of Wisconsin's street trees was initiated in 2002. Almost 900 plots were established in Wisconsin's urban areas. Table 1 provides an overview of plot-level data, population estimates, and a calculated monetary value for Wisconsin's street trees. Wisconsin has mid-sized street trees, dominated by Norway maple (...
ERIC Educational Resources Information Center
Eignor, Daniel R.; Douglass, James B.
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
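Since this record is truncated, the specific models compared are not listed here, but the item information curves it refers to are standard in IRT. For the two-parameter logistic (2PL) model, for example,

    P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
    I_i(\theta) = a_i^2 \, P_i(\theta)\,\bigl(1 - P_i(\theta)\bigr),

so items with high discrimination a_i contribute the most information near their difficulty b_i, and test information is the sum of the selected items' information curves; this is the basis for information-driven item selection.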
Moving Out: Transition to Non-Residence among Resident Fathers in the United States, 1968-1997
ERIC Educational Resources Information Center
Gupta, Sanjiv; Smock, Pamela J.; Manning, Wendy D.
2004-01-01
This article provides the first individual-level estimates of the change over time in the probability of non-residence for initially resident fathers in the United States. Drawing on the 1968-1997 waves of the Panel Study of Income Dynamics, we used discrete-time event history models to compute the probabilities of non-residence for six 5-year…
Bled, F.; Royle, J. Andrew; Cam, E.
2011-01-01
Invasive species are regularly cited as the second greatest threat to biodiversity. To apply a relevant response to the potential consequences associated with invasions (e.g., emphasize management efforts to prevent new colonization or to eradicate the species in places where it has already settled), it is essential to understand invasion mechanisms and dynamics. Quantifying and understanding what influences rates of spatial spread is a key research area for invasion theory. In this paper, we develop a model to account for occupancy dynamics of an invasive species. Our model extends existing models to accommodate several elements of invasive processes; we chose the framework of hierarchical modeling to assess site occupancy status during an invasion. First, we explicitly accounted for spatial structure and how distance among sites and position relative to one another affect the invasion spread. In particular, we accounted for the possibility of directional propagation and provided a way of estimating the direction of this possible spread. Second, we considered the influence of local density on site occupancy. Third, we split the colonization process into two subprocesses, initial colonization and recolonization, which may be ground-breaking because these subprocesses may exhibit different relationships with environmental variations (such as density variation) or colonization history (e.g., initial colonization might facilitate further colonization events). Finally, our model incorporates imperfection in detection, which might be a source of substantial bias in estimating population parameters. We focused on the case of the Eurasian Collared-Dove (Streptopelia decaocto) and its invasion of the United States since its introduction in the early 1980s, using data from the North American Breeding Bird Survey (BBS). The Eurasian Collared-Dove is one of the most successful invasive species, at least among terrestrial vertebrates. Our model provided an estimate of the spread direction consistent with empirical observations. Site persistence probability exhibits a quadratic response to density. We also succeeded in detecting differences in the relationship between density and initial colonization vs. recolonization probabilities. We provide a map of sites that may be colonized in the future as an example of a possible practical application of our work.
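The dynamics described above (distinct initial-colonization and recolonization probabilities, density-dependent persistence, and imperfect detection) can be simulated compactly. The sketch below is a forward simulation with invented parameter values, not the paper's fitted hierarchical model, and it omits the spatial/directional component:

    import numpy as np

    rng = np.random.default_rng(3)
    n_sites, n_years = 200, 15
    gamma0, gamma1 = 0.05, 0.20        # initial colonization vs. recolonization
    z = np.zeros((n_years, n_sites), dtype=bool)   # latent (true) occupancy
    ever = np.zeros(n_sites, dtype=bool)           # occupied at least once before

    def phi(density):
        """Persistence probability with a quadratic density response (logit scale)."""
        eta = -0.5 + 6.0 * density - 4.0 * density ** 2
        return 1.0 / (1.0 + np.exp(-eta))

    z[0, rng.choice(n_sites, 5, replace=False)] = True   # a few introduction sites
    for t in range(1, n_years):
        density = z[t - 1].mean()
        ever |= z[t - 1]
        p_col = np.where(ever, gamma1, gamma0)   # split colonization subprocesses
        p_occ = np.where(z[t - 1], phi(density), p_col)
        z[t] = rng.random(n_sites) < p_occ

    y = z & (rng.random(z.shape) < 0.7)          # imperfect detection (p = 0.7)
    print("true vs. observed occupancy, final year:", z[-1].mean(), y[-1].mean())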
One-way quantum computing in superconducting circuits
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.
2018-03-01
We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with more than 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
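The resource-state step can be made concrete with a tiny state-vector simulation: prepare |+> on every qubit, then apply controlled-phase (CZ) gates between neighbors. A minimal sketch for a three-qubit linear cluster state (illustrative only; the superconducting-circuit implementation details are abstracted away):

    import numpy as np

    n = 3
    plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)          # |+>^{(x)n}

    def apply_cz(state, i, j, n):
        """CZ gate: flip the sign of basis states where qubits i and j are both 1."""
        phases = np.ones(2 ** n)
        for k in range(2 ** n):
            if (k >> (n - 1 - i)) & 1 and (k >> (n - 1 - j)) & 1:
                phases[k] = -1.0
        return phases * state

    for q in range(n - 1):
        state = apply_cz(state, q, q + 1, n)  # entangle nearest neighbors

    print(np.round(state * 2 ** (n / 2), 3))  # amplitudes up to normalization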
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
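The paper's spline-based approximation scheme is not reproduced here, but the least-squares fit-to-data criterion it minimizes can be illustrated with a cruder substitute: an Euler discretization of the delay equation x'(t) = -x(t - tau) with an unknown constant delay, fitted to noisy data (all values synthetic):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def simulate(tau, t, x0=1.0):
        """Euler solution of x'(t) = -x(t - tau) with constant history x0."""
        dt = t[1] - t[0]
        x = np.empty_like(t)
        x[0] = x0
        for k in range(1, len(t)):
            t_past = t[k - 1] - tau
            x_past = x0 if t_past <= 0 else np.interp(t_past, t[:k], x[:k])
            x[k] = x[k - 1] - dt * x_past
        return x

    t = np.linspace(0.0, 10.0, 501)
    rng = np.random.default_rng(4)
    data = simulate(0.7, t) + rng.normal(0, 0.01, t.size)   # truth: tau = 0.7

    # least-squares fit-to-data criterion over the unknown delay
    sse = lambda tau: float(np.sum((simulate(tau, t) - data) ** 2))
    res = minimize_scalar(sse, bounds=(0.1, 2.0), method="bounded")
    print("estimated delay:", round(res.x, 3))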
Kalman Filters for Time Delay of Arrival-Based Source Localization
NASA Astrophysics Data System (ADS)
Klee, Ulrich; Gehrig, Tobias; McDonough, John
2006-12-01
In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
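The key ingredient is an observation function that maps a candidate source position to predicted TDOAs, whose Jacobian feeds the extended Kalman filter update. A minimal 2-D sketch with four microphones (geometry, noise levels, and the random-walk motion model standing in for speaker dynamics are all hypothetical):

    import numpy as np

    C = 343.0  # speed of sound (m/s)

    def tdoa(x, mics, pairs):
        """Predicted TDOAs for microphone pairs given source position x."""
        d = np.linalg.norm(mics - x, axis=1)
        return np.array([(d[i] - d[j]) / C for i, j in pairs])

    def jacobian(x, mics, pairs):
        d = np.linalg.norm(mics - x, axis=1)
        unit = (x - mics) / d[:, None]        # gradient of each range w.r.t. x
        return np.array([(unit[i] - unit[j]) / C for i, j in pairs])

    mics = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], dtype=float)
    pairs = [(0, 1), (0, 2), (0, 3)]
    x, P = np.array([2.0, 1.0]), np.eye(2)                 # initial guess
    Q, R = np.eye(2) * 0.01, np.eye(len(pairs)) * (1e-4) ** 2

    z = tdoa(np.array([2.5, 1.8]), mics, pairs)   # noiseless observation for demo
    P = P + Q                                     # predict (random-walk model)
    H = jacobian(x, mics, pairs)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - tdoa(x, mics, pairs))        # update position directly from TDOAs
    P = (np.eye(2) - K @ H) @ P
    print("updated position estimate:", np.round(x, 2))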
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections.
Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial
NASA Astrophysics Data System (ADS)
Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.
2011-03-01
Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
Filtering observations without the initial guess
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.
2017-12-01
Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. These approximation approaches are also described briefly in the presentation.
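The measurement step of the information filter makes the "no initial guess" property transparent: with Y = P^-1 and y = P^-1 x, one may start from Y = 0 (a completely uninformative prior) and simply accumulate observations. A minimal sketch of the update (time propagation omitted):

    import numpy as np

    def info_update(Y, y, H, R, z):
        """Information-form measurement update: information simply adds in."""
        Rinv = np.linalg.inv(R)
        Y = Y + H.T @ Rinv @ H     # information matrix accumulates
        y = y + H.T @ Rinv @ z     # information vector accumulates
        return Y, y

    n = 2
    Y, y = np.zeros((n, n)), np.zeros(n)   # no prior distribution at all
    R = np.array([[0.25]])
    obs = [(np.array([[1.0, 0.0]]), np.array([3.0])),
           (np.array([[0.0, 1.0]]), np.array([-1.0]))]

    for H, z in obs:
        Y, y = info_update(Y, y, H, R, z)

    x = np.linalg.solve(Y, y)   # state recoverable once Y becomes invertible
    print(np.round(x, 2))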
NASA Astrophysics Data System (ADS)
Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.
2014-09-01
The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty as they do not account for spatial variations in emissions. Therefore, biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide a spatially disaggregated assessment of annual emissions. Prior to use, an assessment of the ability of the model to predict annual emissions should be undertaken, coupled with an analysis of how model inputs influence model outputs and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites, and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that modelled N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial values. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.
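For context, the IPCC Tier 1 direct soil emission estimate against which the model is compared reduces to a fixed emission factor applied to nitrogen inputs. The sketch below uses the 2006 IPCC default EF1 of 1% and is illustrative, not the UK inventory implementation:

    # IPCC Tier 1 direct soil N2O (minimal sketch; default EF1 = 1% of applied N)
    def tier1_n2o(n_applied_kg):
        ef1 = 0.01                                 # kg N2O-N per kg N input
        return n_applied_kg * ef1 * 44.0 / 28.0    # convert N2O-N to N2O

    print(tier1_n2o(120.0), "kg N2O per ha")       # e.g., 120 kg N/ha fertilizer

Because the factor is spatially uniform, Tier 1 cannot reflect the soil pH and clay effects to which the process model is shown to be sensitive.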
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Chaopeng; Fang, Kuai; Ludwig, Noel
The DOE and BLM identified 285,000 acres of desert land in the Chuckwalla valley in the western U.S. for solar energy development. In addition to several approved solar projects, a pumped storage project was recently proposed to pump nearly 8,000 acre-ft/yr of groundwater to store and stabilize solar energy output. This study aims at providing estimates of the amount of naturally occurring recharge and of the impact of the pumping on the water table. To better constrain the locations and intensity of natural recharge, this study employs an integrated, physically-based hydrologic model, PAWS+CLM, to calculate recharge. Then, the simulated recharge is used in a parameter estimation package to calibrate the spatially-distributed K field. This design incorporates all available observational data, including soil moisture monitoring stations, groundwater head, and estimates of groundwater conductivity, to constrain the modeling. To address the uncertainty of the soil parameters, an ensemble of simulations was conducted, and the resulting recharges were either rejected or accepted based on calibrated groundwater head and local variation of the K field. The results indicate that the natural total inflow to the study domain is between 7,107 and 12,772 afy. During the initial-fill phase of the pumped storage project, the total outflow exceeds the upper bound estimate of the inflow. If the initial fill is annualized to 20 years, the average pumping is more than the lower bound of the inflows. The results indicate that after adding the pumped storage project, the system will be nearing, if not exceeding, its maximum renewable pumping capacity. The accepted recharges lead to a drawdown range of 24 to 45 ft for an assumed specific yield of 0.05. However, the drawdown is sensitive to this parameter, whereas there is insufficient data to adequately constrain it.
Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A
2016-05-01
In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year gained. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections helped offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost effective.
van Stralen, Marijn; Bosch, Johan G; Voormolen, Marco M; van Burken, Gerard; Krenning, Boudewijn J; van Geuns, Robert-Jan M; Lancée, Charles T; de Jong, Nico; Reiber, Johan H C
2005-10-01
We propose a semiautomatic endocardial border detection method for three-dimensional (3D) time series of cardiac ultrasound (US) data based on pattern matching and dynamic programming, operating on two-dimensional (2D) slices of the 3D plus time data, for the estimation of full-cycle left ventricular volume with minimal user interaction. The presented method is generally applicable to 3D US data and is evaluated on data acquired with the Fast Rotating Ultrasound (FRU-) Transducer, developed by Erasmus Medical Center (Rotterdam, the Netherlands), a conventional phased-array transducer rotating at very high speed around its image axis. The detection is based on endocardial edge pattern matching using dynamic programming, which is constrained by a 3D plus time shape model. It is applied to an automatically selected subset of 2D images of the original data set, for typically 10 equidistant rotation angles and 16 cardiac phases (160 images). Initialization requires the manual drawing of four contours per patient. We evaluated this method on 14 patients against MRI end-diastolic (ED) and end-systolic (ES) volumes. The semiautomatic border detection approach shows good correlations with MRI ED/ES volumes (r = 0.938) and low interobserver variability (y = 1.005x - 16.7, r = 0.943) over full-cycle volume estimations. It shows a high consistency in tracking the user-defined initial borders over space and time. We show that the ease of the acquisition using the FRU-transducer and the semiautomatic endocardial border detection method together provide a way to quickly estimate the left ventricular volume over the full cardiac cycle using little user interaction.
Homogeneous buoyancy-generated turbulence
NASA Technical Reports Server (NTRS)
Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.
1992-01-01
Using a theoretical analysis of fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of an infinite fluid at an initial instant is examined. It is shown that the analytical results together with the numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. The numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Harter, T.
2015-12-01
Accurate estimation of groundwater (GW) budgets and effective management of agricultural GW pumping remains a challenge in much of California's Central Valley (CV) due to a lack of irrigation well metering. CVHM and C2VSim are two regional-scale integrated hydrologic models that provide estimates of historical and current CV distributed pumping rates. However, both models estimate GW pumping using conceptually different agricultural water models with uncertainties that have not been adequately investigated. Here, we evaluate differences in distributed agricultural GW pumping and recharge estimates related to important differences in the conceptual framework and model assumptions used to simulate surface water (SW) and GW interaction across the root zone. Differences in the magnitude and timing of GW pumping and recharge were evaluated for a subregion (~1000 mi2) coincident with Yolo County, CA, to provide similar initial and boundary conditions for both models. Synthetic, multi-year datasets of land-use, precipitation, evapotranspiration (ET), and SW deliveries were prescribed for each model to provide realistic end-member scenarios for GW-pumping demand and recharge. Results show differences in the magnitude and timing of GW-pumping demand, deep percolation, and recharge. Discrepancies are related, in large part, to model differences in the estimation of ET requirements and representation of soil-moisture conditions. CVHM partitions ET demand, while C2VSim uses a bulk ET rate, resulting in differences in both crop-water and GW-pumping demand. Additionally, CVHM assumes steady-state soil-moisture conditions, and simulates deep percolation as a function of irrigation inefficiencies, while C2VSim simulates deep percolation as a function of transient soil-moisture storage conditions. These findings show that estimates of GW-pumping demand are sensitive to these important conceptual differences, which can impact conjunctive-use water management decisions in the CV.
NASA Astrophysics Data System (ADS)
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions, may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse, and large regions of the seafloor remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, an "inexperience" measure was estimated: effectively the distance in parameter space to the single nearest neighbor, it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data-deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
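A minimal sketch of this kind of KNN interpolation with scikit-learn, assuming hypothetical predictors (bathymetry and distance from coast) and synthetic training data; the study's own predictor-selection step and data sets are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training data: rows are seafloor sites with known TOC (wt %),
# columns are predictors such as bathymetry (m) and distance from coast (km).
X_train = rng.uniform([-6000.0, 0.0], [0.0, 500.0], size=(500, 2))
y_train = 1.0 + 0.3 * np.exp(X_train[:, 0] / 3000.0) - 0.001 * X_train[:, 1]

knn = KNeighborsRegressor(n_neighbors=5, weights="distance")

# Ten-fold cross-validation, analogous to the validation used in the study.
scores = cross_val_score(knn, X_train, y_train, cv=10, scoring="r2")
print("10-fold CV R^2: %.3f +/- %.3f" % (scores.mean(), scores.std()))

knn.fit(X_train, y_train)

# Predict TOC at unsampled locations; "inexperience" can be proxied by the
# distance to the single nearest training sample in predictor space.
X_new = rng.uniform([-6000.0, 0.0], [0.0, 500.0], size=(10, 2))
toc_pred = knn.predict(X_new)
dist_1nn, _ = knn.kneighbors(X_new, n_neighbors=1)
print(toc_pred, dist_1nn.ravel())
```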
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Department of Energy (DOE) has contracted with Asea Brown Boveri-Combustion Engineering (ABB-CE) to provide information on the capability of ABB-CE's System 80+ Advanced Light Water Reactor (ALWR) to transform, through reactor burnup, 100 metric tonnes (MT) of weapons-grade plutonium (Pu) into a form which is not readily usable in weapons. This information is being developed as part of DOE's Plutonium Disposition Study, initiated by DOE in response to Congressional action. This document, Volume 2, provides a discussion of: Plutonium Fuel Cycle; Technology Needs; Regulatory Considerations; Cost and Schedule Estimates; and Deployment Strategy.
Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study
NASA Astrophysics Data System (ADS)
Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom
2018-02-01
This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.
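For illustration, a toy predict/update cycle of the kind an EKF navigation filter runs, reduced to one dimension with a linear measurement model (for which the EKF coincides with the ordinary Kalman filter); all values are illustrative, not mission parameters.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.04]])                  # measurement noise covariance

x = np.array([0.0, 0.0])                # initial state estimate (pos, vel)
P = np.diag([1.0, 1.0])                 # initial covariance (deliberate error)

for z in [1.05, 2.02, 2.95, 4.10]:      # simulated range-like measurements
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # update state
    P = (np.eye(2) - K @ H) @ P         # update covariance

print(x)   # position/velocity errors shrink as measurements accumulate
```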
Graphical user interface for yield and dose estimations for cyclotron-produced technetium
NASA Astrophysics Data System (ADS)
Hou, X.; Vuckovic, M.; Buckley, K.; Bénard, F.; Schaffer, P.; Ruth, T.; Celler, A.
2014-07-01
The cyclotron-based 100Mo(p,2n)99mTc reaction has been proposed as an alternative method for solving the shortage of 99mTc. With this production method, however, even if highly enriched molybdenum is used, various radioactive and stable isotopes will be produced simultaneously with 99mTc. In order to optimize reaction parameters and estimate potential patient doses from radiotracers labeled with cyclotron produced 99mTc, the yields for all reaction products must be estimated. Such calculations, however, are extremely complex and time consuming. Therefore, the objective of this study was to design a graphical user interface (GUI) that would automate these calculations, facilitate analysis of the experimental data, and predict dosimetry. The resulting GUI, named Cyclotron production Yields and Dosimetry (CYD), is based on Matlab®. It has three parts providing (a) reaction yield calculations, (b) predictions of gamma emissions and (c) dosimetry estimations. The paper presents the outline of the GUI, lists the parameters that must be provided by the user, discusses the details of calculations and provides examples of the results. Our initial experience shows that the proposed GUI allows the user to very efficiently calculate the yields of reaction products and analyze gamma spectroscopy data. However, it is expected that the main advantage of this GUI will be at the later clinical stage when entering reaction parameters will allow the user to predict production yields and estimate radiation doses to patients for each particular cyclotron run.
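As a rough illustration of the yield computation such a GUI presumably automates, the sketch below numerically evaluates a thick-target yield integral; the cross-section and stopping-power curves are placeholders, not evaluated nuclear data, and the real tool handles full reaction chains and decay.

```python
# Reactions per incident proton: n * integral of sigma(E)/S(E) dE, where n is
# the target atom density and S(E) = -dE/dx is the stopping power. All curves
# and parameters below are hypothetical stand-ins.
import numpy as np
from scipy.integrate import simpson

E = np.linspace(8.0, 24.0, 200)                           # proton energy (MeV)
sigma_cm2 = 0.15e-24 * np.exp(-((E - 15.0) / 4.0) ** 2)   # hypothetical sigma
S_MeV_cm = 40.0 * (E / 10.0) ** -0.8                      # hypothetical dE/dx

n_atoms = 6.022e23 * 10.2 / 100.0   # Mo-100 atoms per cm^3 (N_A * rho / M)
reactions_per_proton = n_atoms * simpson(sigma_cm2 / S_MeV_cm, x=E)

current_A = 100e-6                   # beam current (A)
protons_per_s = current_A / 1.602e-19
print("production rate: %.3e nuclei/s" % (reactions_per_proton * protons_per_s))
```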
Graphical user interface for yield and dose estimations for cyclotron-produced technetium.
Hou, X; Vuckovic, M; Buckley, K; Bénard, F; Schaffer, P; Ruth, T; Celler, A
2014-07-07
The cyclotron-based 100Mo(p,2n)99mTc reaction has been proposed as an alternative method for solving the shortage of 99mTc. With this production method, however, even if highly enriched molybdenum is used, various radioactive and stable isotopes will be produced simultaneously with 99mTc. In order to optimize reaction parameters and estimate potential patient doses from radiotracers labeled with cyclotron-produced 99mTc, the yields for all reaction products must be estimated. Such calculations, however, are extremely complex and time consuming. Therefore, the objective of this study was to design a graphical user interface (GUI) that would automate these calculations, facilitate analysis of the experimental data, and predict dosimetry. The resulting GUI, named Cyclotron production Yields and Dosimetry (CYD), is based on Matlab®. It has three parts providing (a) reaction yield calculations, (b) predictions of gamma emissions and (c) dosimetry estimations. The paper presents the outline of the GUI, lists the parameters that must be provided by the user, discusses the details of calculations and provides examples of the results. Our initial experience shows that the proposed GUI allows the user to very efficiently calculate the yields of reaction products and analyze gamma spectroscopy data. However, it is expected that the main advantage of this GUI will be at the later clinical stage when entering reaction parameters will allow the user to predict production yields and estimate radiation doses to patients for each particular cyclotron run.
The Shannon entropy as a measure of diffusion in multidimensional dynamical systems
NASA Astrophysics Data System (ADS)
Giordano, C. M.; Cincotta, P. M.
2018-05-01
In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related to the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The very first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian yields very successful and encouraging results.
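A conceptual sketch of such entropy-based estimators, assuming a toy random-walk map in place of the symplectic systems studied in the paper: bin an ensemble of orbits, compute the Shannon entropy S(t) of the occupancy distribution, and take its late-time growth rate as the diffusion indicator S'.

```python
import numpy as np

rng = np.random.default_rng(1)
n_orbits, n_steps, n_bins = 1000, 2000, 100

x = np.full(n_orbits, 0.5)          # small initial ensemble at x = 0.5
S = np.empty(n_steps)

for t in range(n_steps):
    x = (x + 0.002 * rng.standard_normal(n_orbits)) % 1.0  # toy diffusive map
    counts, _ = np.histogram(x, bins=n_bins, range=(0.0, 1.0))
    p = counts[counts > 0] / n_orbits
    S[t] = -np.sum(p * np.log(p))   # Shannon entropy of the occupancy

# S' (time derivative of the entropy) estimates the diffusion rate; here it
# is approximated by a finite difference averaged over a late-time window.
S_prime = np.gradient(S)[n_steps // 2 :].mean()
print("late-time entropy growth rate:", S_prime)
```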
Monitoring vegetation conditions from LANDSAT for use in range management
NASA Technical Reports Server (NTRS)
Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.
1975-01-01
A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.
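Although the abstract does not name the data reduction explicitly, this group's approach is the kind of band-ratio vegetation index later known as NDVI; a minimal sketch under that assumption, with hypothetical band values.

```python
import numpy as np

red = np.array([0.08, 0.12, 0.20])   # red-band reflectance for three pixels
nir = np.array([0.45, 0.30, 0.22])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red)     # higher values indicate more green biomass
print(ndvi)
```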
Effect size calculation in meta-analyses of psychotherapy outcome research.
Hoyt, William T; Del Re, A C
2018-05-01
Meta-analysis of psychotherapy intervention research normally examines differences between treatment groups and some form of comparison group (e.g., wait list control; alternative treatment group). The effect of treatment is normally quantified as a standardized mean difference (SMD). We describe procedures for computing unbiased estimates of the population SMD from sample data (e.g., group Ms and SDs), and provide guidance about a number of complications that may arise related to effect size computation. These complications include (a) incomplete data in research reports; (b) use of baseline data in computing SMDs and estimating the population standard deviation (σ); (c) combining effect size data from studies using different research designs; and (d) appropriate techniques for analysis of data from studies providing multiple estimates of the effect of interest (i.e., dependent effect sizes). Clinical or Methodological Significance of this article: Meta-analysis is a set of techniques for producing valid summaries of existing research. The initial computational step for meta-analyses of research on intervention outcomes involves computing an effect size quantifying the change attributable to the intervention. We discuss common issues in the computation of effect sizes and provide recommended procedures to address them.
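A minimal sketch of computing an unbiased SMD (Hedges' g) from group summary statistics, using the standard small-sample correction; the numbers are hypothetical.

```python
import math

def hedges_g(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Unbiased standardized mean difference from summary statistics."""
    df = n_t + n_c - 2
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (m_t - m_c) / sd_pooled            # Cohen's d (biased upward)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)       # Hedges' small-sample correction
    return j * d

# Hypothetical treatment vs. wait-list control on a symptom scale.
print(hedges_g(m_t=24.0, m_c=30.0, sd_t=8.0, sd_c=9.0, n_t=40, n_c=38))
```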
Automatic portion estimation and visual refinement in mobile dietary assessment
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2011-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198
Automatic portion estimation and visual refinement in mobile dietary assessment
NASA Astrophysics Data System (ADS)
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2010-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.
Stuckey, Marla H.
2008-01-01
The Water Resources Planning Act, Act 220 of 2002, requires the Pennsylvania Department of Environmental Protection (PaDEP) to update the State Water Plan by 2008. As part of this update, a water-analysis screening tool (WAST) was developed by the U.S. Geological Survey, in cooperation with the PaDEP, to provide assistance to the state in the identification of critical water-planning areas. The WAST has two primary inputs: net withdrawals and the initial screening criteria. A comprehensive water-use database that includes data from registration, estimation, discharge monitoring reports, mining data, and other sources was developed as input into the WAST. Water use in the following categories was estimated using water-use factors: residential, industrial, commercial, agriculture, and golf courses. A percentage of the 7-day, 10-year low flow is used for the initial screenings using the WAST to identify potential critical water-planning areas. This quantity, or initial screening criteria, is 50 percent of the 7-day, 10-year low flow for most streams. Using a basic water-balance equation, a screening indicator is calculated that indicates the potential influences of net withdrawals on aquatic-resource uses for watersheds generally larger than 15 square miles. Points representing outlets of these watersheds are color-coded within the WAST to show the screening criteria for each watershed.
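A minimal sketch, with hypothetical names and values, of the water-balance screening described above: net withdrawals compared against the initial screening criteria (50 percent of the 7-day, 10-year low flow for most streams).

```python
def screening_indicator(net_withdrawal, q7_10, criteria_fraction=0.5):
    """Ratio of net withdrawals to the screening threshold; values near or
    above 1 flag a potential critical water-planning area."""
    return net_withdrawal / (criteria_fraction * q7_10)

print(screening_indicator(net_withdrawal=3.2, q7_10=10.0))  # 0.64, below threshold
```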
Astrom, Raven L; Wadsworth, Sally J; DeFries, John C
2007-06-01
Results obtained from previous longitudinal studies of reading difficulties indicate that reading deficits are generally stable. However, little is known about the etiology of this stability. Thus, the primary objective of this first longitudinal twin study of reading difficulties is to provide an initial assessment of genetic and environmental influences on the stability of reading deficits. Data were analyzed from a sample of 56 twin pairs, 18 identical (monozygotic, MZ) and 38 fraternal (dizygotic, DZ), in which at least one member of each pair was classified as reading-disabled in the Colorado Learning Disabilities Research Center, and on whom follow-up data were available. The twins were tested at two time points (average age of 10.3 years at initial assessment and 16.1 years at follow-up). A composite measure of reading performance (PIAT Reading Recognition, Reading Comprehension and Spelling) was highly stable, with a stability correlation of .84. Data from the initial time point were first subjected to univariate DeFries-Fulker multiple regression analysis and the resulting estimate of the heritability of the group deficit (h2g) was .84 (+/-.26). When the initial and follow-up data were then fitted to a bivariate extension of the basic DF model, bivariate heritability was estimated at .65, indicating that common genetic influences account for approximately 75% of the stability between reading measures at the two time points.
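A schematic of the basic DeFries-Fulker (DF) regression on simulated data: cotwin scores regressed on proband scores and the coefficient of relationship R (1.0 for MZ, 0.5 for DZ). Under the customary DF score transformation, which is not reproduced here, the partial regression coefficient on R estimates the group heritability h2g; the data below are simulated for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
R = np.repeat([1.0, 0.5], n // 2)               # zygosity coding (MZ, DZ)
proband = -2.0 + 0.5 * rng.standard_normal(n)   # selected (deficit) probands
cotwin = 0.5 * R * proband + 0.2 * proband + 0.4 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([proband, R]))
fit = sm.OLS(cotwin, X).fit()
print(fit.params)  # constant, proband coefficient, and the R coefficient
```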
Colloid-Facilitated Transport of 137Cs in Fracture-Fill Material. Experiments and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dittrich, Timothy M.; Reimus, Paul William
2015-10-29
In this study, we demonstrate how a combination of batch sorption/desorption experiments and column transport experiments were used to effectively parameterize a model describing the colloid-facilitated transport of Cs in the Grimsel granodiorite/FFM system. Cs partition coefficient estimates onto both the colloids and the stationary media obtained from the batch experiments were used as initial estimates of partition coefficients in the column experiments, and then the column experiment results were used to obtain refined estimates of the number of different sorption sites and the adsorption and desorption rate constants of the sites. The desorption portion of the column breakthrough curves highlighted the importance of accounting for adsorption-desorption hysteresis (or a very nonlinear adsorption isotherm) of the Cs on the FFM in the model, and this portion of the breakthrough curves also dictated that there be at least two different types of sorption sites on the FFM. In the end, the two-site model parameters estimated from the column experiments provided excellent matches to the batch adsorption/desorption data, which provided a measure of assurance in the validity of the model.
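A minimal sketch, with hypothetical rate constants, of a two-site kinetic sorption model of the kind the column data called for: aqueous Cs exchanges with a fast site and a strongly hysteretic slow site (desorption much slower than adsorption), which qualitatively reproduces a long desorption tail.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1f, k1r = 0.50, 0.010    # fast site: adsorption/desorption rates (1/h)
k2f, k2r = 0.05, 0.0005   # slow site: nearly irreversible on this time scale

def rhs(t, y):
    c, s1, s2 = y                 # aqueous, site-1 sorbed, site-2 sorbed Cs
    r1 = k1f * c - k1r * s1
    r2 = k2f * c - k2r * s2
    return [-(r1 + r2), r1, r2]

sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])   # slow-site Cs persists, mimicking desorption hysteresis
```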
Liu, Ying; Geng, Kun; Chu, Yanhao; Xu, Mindi; Zha, Lagabaiyila
2018-03-03
The purpose of this study is to provide forensic reference data for estimating chronological age by evaluating third molar mineralization in the Han population of central southern China. The mineralization degree of third molars was assessed by Demirjian's classification, with modification, for 2519 digital orthopantomograms (1190 males, 1329 females; age 8-23 years). The mean ages of initial mineralization and crown completion of third molars were around 9.66 and 13.88 years in males and 9.52 and 14.09 years in females. The minimum ages of apical closure were around 16 years in both sexes. Tooth 28 at stages C and G, and teeth 38 and 48 at stage F, occurred earlier in males than in females. There was no significant difference between maxillary and mandibular teeth in males and females, except for stage C in males. Two formulas were devised to estimate age based on mineralization stages and sex. In Hunan Province, a person will probably be over age 14 when a third molar reaches stage G. The results of the study could provide reference for age estimation in forensic cases and clinical dentistry.
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may exhibit different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to extract endmembers for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps using these endmembers. Then, each endmember estimate is updated as the mean value of its "purified" pixels, i.e., the residuals of the mixed pixels after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework generates the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than a "separate unmixing" approach.
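A sketch of the NNLS abundance step with hypothetical endmember spectra; the VCA initialization and the iterative endmember update are not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

E = np.array([[0.10, 0.60],      # band 1 reflectance of endmembers 1 and 2
              [0.20, 0.50],
              [0.40, 0.30],
              [0.55, 0.10]])     # 4 bands, 2 endmembers (hypothetical)

pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1] + 0.01  # mixed pixel with slight noise
abund, resid = nnls(E, pixel)                  # nonnegative abundance estimate
print(abund)   # close to [0.7, 0.3]

# The endmember update step would then average the "purified" pixels: each
# pixel minus the contributions of all nondominant endmembers.
```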
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Mathews, Melissa; Abner, Erin; Caban-Holt, Allison; Dennis, Brandon C; Kryscio, Richard; Schmitt, Frederick
2013-09-01
Memory evaluation is a key component in the accurate diagnosis of cognitive disorders. One memory procedure that has shown promise in discriminating disease-related cognitive decline from normal cognitive aging is the New York University Paragraph Recall Test; however, the effects of education have been unexamined as they pertain to one's literacy level. The current study provides normative data stratified by estimated quality of education as indexed by irregular word reading skill. Conventional norms were derived from a sample (N = 385) of cognitively intact elderly men who were initially recruited for participation in the PREADViSE clinical trial. A series of multiple linear regression models were constructed to assess the influence of demographic variables on mean NYU Paragraph Immediate and Delayed Recall scores. Test version, assessment site, and estimated quality of education were significant predictors of performance on the NYU Paragraph Recall Test. Findings indicate that estimated quality of education is a better predictor of memory performance than ethnicity and years of total education. Normative data stratified according to estimated quality of education are presented. The current study provides evidence and support for normative data stratified by quality of education as opposed to years of education.
NASA Astrophysics Data System (ADS)
Gharaibeh, Mamoun; Albalasmeh, Ammar
2017-04-01
Stone walls have long been used to control water erosion in many Mediterranean countries. In soil erosion equations, however, the support practice factor (P-factor) for stone walls has rarely been studied or taken into account, especially in semi-arid and arid regions. Field studies were conducted to evaluate the efficiency of traditional stone walls and to quantify soil erosion at six sites in north and northeastern Jordan. Initial estimates using the Universal Soil Loss Equation (USLE) showed that rainfall erosion was reduced by 65% in areas where stone walls are present. Annual soil loss ranged from 5 to 15 t yr-1. The mean annual soil loss in the absence of stone walls ranged from 10-60 t ha-1 with an average value of 35 t ha-1. Interpolating the slope of the A-horizon thickness provided an average initial estimate of 0.3 for the P-factor.
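A sketch of the USLE bookkeeping implied above, A = R · K · LS · C · P; the factor values are illustrative, except that a P of roughly 0.3-0.35 corresponds to the reported ~65% erosion reduction for the stone-wall sites.

```python
def usle(R, K, LS, C, P):
    """Annual soil loss A (t/ha/yr) from the Universal Soil Loss Equation."""
    return R * K * LS * C * P

A_no_walls = usle(R=300.0, K=0.35, LS=1.2, C=0.28, P=1.0)    # no support practice
A_walls = usle(R=300.0, K=0.35, LS=1.2, C=0.28, P=0.35)      # stone walls
print(A_no_walls, A_walls, 1.0 - A_walls / A_no_walls)       # ~65% reduction
```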
Sewage outfall plume dispersion observations with an autonomous underwater vehicle.
Ramos, P; Cunha, S R; Neves, M V; Pereira, F L; Quintaneiro, I
2005-01-01
This work represents one of the first successful applications of Autonomous Underwater Vehicles (AUVs) to interdisciplinary coastal research. A monitoring mission to study the shape and estimate the initial dilution of the S. Jacinto sewage outfall plume using an AUV was performed in July 2002. An efficient sampling strategy, which substantially improved the spatial and temporal range of detection, demonstrated that the sewage effluent plume can be clearly traced using naturally occurring tracers in the wastewater. The outfall plume was found at the surface, highly influenced by the weak stratification and low currents. Dilution was estimated as a function of distance downstream, from the plume rise over the outfall diffuser until it reached a nearly constant value of 130:1 at 60 m from the diffuser, marking the end of the near field. Our results demonstrate that AUVs can provide high-quality measurements of the physical properties of effluent plumes in a very effective manner, and that the initial mixing processes under real oceanic conditions can be further investigated with such data.
NASA Astrophysics Data System (ADS)
Or, D.; von Ruette, J.; Lehmann, P.
2017-12-01
Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates the soil depth distribution to supplement other soil information used in delineating key hydrological and mechanical properties relevant to representing local soil failure. We illustrate the use of this publicly available GUI and modeling platform by simulating the effects of deforestation on landslide hazards in several regions, and compare model outcomes with satellite-based information.
NASA Astrophysics Data System (ADS)
Schläpfer, Felix; Witzig, Pieter-Jan
2006-12-01
In 1997, about 140,000 citizens in 388 voting districts in the Swiss canton of Bern passed a ballot initiative to allocate about 3 million Swiss Francs annually to a canton-wide river restoration program. Using the municipal voting returns and a detailed georeferenced data set on the ecomorphological status of the rivers, we estimate models of voter support in relation to local river ecomorphology, population density, mean income, cultural background, and recent flood damage. Support of the initiative increased with increasing population density and tended to increase with increasing mean income, in spite of progressive taxation. Furthermore, we found evidence that public support increased with decreasing "naturalness" of local rivers. The model estimates may be cautiously used to predict the public acceptance of similar restoration programs in comparable regions. Moreover, the voting-based insights into the distribution of river restoration benefits provide a useful starting point for debates about appropriate financing schemes.
Millikan, Amy M; Weber, Natalya S; Niebuhr, David W; Torrey, E Fuller; Cowan, David N; Li, Yuanzhang; Kaminski, Brenda
2007-10-01
We are studying associations between selected biomarkers and schizophrenia or bipolar disorder among military personnel. To assess potential diagnostic misclassification and to estimate the date of illness onset, we reviewed medical records for a subset of cases. Two psychiatrists independently reviewed 182 service medical records retrieved from the Department of Veterans Affairs. Data were evaluated for diagnostic concordance between database diagnoses and reviewers. Interreviewer variability was measured by using proportion of agreement and the kappa statistic. Data were abstracted to estimate date of onset. High levels of agreement existed between database diagnoses and reviewers (proportion, 94.7%; kappa = 0.88) and between reviewers (proportion, 92.3%; kappa = 0.87). The median time between illness onset and initiation of medical discharge was 1.6 and 1.1 years for schizophrenia and bipolar disorder, respectively. High levels of agreement between investigators and database diagnoses indicate that diagnostic misclassification is unlikely. Discharge procedure initiation date provides a suitable surrogate for disease onset.
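For reference, a minimal computation of the two agreement statistics reported above (proportion of agreement and Cohen's kappa) from a hypothetical 2x2 cross-classification of two raters.

```python
import numpy as np

table = np.array([[85, 5],    # rows: reviewer A (case yes/no)
                  [4, 88]])   # cols: reviewer B (case yes/no)

n = table.sum()
p_o = np.trace(table) / n                           # observed agreement
p_e = (table.sum(1) * table.sum(0)).sum() / n**2    # chance-expected agreement
kappa = (p_o - p_e) / (1.0 - p_e)
print(p_o, kappa)
```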
Rocca, Corinne H.; Kohn, Julia E.; Goodman, Suzan; Stern, Lisa; Blum, Maya; Speidel, J. Joseph; Darney, Philip D.; Harper, Cynthia C.
2016-01-01
Objectives. We determined whether public funding for contraception was associated with long-acting reversible contraceptive (LARC) use when providers received training on these methods. Methods. We evaluated the impact of a clinic training intervention and public funding on LARC use in a cluster randomized trial at 40 randomly assigned clinics across the United States (2011–2013). Twenty intervention clinics received a 4-hour training. Women aged 18 to 25 were enrolled and followed for 1 year (n = 1500: 802 intervention, 698 control). We estimated the effects of the intervention and funding sources on LARC initiation with Cox proportional hazards models with shared frailty. Results. Women at intervention sites had higher LARC initiation than those at control (22 vs 18 per 100 person-years; adjusted hazard ratio [AHR] = 1.43; 95% confidence interval [CI] = 1.04, 1.98). Participants receiving care at clinics with Medicaid family planning expansion programs had almost twice the initiation rate as those at clinics without (25 vs 13 per 100 person-years; AHR = 2.26; 95% CI = 1.59, 3.19). LARC initiation also increased among participants with public (AHR = 1.56; 95% CI = 1.09, 2.22) but not private health insurance. Conclusions. Public funding and provider training substantially improve LARC access. PMID:26794168
In situ diffusion experiment in granite: Phase I
NASA Astrophysics Data System (ADS)
Vilks, P.; Cramer, J. J.; Jensen, M.; Miller, N. H.; Miller, H. G.; Stanchell, F. W.
2003-03-01
A program of in situ experiments, supported by laboratory studies, was initiated to study diffusion in sparsely fractured rock (SFR), with a goal of developing an understanding of diffusion processes within intact crystalline rock. Phase I of the in situ diffusion experiment was started in 1996, with the purpose of developing a methodology for estimating diffusion parameter values. Four in situ diffusion experiments, using a conservative iodide tracer, were performed in highly stressed SFR at a depth of 450 m in the Underground Research Laboratory (URL). The experiments, performed over a 2 year period, yielded rock permeability estimates of 2×10⁻²¹ m² and effective diffusion coefficients varying from 2.1×10⁻¹⁴ to 1.9×10⁻¹³ m²/s, which were estimated using the MOTIF code. The in situ diffusion profiles reveal a characteristic "dog leg" pattern, with iodide concentrations decreasing rapidly within a centimeter of the open borehole wall. It is hypothesized that this is an artifact of local stress redistribution and creation of a zone of increased constrictivity close to the borehole wall. A comparison of estimated in situ and laboratory diffusivities and permeabilities provides evidence that the physical properties of rock samples removed from high-stress regimes change. As a result of the lessons learnt during Phase I, a Phase II in situ program has been initiated to improve our general understanding of diffusion in SFR.
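A simplified reading of how such tracer profiles are often interpreted: the one-dimensional constant-boundary diffusion solution C/C0 = erfc(x / (2·sqrt(Da·t))) with Da = De/phi. The porosity and times below are assumed, and the study itself used the MOTIF code with fuller physics.

```python
import numpy as np
from scipy.special import erfc

De = 1.0e-13                    # effective diffusion coefficient (m^2/s)
phi = 0.003                     # assumed matrix porosity (-)
t = 2 * 365.25 * 86400.0        # 2 years in seconds
x = np.linspace(0.0, 0.05, 6)   # distance into the rock from the wall (m)

profile = erfc(x / (2.0 * np.sqrt((De / phi) * t)))
print(np.round(profile, 3))     # relative concentration vs. depth
```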
Estimating Antarctica land topography from GRACE gravity and ICESat altimetry data
NASA Astrophysics Data System (ADS)
Wu, I.; Chao, B. F.; Chen, Y.
2009-12-01
We propose a new method combining GRACE (Gravity Recovery and Climate Experiment) gravity and ICESat (Ice, Cloud, and land Elevation Satellite) altimetry data to estimate the land topography of Antarctica. Antarctica is the fifth-largest continent in the world and about 98% of it is covered by ice, where in-situ measurements are difficult. Some experimental airborne radar and ground-based radar data have revealed very limited land topography beneath the heavy ice sheet. To estimate the land topography for the full coverage of Antarctica, we combine GRACE data, which indicate the mass distribution, with ICESat laser altimetry data, which provide high-resolution mapping of the ice topography. Our approach rests on geological constraints: we assume uniform densities for the land and ice, and Airy-type isostasy. We first construct an initial model of the ice thickness and land topography based on the BEDMAP ice thickness and ICESat data. We then forward compute the model's gravity field and compare it with the GRACE observations, adjusting the initial model to improve the fit between modeled results and observed data. As a final check, we compare our results with previous, but sparse, observations of ice thickness to confirm their reliability. As the gravitational inversion problem is non-unique, our estimate is one of many possible solutions, constrained by the available data in an optimal way.
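A minimal sketch of the Airy-type constraint invoked above: with assumed uniform densities, topography of height h is compensated by a crustal root r satisfying rho_c · h = (rho_m - rho_c) · r. The densities below are nominal values, not those of the study.

```python
rho_c, rho_m = 2800.0, 3300.0   # crust and mantle densities (kg/m^3), nominal

def airy_root(h_rock):
    """Compensating root depth (m) beneath topography of height h_rock (m)."""
    return rho_c * h_rock / (rho_m - rho_c)

print(airy_root(1500.0))   # ~8400 m root under 1.5 km of rock topography
```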
NASA Technical Reports Server (NTRS)
McNider, Richard T.; Song, Aaron; Casey, Dan; Crosson, William; Wetzel, Peter
1993-01-01
The current NWS ground-based network is not sufficient to capture the dynamic or thermodynamic structure leading to the initiation and organization of air mass moist convective events. Under this investigation we intend to use boundary layer mesoscale models (McNider and Pielke, 1981) to examine the dynamic triggering of convection due to topography and surface thermal contrasts. VAS and MAN's estimates of moisture will be coupled with the dynamic solution to provide an estimate of the total convective potential. Visible GOES images will be used to specify incoming insolation, which may lead to surface thermal contrasts, and IR skin temperatures will be used to estimate surface moisture (via the surface thermal inertia) (Wetzel and Chang, 1988), which can also induce surface thermal contrasts. We will use the SPACE-COHMEX data base to evaluate the ability of the joint mesoscale model satellite products to show skill in predicting the development of air mass convection. We will develop images of model vertical velocity and satellite thermodynamic measures to derive images of predicted convective potential. We will then, after suitable geographic registration, carry out a pixel-by-pixel correlation between the model/satellite convective potential and the 'truth', i.e., the visible images. During the first half of the first year of this investigation we have concentrated on two aspects of the project. The first has been generating vertical velocity fields from the model for COHMEX case days. We have taken June 19 as the first case and have run the mesoscale model at several different grid resolutions. We are currently developing the composite model/satellite convective image. The second aspect has been the calibration of the surface energy budget to provide the proper horizontal thermal contrasts for convective initiation. We have made extensive progress on this aspect using the FIFE data as a test data set. The calibration technique looks very promising.
NASA Astrophysics Data System (ADS)
Richards, D. A.; Nita, D. C.; Moseley, G. E.; Hoffmann, D. L.; Standish, C. D.; Smart, P. L.; Edwards, R.
2013-12-01
In addition to the many U-Th dated speleothem records (δ18O, δ13C, trace elements) of past environmental change based on continuous phases of calcite growth, discontinuous records also provide important constraints on a wide range of past states of the Earth system, including sea levels, permafrost extent, regional aridity and local cave flooding. Chronological information about human activity or faunal evolution can also be obtained where calcite can be seen to overlie cave art or mammalian bones, for example. Among the important considerations when determining the U-Th age of calcite that nucleates on an exposed surface are (1) the initial 230Th/232Th, which can be elevated and variable in some settings, and (2) growth rate and sub-sample density, where extrapolation is required. By way of example, we present sea level data based on U-Th ages of vadose speleothems (i.e. formed above the water table, as distinct from 'phreatic' examples) from caves of the circum-Caribbean, where calcite growth was interrupted by rising sea levels and then reinitiated after regression. These estimates demand large corrections, and the derived sea level constraints are compared with alternative data from coral reef terraces, phreatic overgrowths on speleothems, or indirect proxy evidence from oxygen isotopes to constrain rates of ice volume growth. Flowstones from the Bahamas provide useful sea level constraints because they present the longest and most continuous records in such settings (a function of preservation potential in addition to hydrological routing) and also the earliest growth post-emergence after sea level fall. We revisit estimates for sea level regression at the end of MIS 5 at ~80 ka (Richards et al., 1994; Lundberg and Ford, 1994) and make corrections for non-Bulk Earth initial Th contamination (230Th/232Th activity ratio > 10), based on isochron analysis of alternative stalagmites from the same settings and recent high-resolution analysis. We also present new U-Th ages for contiguous layers sub-sampled from the first 2-3 mm of flowstone growth after the MIS 5 hiatus, using a sub-sample milling strategy that matches spatial resolution with maximum achievable precision (ThermoFinnigan Neptune MC-ICPMS methodology; 20-30 mg calcite, U ~300 ng/g, 2σ age uncertainty of ±600 a at ~80 ka). Isochron methods are used to estimate the range of initial 230Th/232Th ratios and are compared with elevated values obtained from stalagmites from the same cave (Beck et al., 2001; Hoffmann et al., 2010). A similar strategy is presented for a stalagmite with a much faster axial growth rate, and the data are combined with additional sea level information from the same region to estimate the rate and uncertainty of sea level regression at the MIS 5/4 boundary. Elevated initial 230Th/232Th values have also been observed in a stalagmite from 6 m below present sea level in a cenote in the Yucatan, Mexico, where five phases of calcite between 10 and 5.5 ka are separated by serpulid worm tubes formed during periods of submergence. The transition between each phase provides constraints on the age and elevation of relative sea level, but the former is hampered by the uncertainty of the large initial 230Th/232Th correction. We consider the possible sources of elevated Th ratios: hydrogenous, colloidal and carbonate or other detrital components.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the concentration at the initial mixing distance is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration ... The time to reach the peak concentration is estimated by the equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above ... The concentration at a downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q + ...), where Cp and Q are ...
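A mechanical reading of the excerpted relations as code; the symbol definitions and units are assumed to follow the regulation (the excerpt does not reproduce them in full), and all values below are placeholders.

```python
def peak_concentration(Wi, T, Q):
    """Cp at the initial mixing distance per the excerpt:
    Cp = 25 * Wi / (T**0.7 * Q). Units as defined in the regulation."""
    return 25.0 * Wi / (T**0.7 * Q)

def time_to_peak(Wi, Q, Cp):
    """Tp (hours) per the excerpt: Tp = 9.25e6 * Wi / (Q * Cp)."""
    return 9.25e6 * Wi / (Q * Cp)

Cp = peak_concentration(Wi=1000.0, T=24.0, Q=500.0)
print(Cp, time_to_peak(1000.0, 500.0, Cp))
```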
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
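One way to see why the initial ICC guess matters, sketched under a standard cost-constrained design: the optimal number of subjects per cluster is n = sqrt(c(1 - rho)/(s·rho)), with c and s the cluster- and subject-level costs; the cost figures below are assumed.

```python
import math

def optimal_cluster_size(c, s, rho):
    """Optimal subjects per cluster given cluster cost c, subject cost s,
    and intracluster correlation rho (classical cost-constrained result)."""
    return math.sqrt(c * (1.0 - rho) / (s * rho))

for rho in (0.01, 0.05, 0.10):   # a misjudged initial ICC shifts the design a lot
    print(rho, round(optimal_cluster_size(c=500.0, s=10.0, rho=rho), 1))
```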
The human genetic history of the Americas: the final frontier.
O'Rourke, Dennis H; Raff, Jennifer A
2010-02-23
The Americas, the last continents to be entered by modern humans, were colonized during the late Pleistocene via a land bridge across what is now the Bering strait. However, the timing and nature of the initial colonization events remain contentious. The Asian origin of the earliest Americans has been amply established by numerous classical marker studies of the mid-twentieth century. More recently, mtDNA sequences, Y-chromosome and autosomal marker studies have provided a higher level of resolution in confirming the Asian origin of indigenous Americans and provided more precise time estimates for the emergence of Native Americans. But these data raise many additional questions regarding source populations, number and size of colonizing groups and the points of entry to the Americas. Rapidly accumulating molecular data from populations throughout the Americas, increased use of demographic models to test alternative colonization scenarios, and evaluation of the concordance of archaeological, paleoenvironmental and genetic data provide optimism for a fuller understanding of the initial colonization of the Americas. Copyright 2010 Elsevier Ltd. All rights reserved.
Larson, Bruce A; Rockers, Peter C; Bonawitz, Rachael; Sriruttan, Charlotte; Glencross, Deborah K; Cassim, Naseem; Coetzee, Lindi M; Greene, Gregory S; Chiller, Tom M; Vallabhaneni, Snigdha; Long, Lawrence; van Rensburg, Craig; Govender, Nelesh P
2016-01-01
In 2015 South Africa established a national cryptococcal antigenemia (CrAg) screening policy targeted at HIV-infected patients with CD4+ T-lymphocyte (CD4) counts <100 cells/ μl who are not yet on antiretroviral treatment (ART). Two screening strategies are included in national guidelines: reflex screening, where a CrAg test is performed on remnant blood samples from CD4 testing; and provider-initiated screening, where providers order a CrAg test after a patient returns for CD4 test results. The objective of this study was to compare costs and effectiveness of these two screening strategies. We developed a decision analytic model to compare reflex and provider-initiated screening in terms of programmatic and health outcomes (number screened, number identified for preemptive treatment, lives saved, and discounted years of life saved) and screening and treatment costs (2015 USD). We estimated a base case with prevalence and other parameters based on data collected during CrAg screening pilot projects integrated into routine HIV care in Gauteng, Free State, and Western Cape Provinces. We conducted sensitivity analyses to explore how results change with underlying parameter assumptions. In the base case, for each 100,000 CD4 tests, the reflex strategy compared to the provider-initiated strategy has higher screening costs ($37,536 higher) but lower treatment costs ($55,165 lower), so overall costs of screening and treatment are $17,629 less with the reflex strategy. The reflex strategy saves more lives (30 lives, 647 additional years of life saved). Sensitivity analyses suggest that reflex screening dominates provider-initiated screening (lower total costs and more lives saved) or saves additional lives for small additional costs (< $125 per life year) across a wide range of conditions (CrAg prevalence, patient and provider behavior, patient survival without treatment, and effectiveness of preemptive fluconazole treatment). In countries with substantial numbers of people with untreated, advanced HIV disease such as South Africa, CrAg screening before initiation of ART has the potential to reduce cryptococcal meningitis and save lives. Reflex screening compared to provider-initiated screening saves more lives and is likely to be cost saving or have low additional costs per additional year of life saved.
Rockers, Peter C.; Bonawitz, Rachael; Sriruttan, Charlotte; Glencross, Deborah K.; Cassim, Naseem; Coetzee, Lindi M.; Greene, Gregory S.; Chiller, Tom M.; Vallabhaneni, Snigdha; Long, Lawrence; van Rensburg, Craig; Govender, Nelesh P.
2016-01-01
Background In 2015 South Africa established a national cryptococcal antigenemia (CrAg) screening policy targeted at HIV-infected patients with CD4+ T-lymphocyte (CD4) counts <100 cells/ μl who are not yet on antiretroviral treatment (ART). Two screening strategies are included in national guidelines: reflex screening, where a CrAg test is performed on remnant blood samples from CD4 testing; and provider-initiated screening, where providers order a CrAg test after a patient returns for CD4 test results. The objective of this study was to compare costs and effectiveness of these two screening strategies. Methods We developed a decision analytic model to compare reflex and provider-initiated screening in terms of programmatic and health outcomes (number screened, number identified for preemptive treatment, lives saved, and discounted years of life saved) and screening and treatment costs (2015 USD). We estimated a base case with prevalence and other parameters based on data collected during CrAg screening pilot projects integrated into routine HIV care in Gauteng, Free State, and Western Cape Provinces. We conducted sensitivity analyses to explore how results change with underlying parameter assumptions. Results In the base case, for each 100,000 CD4 tests, the reflex strategy compared to the provider-initiated strategy has higher screening costs ($37,536 higher) but lower treatment costs ($55,165 lower), so overall costs of screening and treatment are $17,629 less with the reflex strategy. The reflex strategy saves more lives (30 lives, 647 additional years of life saved). Sensitivity analyses suggest that reflex screening dominates provider-initiated screening (lower total costs and more lives saved) or saves additional lives for small additional costs (< $125 per life year) across a wide range of conditions (CrAg prevalence, patient and provider behavior, patient survival without treatment, and effectiveness of preemptive fluconazole treatment). Conclusions In countries with substantial numbers of people with untreated, advanced HIV disease such as South Africa, CrAg screening before initiation of ART has the potential to reduce cryptococcal meningitis and save lives. Reflex screening compared to provider-initiated screening saves more lives and is likely to be cost saving or have low additional costs per additional year of life saved. PMID:27390864
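A back-of-envelope reconstruction of the base-case comparison using the figures quoted above (screening-cost and treatment-cost deltas per 100,000 CD4 tests; 647 additional life-years saved by reflex screening); the full decision-analytic model naturally contains far more structure.

```python
reflex_extra_screening = 37_536.0     # USD, reflex minus provider-initiated
reflex_treatment_savings = 55_165.0   # USD, treatment costs avoided
net_cost_delta = reflex_extra_screening - reflex_treatment_savings
extra_life_years = 647.0

print(net_cost_delta)                 # -17,629: reflex is cost saving overall
if net_cost_delta <= 0:
    print("reflex screening dominates (cheaper and more effective)")
else:
    print("ICER: %.0f USD per life-year" % (net_cost_delta / extra_life_years))
```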
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-05-01
The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced "best estimate" predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.
Employment Opportunities for Family Members in Germany.
1983-05-24
[Fragmented excerpts from the report: table-of-contents entries ("V. New Initiatives to Improve Family Member Employment Opportunities"; "VI. Observations and Comments") and Table 3-4, "FY83 General Schedule CONUS Hire Civilian Cost Factors," with notes that CONUS-hire fringe benefits include overseas-unique estimated costs and that local-national-hire percentages were drawn from a survey of ODCSRM FY81 vouchers.]
1995-02-01
[Fragmented excerpts from the report: front-matter listings (Descriptive Summary; FIP Resources and Indefinite Delivery-Quantity Contracts; Central Design Activity Summary) and recommendations that the interface with financial systems be integrated into the standard architecture of the Military Departments to ensure maximum cost-effectiveness, and that support be provided to maintain the DFAS enterprise local area network initiative establishing a standardized architecture for office automation.]
Combat Service Support (CSS) Enabler Functional Assessment (CEFA)
1998-07-01
This assessment examines Combat Service Support (CSS) enablers/initiatives (E/I), thereby providing the Commander (CDR), Combined Arms Support Command (CASCOM) with a tool to aid decision making related to mitigating E/I peacetime (programmatic) and wartime risks. Some enablers may not be fielded by Fiscal Year (FY) 10; based on these estimates, any decisions, especially reductions in manpower, which rely on the existence of the E/I ...
A model-based 3D template matching technique for pose acquisition of an uncooperative space object.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2015-03-16
This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is presented.
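A conceptual sketch of template matching over a point cloud: templates are the model rendered at candidate poses (here a coarse yaw-only grid, echoing the idea of building the template database on-line), scored by mean nearest-neighbor distance. The clouds are synthetic stand-ins for a LIDAR scan and model database.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
model = rng.uniform(-1.0, 1.0, size=(500, 3))   # synthetic target model points

def render(points, yaw):
    """Rotate the model about z to emulate a template at a candidate pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ Rz.T

# Simulated LIDAR scan of the target at an unknown yaw, with sensor noise.
scan = render(model, yaw=0.6) + 0.01 * rng.standard_normal((500, 3))

# Score each template by the mean nearest-neighbor distance to the scan;
# the lowest score gives the coarse pose estimate used to start tracking.
best = min(
    (cKDTree(render(model, yaw)).query(scan)[0].mean(), yaw)
    for yaw in np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
)
print("estimated yaw: %.2f rad (score %.4f)" % (best[1], best[0]))
```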
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
How social influence can undermine the wisdom of crowd effect.
Lorenz, Jan; Rauhut, Heiko; Schweitzer, Frank; Helbing, Dirk
2011-05-31
Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects' convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others' responses was provided. Although groups are initially "wise," knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The "social influence effect" diminishes the diversity of the crowd without improvements of its collective error. The "range reduction effect" moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The "confidence effect" boosts individuals' confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.
A Unified Estimation Framework for State-Related Changes in Effective Brain Connectivity.
Samdin, S Balqis; Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain
2017-04-01
This paper addresses the critical problem of estimating time-evolving effective brain connectivity. Current approaches based on sliding window analysis or time-varying coefficient models do not simultaneously capture both slow and abrupt changes in causal interactions between different brain regions. To overcome these limitations, we develop a unified framework based on a switching vector autoregressive (SVAR) model. Here, the dynamic connectivity regimes are uniquely characterized by distinct vector autoregressive (VAR) processes and allowed to switch between quasi-stationary brain states. The state evolution and the associated directed dependencies are defined by a Markov process and the SVAR parameters. We develop a three-stage estimation algorithm for the SVAR model: 1) feature extraction using time-varying VAR (TV-VAR) coefficients, 2) preliminary regime identification via clustering of the TV-VAR coefficients, 3) refined regime segmentation by Kalman smoothing and parameter estimation via expectation-maximization algorithm under a state-space formulation, using initial estimates from the previous two stages. The proposed framework is adaptive to state-related changes and gives reliable estimates of effective connectivity. Simulation results show that our method provides accurate regime change-point detection and connectivity estimates. In real applications to brain signals, the approach was able to capture directed connectivity state changes in functional magnetic resonance imaging data linked with changes in stimulus conditions, and in epileptic electroencephalograms, differentiating ictal from nonictal periods. The proposed framework accurately identifies state-dependent changes in brain network and provides estimates of connectivity strength and directionality. The proposed approach is useful in neuroscience studies that investigate the dynamics of underlying brain states.
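The first two stages of this pipeline are compact enough to sketch in code. The following is a minimal illustration (not the authors' implementation) of stage 1 and stage 2, using sliding-window VAR(1) fits and k-means clustering; the window length, model order, and number of states are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

def tv_var_features(y, window=100, step=25):
    """Stage 1 (simplified): sliding-window VAR(1) fits, y_{t+1} ~ A y_t.
    y: (T, d) multichannel signal; returns one flattened A per window."""
    feats = []
    for s in range(0, len(y) - window, step):
        seg = y[s:s + window]
        X, Y = seg[:-1], seg[1:]
        A, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least squares: Y ~= X @ A
        feats.append(A.T.ravel())
    return np.array(feats)

def preliminary_regimes(y, n_states=3, **kw):
    """Stage 2: cluster the TV-VAR coefficients into candidate brain states;
    stage 3 (Kalman smoothing plus EM in a state space) would refine these."""
    return KMeans(n_clusters=n_states, n_init=10).fit_predict(tv_var_features(y, **kw))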
Shared sensory estimates for human motion perception and pursuit eye movements.
Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C
2015-06-03
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steen, M.; Lisell, L.; Mosey, G.
2013-01-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Vincent Mullins Landfill in Tucson, Arizona, for a feasibility study of renewable energy production. Under the RE-Powering America's Land initiative, the EPA provided funding to the National Renewable Energy Laboratory (NREL) to support the study. NREL provided technical assistance for this project but did not assess environmental conditions at the site beyond those related to the performance of a photovoltaic (PV) system. The purpose of this report is to assess the site for a possible PV installation and estimate the cost and performance of different PV configurations, as well as to recommend financing options that could assist in the implementation of a PV system. In addition to the Vincent Mullins site, four similar landfills in Tucson are included as part of this study.
Isayev, Olexandr; Gorb, Leonid; Qasim, Mo; Leszczynski, Jerzy
2008-09-04
CL-20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane or HNIW) is a high-energy nitramine explosive. To improve atomistic understanding of the thermal decomposition of CL-20 gas and solid phases, we performed a series of ab initio molecular dynamics simulations. We found that during unimolecular decomposition, unlike other nitramines (e.g., RDX, HMX), CL-20 has only one distinct initial reaction channel: homolysis of the N-NO2 bond. We did not observe any HONO elimination reaction during unimolecular decomposition, whereas the ring-breaking reaction was followed by NO2 fission. Therefore, in spite of limited sampling, which provides a mostly qualitative picture, we propose here a scheme of unimolecular decomposition of CL-20. The averaged product population over all trajectories was estimated at four HCN, two to four NO2, two to four NO, one CO, and one OH molecule per CL-20 molecule. Our simulations provide a detailed description of the chemical processes in the initial stages of thermal decomposition of condensed CL-20, allowing elucidation of key features of such processes as composition of primary reaction products, reaction timing, and Arrhenius behavior of the system. The primary reactions leading to NO2, NO, N2O, and N2 occur at very early stages. We also estimated potential activation barriers for the formation of NO2, which essentially determines overall decomposition kinetics and effective rate constants for NO2 and N2. The calculated solid-phase decomposition pathways correlate with available condensed-phase experimental data.
Estimating the costs of human space exploration
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1994-01-01
The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved with all of these themes and they are completely dependent upon the engineering cost estimator for success. The purpose is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.
NASA Astrophysics Data System (ADS)
Fan, Jishan; Li, Fucai; Nakamura, Gen
2018-06-01
In this paper we continue our study on the establishment of uniform estimates of strong solutions, with respect to the Mach number and the dielectric constant, to the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R^3. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates were obtained for large initial data in a short time interval. Here we show that the uniform estimates hold globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.
NASA Astrophysics Data System (ADS)
Daniell, James; Mühr, Bernhard; Kunz-Plapp, Tina; Brink, Susan A.; Kunz, Michael; Khazai, Bijan; Wenzel, Friedemann
2014-05-01
In the aftermath of a disaster, the extent of the socioeconomic loss (fatalities, homelessness and economic losses) is often not known, and it may take days before a reasonable estimate is available. Using socio-economic fragility functions (Daniell, 2014), developed by regressing socio-economic indicators through time against historical empirical loss vs. intensity data, a first estimate can be established. With more information from the region as the disaster unfolds, a more detailed estimate can be provided via a calibration of the initial loss estimate parameters. In 2013, two main disasters hit the Philippines: the Bohol earthquake in October and Typhoon Haiyan in November. Although both disasters were contrasting and hit different regions, the same generalised methodology was used for the initial rapid estimates and the subsequent updating of the disaster loss estimate through time. The CEDIM Forensic Disaster Analysis Group of KIT and GFZ produced six reports for Bohol and two reports for Haiyan detailing various aspects of the disasters, from losses and building damage to the socioeconomic profile, social networking and disaster response. This study focusses on the loss analysis, which used the following technique:
1. Historical earthquake and typhoon losses for the Philippines were regressed using the CATDAT Damaging Earthquakes Database and various Philippine databases, respectively.
2. The historical intensity impact of the examined events was placed in a GIS environment to allow correlation with the population and capital stock databases from 1900-2013 to create a loss function. The modified human development index from 1900-2013 was also used to calibrate events through time.
3. The earthquake intensity and wind speed intensity of the 2013 events, together with the 2013 capital stock and population, were used to calculate the number of fatalities (except in Haiyan), homeless and economic losses.
4. After the initial estimate, damage patterns were examined and the loss estimates calibrated.
The economic loss estimates of 9.5 billion USD capital stock and 4.1 billion USD GDP costs, and the estimate of 2.1 million long-term homeless, from the initial model for the Typhoon Haiyan event proved very accurate, with around the same values coming from reports about a month after the event. For the Bohol earthquake, the economic loss estimate was reasonable (around 100 million USD); however, the number of fatalities was slightly underestimated, given that the intensity field was underestimated and due to the number of landslide and other deaths (heart attacks etc.) on the first day. As damage estimates were reported post-disaster over the following days, the fatality function was calibrated and produced results closer to 200 deaths. Such parsimonious modelling in the aftermath of a disaster, together with socioeconomic profiling of the disaster area, can prove useful to relief agencies and governments as well as those on the ground, giving a first estimate of the extent of the damage; the models will continue to be developed in the course of FDA. Daniell J.E. (2014) The development of socio-economic fragility functions for use in worldwide rapid earthquake loss estimation procedures, Ph.D. Thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany.
Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis
Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel
2012-01-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
Curvature estimation for multilayer hinged structures with initial strains
NASA Astrophysics Data System (ADS)
Nikishkov, G. P.
2003-10-01
A closed-form estimate of curvature for hinged multilayer structures with initial strains is developed. The finite element method is used for modeling of self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate update. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.
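For the two-layer special case, the classical closed-form curvature of this type is Timoshenko's bimetal formula, reproduced here for orientation only (the paper's estimate addresses the more general multilayer hinged case and may differ in form). With layer thicknesses h_1, h_2, moduli E_1, E_2, ratios m = h_1/h_2 and n = E_1/E_2, total thickness h = h_1 + h_2, and initial mismatch strain \varepsilon_0:

\kappa = \frac{6\,\varepsilon_0\,(1+m)^2}{h\left[3(1+m)^2 + (1+mn)\left(m^2 + \dfrac{1}{mn}\right)\right]}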
Layered ejecta craters and the early water/ice aquifer on Mars
NASA Astrophysics Data System (ADS)
Oberbeck, V. R.
2009-03-01
A model for emplacement of deposits of impact craters is presented that explains the size range of Martian layered ejecta craters between 5 km and 60 km in diameter in the low and middle latitudes. The impact model provides estimates of the water content of crater deposits relative to volatile content in the aquifer of Mars. These estimates together with the amount of water required to initiate fluid flow in terrestrial debris flows provide an estimate of 21% by volume (7.6 × 10^7 km^3) of water/ice that was stored between 0.27 and 2.5 km depth in the crust of Mars during Hesperian and Amazonian time. This would have been sufficient to supply the water for an ocean in the northern lowlands of Mars. The existence of fluidized craters smaller than 5 km diameter in some places on Mars suggests that volatiles were present locally at depths less than 0.27 km. Deposits of Martian craters may be ideal sites for searches for fossils of early organisms that may have existed in the water table if life originated on Mars.
GD SDR Automatic Gain Control Characterization Testing
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) will provide experimenters an opportunity to develop and demonstrate experimental waveforms in space. The GD SDR platform and initial waveform were characterized on the ground before launch and the data will be compared to the data that will be collected during on-orbit operations. A desired function of the SDR is to estimate the received signal to noise ratio (SNR), which would enable experimenters to better determine on-orbit link conditions. The GD SDR does not have an SNR estimator, but it does have an analog and a digital automatic gain control (AGC). The AGCs can be used to estimate the SDR input power which can be converted into a SNR. Tests were conducted to characterize the AGC response to changes in SDR input power and temperature. The purpose of this paper is to describe the tests that were conducted, discuss the results showing how the AGCs relate to the SDR input power, and provide recommendations for AGC testing and characterization.
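The conversion chain described here (AGC reading to input power to SNR) can be sketched compactly. The calibration points, noise floor, and function names below are illustrative assumptions, not GD SDR values:

import numpy as np

# Hypothetical AGC characterization table: AGC word -> input power (dBm).
agc_words = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
input_dbm = np.array([-110.0, -100.0, -90.0, -80.0, -70.0])

def estimate_input_power_dbm(agc_word):
    """Look up SDR input power from an AGC reading via interpolation."""
    return np.interp(agc_word, agc_words, input_dbm)

def estimate_snr_db(agc_word, noise_floor_dbm=-105.0):
    """SNR estimate: input signal power minus an assumed noise floor."""
    return estimate_input_power_dbm(agc_word) - noise_floor_dbm

print(estimate_snr_db(35.0))  # -> 20.0 dB with these illustrative numbers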
Wright, David; Twigg, Michael; Thornley, Tracey
2015-02-01
This study aims to pilot a community pharmacy chronic obstructive pulmonary disease (COPD) case finding service in England, estimating costs and effects. Patients potentially at risk of COPD were screened with validated tools. Smoking cessation was offered to all smokers identified as potentially having undiagnosed COPD. Cost and effects of the service were estimated. Twenty-one community pharmacies screened 238 patients over 9 months. One hundred thirty-five patients were identified with potentially undiagnosed COPD; 88 were smokers. Smoking cessation initiation provided a project gain of 38.62 life years, 19.92 quality-adjusted life years and a cost saving of £392.67 per patient screened. COPD case finding by community pharmacists potentially provides cost-savings and improves quality of life. © 2014 The Authors. International Journal of Pharmacy Practice published by John Wiley & Sons Ltd on behalf of Royal Pharmaceutical Society.
NASA Astrophysics Data System (ADS)
Nora, R.; Field, J. E.; Peterson, J. Luc; Spears, B.; Kruse, M.; Humbird, K.; Gaffney, J.; Springer, P. T.; Brandon, S.; Langer, S.
2017-10-01
We present an experimentally corroborated hydrodynamic extrapolation of several recent BigFoot implosions on the National Ignition Facility. An estimate on the value and error of the hydrodynamic scale necessary for ignition (for each individual BigFoot implosion) is found by hydrodynamically scaling a distribution of multi-dimensional HYDRA simulations whose outputs correspond to their experimental observables. The 11-parameter database of simulations, which include arbitrary drive asymmetries, dopant fractions, hydrodynamic scaling parameters, and surface perturbations due to surrogate tent and fill-tube engineering features, was computed on the TRINITY supercomputer at Los Alamos National Laboratory. This simple extrapolation is the first step in providing a rigorous calibration of our workflow to provide an accurate estimate of the efficacy of achieving ignition on the National Ignition Facility. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Heli/SITAN: A Terrain Referenced Navigation algorithm for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollowell, J.
1990-01-01
Heli/SITAN is a Terrain Referenced Navigation (TRN) algorithm that utilizes radar altimeter ground clearance measurements in combination with a conventional navigation system and a stored digital terrain elevation map to accurately estimate a helicopter's position. Multiple Model Adaptive Estimation (MMAE) techniques are employed using a bank of single state Kalman filters to ensure that reliable position estimates are obtained even in the face of large initial position errors. A real-time implementation of the algorithm was tested aboard a US Army UH-1 helicopter equipped with a Singer-Kearfott Doppler Velocity Sensor (DVS) and a Litton LR-80 strapdown Attitude and Heading Reference System (AHRS). The median radial error of the position fixes provided in real-time by this implementation was less than 50 m for a variety of mission profiles. 6 refs., 7 figs.
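The MMAE idea (a bank of simple filters, each hypothesizing a different position error, reweighted by how well the stored terrain map explains the clearance measurements) can be sketched as follows. This is a toy reduction using a static hypothesis grid and a batch Gaussian likelihood, not the Heli/SITAN implementation:

import numpy as np

def mmae_position_fix(offsets, track_xy, baro_alt, clearance_meas,
                      terrain_map, sigma=10.0):
    """offsets: (N, 2) hypothesized (x, y) navigation errors
    track_xy: (T, 2) dead-reckoned horizontal positions
    baro_alt: (T,) altitudes above sea level
    clearance_meas: (T,) radar-altimeter ground clearances
    terrain_map(x, y): stored digital terrain elevation at a point"""
    log_w = np.zeros(len(offsets))
    for i, (dx, dy) in enumerate(offsets):
        predicted = baro_alt - np.array(
            [terrain_map(x + dx, y + dy) for x, y in track_xy])
        residual = clearance_meas - predicted
        log_w[i] = -0.5 * np.sum(residual ** 2) / sigma ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()              # hypothesis probabilities
    return w @ offsets        # posterior-mean position error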
Mercury Content of Sediments in East Fork Poplar Creek: Current Assessment and Past Trends
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, Scott C.; Eller, Virginia A.; Dickson, Johnbull O.
2017-01-01
This study provided new information on sediment mercury (Hg) and monomethylmercury (MMHg) content and chemistry. The current inventory of Hg in East Fork Poplar Creek (EFPC) bed sediments was estimated to be 334 kg, which represents a ~67% decrease relative to the initial investigations in 1984. MMHg sediment inventory was estimated to be 44.1 g, lower but roughly similar to past estimates. The results support the relevance and potential impacts of other active and planned investigations within the Mercury Remediation Technology Development for Lower East Fork Poplar Creek project (e.g., assessment and control of bank soil inputs, sorbents for Hg and MMHg removal, re-introduction of freshwater clams to EFPC), and identify gaps in current understanding that represent opportunities to understand controlling variables that may inform future technology development studies.
Magma ocean formation due to giant impacts
NASA Technical Reports Server (NTRS)
Tonks, W. B.; Melosh, H. J.
1993-01-01
The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance melting occurred from the impact site. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable for estimation of the constraint-based models with many constraints and large numbers of parameters (which use EM) like HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
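As a runnable toy analogue of the staged-fusion idea (far simpler than an HMM: a two-component Gaussian mixture with unit variances and equal weights assumed), the loop below alternates global evolutionary search with periodic EM refinement:

import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

def log_lik(means):
    m1, m2 = means
    p = 0.5 * np.exp(-0.5 * (data - m1) ** 2) + 0.5 * np.exp(-0.5 * (data - m2) ** 2)
    return np.sum(np.log(p))          # additive constants dropped

def em_step(means):
    m1, m2 = means
    d1 = np.exp(-0.5 * (data - m1) ** 2)
    d2 = np.exp(-0.5 * (data - m2) ** 2)
    r1 = d1 / (d1 + d2)               # responsibilities of component 1
    return np.array([np.sum(r1 * data) / np.sum(r1),
                     np.sum((1 - r1) * data) / np.sum(1 - r1)])

def cel_em(pop_size=20, stages=5, ea_gens=10, em_iters=3):
    pop = rng.uniform(-5, 5, (pop_size, 2))
    for _ in range(stages):
        for _ in range(ea_gens):      # EA stage: mutate and select globally
            children = pop + rng.normal(0, 0.5, pop.shape)
            pool = np.vstack([pop, children])
            pop = pool[np.argsort([-log_lik(p) for p in pool])[:pop_size]]
        for _ in range(em_iters):     # EM plugged in periodically to refine
            pop = np.array([em_step(p) for p in pop])
    return max(pop, key=log_lik)

print(cel_em())  # ~ [-2, 3], possibly permuted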
Using Landsat to Diagnose Trends in Disturbance Magnitude Across the National Forest System
NASA Astrophysics Data System (ADS)
Hernandez, A. J.; Healey, S. P.; Stehman, S. V.; Ramsey, R. D.
2014-12-01
The Landsat archive is increasingly being used to detect trends in the occurrence of forest disturbance. Beyond information about the amount of area affected, forest managers need to know if and how disturbance severity is changing. For example, the United States National Forest System (NFS) has developed a comprehensive plan for carbon monitoring, which requires a detailed temporal mapping of forest disturbance magnitudes across 75 million hectares. To meet this need, we have prepared multitemporal models of percent canopy cover that were calibrated with extensive field data from the USFS Forest Inventory and Analysis Program (FIA). By applying these models to pre- and post-event Landsat images at the site of known disturbances, we develop maps showing first-order estimates of disturbance magnitude on the basis of cover removal. However, validation activities consistently show that these initial estimates under-estimate disturbance magnitude. We have developed an approach, which quantifies this under-prediction at the landscape level and uses empirical validation data to adjust change magnitude estimates derived from initial disturbance maps. In an assessment of adjusted magnitude trends of NFS' Northern Region from 1990 to the present, we observed significant declines since 1990 (p < .01) in harvest magnitude, likely related to known reduction of clearcutting practices in the region. Fire, conversely, did not show strongly significant trends in magnitude, despite an increase in the overall area affected. As Landsat is used to provide increasingly precise maps of the timing and location of historical forest disturbance, a logical next step is to use the archive to generate widely interpretable and objective estimates of disturbance magnitude.
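The abstract describes the magnitude adjustment only at a high level; one plausible minimal form of such a validation-based correction is a linear recalibration fitted to reference data, sketched below with purely illustrative numbers:

import numpy as np

# Illustrative validation pairs: mapped canopy-cover loss (%) from the
# Landsat models vs. reference loss (%) from independent interpretation.
mapped = np.array([10, 20, 30, 45, 55, 70], dtype=float)
reference = np.array([18, 31, 44, 60, 72, 88], dtype=float)

# Fit the landscape-level under-prediction as a linear calibration.
slope, intercept = np.polyfit(mapped, reference, deg=1)

def adjusted_magnitude(first_order_estimate):
    """Correct the systematic under-prediction of disturbance magnitude."""
    return np.clip(slope * first_order_estimate + intercept, 0.0, 100.0)

print(adjusted_magnitude(40.0))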
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
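A minimal numerical sketch of the error-rate logic, using a plain binomial probability mass function for CMC scores (the paper develops two statistical models, and the parameter values below are illustrative, not the fitted ones):

from scipy.stats import binom

n_cells = 30       # correlation cell pairs compared per image pair
p_nonmatch = 0.05  # per-cell pass probability for known non-matches
p_match = 0.85     # per-cell pass probability for known matches
c = 6              # declared match requires at least c CMCs

false_positive = binom.sf(c - 1, n_cells, p_nonmatch)  # P(CMC >= c | non-match)
false_negative = binom.cdf(c - 1, n_cells, p_match)    # P(CMC < c | match)
print(f"FPR ~= {false_positive:.2e}, FNR ~= {false_negative:.2e}")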
On-Orbit Multi-Field Wavefront Control with a Kalman Filter
NASA Technical Reports Server (NTRS)
Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David
2008-01-01
A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to JWST primary mirror (PM) segments and secondary mirror (SM) simultaneously with a carefully selected ordering. Through computer simulations, the multi-field WFC procedure shows that it can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam Field of View, and the WFC procedure is also computationally stable as the Monte-Carlo simulations indicate. With the incorporation of a Kalman Filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, and the result is that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after Kalman state estimate and correction, the stability of fine-phasing control is much more certain. Kalman Filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics to generate a Maximum-Likelihood optimal state estimator. The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.
Coldman, Andrew; Phillips, Norm
2013-07-09
There has been growing interest in the overdiagnosis of breast cancer as a result of mammography screening. We report incidence rates in British Columbia before and after the initiation of population screening and provide estimates of overdiagnosis. We obtained the numbers of breast cancer diagnoses from the BC Cancer Registry and screening histories from the Screening Mammography Program of BC for women aged 30-89 years between 1970 and 2009. We calculated age-specific rates of invasive breast cancer and ductal carcinoma in situ. We compared these rates by age, calendar period and screening participation. We obtained 2 estimates of overdiagnosis from cumulative cancer rates among women between the ages of 40 and 89 years: the first estimate compared participants with nonparticipants; the second estimate compared observed and predicted population rates. We calculated participation-based estimates of overdiagnosis to be 5.4% for invasive disease alone and 17.3% when ductal carcinoma in situ was included. The corresponding population-based estimates were -0.7% and 6.7%. Participants had higher rates of invasive cancer and ductal carcinoma in situ than nonparticipants but lower rates after screening stopped. Population incidence rates for invasive cancer increased after 1980; by 2009, they had returned to levels similar to those of the 1970s among women under 60 years of age but remained elevated among women 60-79 years old. Rates of ductal carcinoma in situ increased in all age groups. The extent of overdiagnosis of invasive cancer in our study population was modest and primarily occurred among women over the age of 60 years. However, overdiagnosis of ductal carcinoma in situ was elevated for all age groups. The estimation of overdiagnosis from observational data is complex and subject to many influences. The use of mammography screening in older women has an increased risk of overdiagnosis, which should be considered in screening decisions.
Development and initial validation of the internalization of Asian American stereotypes scale.
Shen, Frances C; Wang, Yu-Wei; Swanson, Jane L
2011-07-01
This research consists of four studies on the initial reliability and validity of the Internalization of Asian American Stereotypes Scale (IAASS), a self-report instrument that measures the degree Asian Americans have internalized racial stereotypes about their own group. The results from the exploratory and confirmatory factor analyses support a stable four-factor structure of the IAASS: Difficulties with English Language Communication, Pursuit of Prestigious Careers, Emotional Reservation, and Expected Academic Success. Evidence for concurrent and discriminant validity is presented. High internal-consistency and test-retest reliability estimates are reported. A discussion of how this scale can contribute to research and practice regarding internalized stereotyping among Asian Americans is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarlata, C.; Mosey, G.
2013-05-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Former Chanute Air Force Base site in Rantoul, Illinois, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) was contacted to provide technical assistance for this project. The purpose of this study was to assess the site for a possible biopower system installation and estimate the cost, performance, and impacts of different biopower options.
Mathias, Susan D; Gao, Sue K; Rutstein, Mark; Snyder, Claire F; Wu, Albert W; Cella, David
2009-02-01
Interpretation of data from health-related quality of life (HRQoL) questionnaires can be enhanced with the availability of minimally important difference (MID) estimates. This information will aid clinicians in interpreting HRQoL differences within patients over time and between treatment groups. The Immune Thrombocytopenic Purpura (ITP)-Patient Assessment Questionnaire (PAQ) is the only comprehensive HRQoL questionnaire available for adults with ITP. Forty centers from within the US and Europe enrolled ITP patients into one of two multicenter, randomized, placebo-controlled, double-blind, 6-month, phase III clinical trials of romiplostim. Patients enrolled in these studies self-administered the ITP-PAQ and two items assessing global change (anchors) at baseline and weeks 4, 12, and 24. Using data from the ITP-PAQ and these two anchors, an anchor-based estimate was computed and combined with the standard error of measurement and standard deviation to compute a distribution-based estimate in order to provide an MID range for each of the 11 scales of the ITP-PAQ. A total of 125 patients participated in these clinical trials and provided data for use in these analyses. Combining results from anchor- and distribution-based approaches, MID values were computed for 9 of the 11 scales. MIDs ranged from 8 to 12 points for Symptoms, Bother, Psychological, Overall QOL, Social Activity, Menstrual Symptoms, and Fertility, while the range was 10 to 15 points for the Fatigue and Activity scales of the ITP-PAQ. These estimates, while slightly higher than other published MID estimates, were consistent with moderate effect sizes. These MID estimates will serve as a useful tool to researchers and clinicians using the ITP-PAQ, providing guidance for interpretation of baseline scores as well as changes in ITP-PAQ scores over time. Additional work should be done to finalize these initial estimates using more appropriate anchors that correlate more highly with the ITP-PAQ scales.
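The distribution-based half of such a computation is standard enough to sketch: one-half of the baseline standard deviation (a moderate effect size) and one standard error of measurement. The combination rule with the anchor-based estimate is the authors' own; the code below only shows the two distribution-based quantities, with simulated scores and an assumed reliability:

import numpy as np

def distribution_based_mid(baseline_scores, reliability):
    """Return the 0.5*SD and SEM benchmarks used in MID triangulation."""
    sd = np.std(baseline_scores, ddof=1)
    sem = sd * np.sqrt(1.0 - reliability)   # SEM = SD * sqrt(1 - reliability)
    return 0.5 * sd, sem

# Simulated 0-100 scale scores for 125 patients, assumed reliability 0.85:
scores = np.random.default_rng(1).normal(60.0, 20.0, size=125)
print(distribution_based_mid(scores, reliability=0.85))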
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
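The abstract does not reproduce the estimator itself; the toy below illustrates only the core correction it motivates, dividing a count of unmarked animals by a telemetry-based estimate of the probability of presence at surveyed sites, and is not the authors' derivation:

import numpy as np

def telemetry_corrected_abundance(count, tagged_present, tagged_total):
    """count: unmarked animals counted at surveyed sites
    tagged_present / tagged_total: radio-tagged animals found at those
    sites over all tagged animals, giving presence probability p_hat."""
    p_hat = tagged_present / tagged_total
    n_hat = count / p_hat
    var_p = p_hat * (1.0 - p_hat) / tagged_total   # binomial variance of p_hat
    se = n_hat * np.sqrt(var_p) / p_hat            # delta-method standard error
    return n_hat, se

# Example: 120 bats counted; 8 of 13 tagged bats were in surveyed roosts.
print(telemetry_corrected_abundance(120, 8, 13))   # ~ (195, ...)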
Le, Thao N; Stockdale, Gary
2011-10-01
The purpose of this study was to examine the effects of school demographic factors and youth's perception of discrimination on delinquency in adolescence and into young adulthood for African American, Asian, Hispanic, and white racial/ethnic groups. Using data from the National Longitudinal Study of Adolescent Health (Add Health), models testing the effect of school-related variables on delinquency trajectories were evaluated for the four racial/ethnic groups using Mplus 5.21 statistical software. Results revealed that greater student ethnic diversity and perceived discrimination, but not teacher ethnic diversity, resulted in higher initial delinquency estimates at 13 years of age for all groups. However, except for African Americans, having a greater proportion of female teachers in the school decreased initial delinquency estimates. For African Americans and whites, a larger school size also increased the initial estimates. Additionally, lower social-economic status increased the initial estimates for whites, and being born in the United States increased the initial estimates for Asians and Hispanics. Finally, regardless of the initial delinquency estimate at age 13 and the effect of the school variables, all groups eventually converged to extremely low delinquency in young adulthood, at the age of 21 years. Educators and public policy makers seeking to prevent and reduce delinquency can modify individual risks by modifying characteristics of the school environment. Policies that promote respect for diversity and intolerance toward discrimination, as well as training to help teachers recognize the precursors and signs of aggression and/or violence, may also facilitate a positive school environment, resulting in lower delinquency. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
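The dual-filter structure can be sketched with two concurrent scalar Kalman filters on a toy AR(1) signal: a state filter that uses the current parameter estimate, and a parameter filter that treats the coefficient as a random walk, in the spirit described above. This is a simplified linear stand-in, not the paper's human-operator model:

import numpy as np

def dual_kf(z, q_x=0.01, q_a=1e-4, r=0.1, a0=0.5):
    """Track state x and random-walk parameter a of x_t = a * x_{t-1} + w."""
    x, P = 0.0, 1.0       # state estimate / variance
    a, Pa = a0, 1.0       # parameter estimate / variance
    a_hist = []
    for zk in z:
        x_prev = x
        Pa += q_a                          # parameter predict (random walk)
        x_pred = a * x_prev                # state predict
        P_pred = a * P * a + q_x
        K = P_pred / (P_pred + r)          # state update
        x = x_pred + K * (zk - x_pred)
        P = (1 - K) * P_pred
        H = x_prev                         # parameter measurement Jacobian
        S = H * Pa * H + P_pred + r
        Ka = Pa * H / S                    # parameter update
        a += Ka * (zk - a * x_prev)
        Pa *= (1 - Ka * H)
        a_hist.append(a)
    return np.array(a_hist)

# Recover a slowly drifting coefficient from noisy observations:
rng = np.random.default_rng(2)
a_true = 0.5 + 0.4 * np.sin(np.linspace(0, np.pi, 2000))
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = a_true[t] * x[t - 1] + 0.1 * rng.normal()
a_est = dual_kf(x + 0.3 * rng.normal(size=2000))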
Gallagher, Glenn; Zhan, Tao; Hsu, Ying-Kuang; Gupta, Pamela; Pederson, James; Croes, Bart; Blake, Donald R; Barletta, Barbara; Meinardi, Simone; Ashford, Paul; Vetter, Arnie; Saba, Sabine; Slim, Rayan; Palandre, Lionel; Clodic, Denis; Mathis, Pamela; Wagner, Mark; Forgie, Julia; Dwyer, Harry; Wolf, Katy
2014-01-21
To provide information for greenhouse gas reduction policies, the California Air Resources Board (CARB) inventories annual emissions of high-global-warming potential (GWP) fluorinated gases, the fastest growing sector of greenhouse gas (GHG) emissions globally. Baseline 2008 F-gas emissions estimates for selected chlorofluorocarbons (CFC-12), hydrochlorofluorocarbons (HCFC-22), and hydrofluorocarbons (HFC-134a) made with an inventory-based methodology were compared to emissions estimates made by ambient-based measurements. Significant discrepancies were found, with the inventory-based emissions methodology resulting in a systematic 42% under-estimation of CFC-12 emissions from older refrigeration equipment and older vehicles, and a systematic 114% overestimation of emissions for HFC-134a, a refrigerant substitute for phased-out CFCs. Initial, inventory-based estimates for all F-gas emissions had assumed that equipment is no longer in service once it reaches its average lifetime of use. Revised emission estimates using improved models for equipment age at end-of-life, inventories, and leak rates specific to California resulted in F-gas emissions estimates in closer agreement to ambient-based measurements. The discrepancies between inventory-based estimates and ambient-based measurements were reduced from -42% to -6% for CFC-12, and from +114% to +9% for HFC-134a.
Williams, K.A.; Frederick, P.C.; Nichols, J.D.
2011-01-01
Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.
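The abstract does not spell out the estimator; in the standard Schwarz-Arnason (POPAN) formulation that the superpopulation approach refers to, the total is the sum of estimated entrants across the k survey occasions, with nest initiation and termination playing the roles of entry and departure:

\hat{N}_{\text{super}} = \sum_{i=0}^{k-1} \hat{B}_i

where \hat{B}_i is the estimated number of individuals (here, nests) entering the population between occasions i and i+1, and \hat{B}_0 counts those already present at the first occasion.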
A model for the prediction of latent errors using data obtained during the development process
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Martello, S. J.
1984-01-01
A model implemented in a program that runs on the IBM PC for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered at one or more of the error discovery processes during development, such as a design inspection, as the input data for a process which provides estimates of the total life-time (injected) error content and of the latent (or post-ship) error content--the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
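A runnable sketch of this kind of estimation, assuming a Rayleigh-style phase-discovery profile of the sort associated with this line of work (the PC program's exact model is not reproduced here, and all counts are illustrative):

import numpy as np
from scipy.optimize import curve_fit

# Errors found at successive discovery activities (design inspection,
# code inspection, unit test, system test): illustrative counts.
phase = np.arange(1, 5, dtype=float)
found = np.array([120.0, 90.0, 45.0, 15.0])

def discovered_in_phase(t, e_total, b):
    """Expected discoveries in phase t if the cumulative discovery
    fraction follows 1 - exp(-b * t^2) (Rayleigh-style profile)."""
    return e_total * (np.exp(-b * (t - 1) ** 2) - np.exp(-b * t ** 2))

(e_total, b), _ = curve_fit(discovered_in_phase, phase, found, p0=(300.0, 0.5))
latent = e_total - found.sum()   # estimated post-ship (latent) errors
print(f"lifetime errors ~= {e_total:.0f}, latent ~= {latent:.0f}")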
Production of Methane and Water from Crew Plastic Waste
NASA Technical Reports Server (NTRS)
Captain, Janine; Santiago, Eddie; Parrish, Clyde; Strayer, Richard F.; Garland, Jay L.
2008-01-01
Recycling is a technology that will be key to creating a self-sustaining lunar outpost. The plastics used for food packaging provide a source of material that could be recycled to produce water and methane. The recycling of these plastics will require some additional resources that will affect the initial estimate of starting materials that will have to be transported from Earth, mainly oxygen, energy and mass. These requirements will vary depending on the recycling conditions. The degradation products of these plastics will vary under different atmospheric conditions. An estimate of the production rate of methane and water using typical ISRU processes along with the plastic recycling will be presented.
Raithel, C.J.; Ginsberg, H.S.; Prospero, M.L.
2006-01-01
The endangered American burying beetle, Nicrophorus americanus, was monitored on Block Island, RI, USA, from 1991-2003 using mark-recapture population estimates of adults collected in pitfall traps. Populations increased through time, especially after 1994 when a program was initiated that provided carrion for beetle production. Beetle captures increased with increasing temperature and dew point, and decreased with increasing wind speed. Short distance movement was not related to wind direction, while longer distance flights tended to be downwind. Although many individuals flew considerable distances along transects, most recaptures were in traps near the point of release. These behaviors probably have counterbalancing effects on population estimates.
An Integrated Approach to Indoor and Outdoor Localization
2017-04-17
A two-step process is proposed that performs an initial localization estimate, followed by particle filter based tracking. Initial localization is performed using WiFi and image observations. For tracking we...source. ...mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to...
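A minimal sketch of the tracking stage described above: generic particle filtering seeded by the initial WiFi/image estimate. The measurement model, noise levels, and names are assumptions, not the report's implementation:

import numpy as np

def pf_step(particles, weights, motion, z, meas_model,
            motion_noise=0.3, meas_noise=2.0, rng=np.random.default_rng()):
    """One predict/update/resample cycle over (N, 2) position particles.
    meas_model(p) returns the expected observation (e.g., WiFi signal
    strength) at position p; z is the actual observation."""
    # Predict: propagate particles with the motion estimate plus noise.
    particles = particles + motion + rng.normal(0, motion_noise, particles.shape)
    # Update: reweight by Gaussian measurement likelihood.
    predicted = np.array([meas_model(p) for p in particles])
    weights = weights * np.exp(-0.5 * ((z - predicted) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights  # position estimate: weights @ particles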
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie; Orton, Christopher; Schwantes, Jon
The Multi-Isotope Process (MIP) Monitor provides an efficient approach to monitoring the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of reprocessing streams in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor), initial enrichment, burn up, and cooling time. Simulated gamma spectra were used to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type. Locally weighted PLS models were fitted on-the-fly to estimate continuous fuel characteristics. Burn up was predicted within 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment within approximately 2% RMSPE. This automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream to improve sensitivity to changes in operational parameters and material diversions.
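A compact sketch of the two model types named above, using scikit-learn's PLS implementation on random stand-in data (real inputs would be simulated gamma spectra with known burnup, cooling time, and enrichment; everything below is illustrative):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.random((200, 1024))                    # 200 spectra, 1024 channels
Y = rng.random((200, 3)) * [60.0, 20.0, 5.0]   # burnup, cooling, enrichment

# Global PLS model (analogue of the PLS classification/regression step):
global_model = PLSRegression(n_components=10).fit(X, Y)

def locally_weighted_pls(x_query, k=50, n_components=5):
    """Fit a local model on-the-fly to the k nearest library spectra,
    a simplified unweighted version of locally weighted PLS."""
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    local = PLSRegression(n_components=n_components).fit(X[idx], Y[idx])
    return local.predict(x_query[None, :])[0]

print(locally_weighted_pls(X[0]))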
Imbibition of hydraulic fracturing fluids into partially saturated shale
NASA Astrophysics Data System (ADS)
Birdsell, Daniel T.; Rajaram, Harihar; Lackey, Greg
2015-08-01
Recent studies suggest that imbibition of hydraulic fracturing fluids into partially saturated shale is an important mechanism that restricts their migration, thus reducing the risk of groundwater contamination. We present computations of imbibition based on an exact semianalytical solution for spontaneous imbibition. These computations lead to quantitative estimates of an imbibition rate parameter (A) with units of LT-1/2 for shale, which is related to porous medium and fluid properties, and the initial water saturation. Our calculations suggest that significant fractions of injected fluid volumes (15-95%) can be imbibed in shale gas systems, whereas imbibition volumes in shale oil systems is much lower (3-27%). We present a nondimensionalization of A, which provides insights into the critical factors controlling imbibition, and facilitates the estimation of A based on readily measured porous medium and fluid properties. For a given set of medium and fluid properties, A varies by less than factors of ˜1.8 (gas nonwetting phase) and ˜3.4 (oil nonwetting phase) over the range of initial water saturations reported for the Marcellus shale (0.05-0.6). However, for higher initial water saturations, A decreases significantly. The intrinsic permeability of the shale and the viscosity of the fluids are the most important properties controlling the imbibition rate.
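The abstract defines A only through its units (L T^{-1/2}); in sorptivity-style formulations of spontaneous imbibition, the cumulative volume taken up per unit of wetted fracture surface area \Sigma grows with the square root of time, so under these assumptions

V_{\text{imb}}(t) = A\,\Sigma\,\sqrt{t},

and the imbibed fraction of an injected volume V_{\text{inj}} is A\,\Sigma\,\sqrt{t}/V_{\text{inj}}, which is how the same A can yield the very different gas-system and oil-system fractions quoted above as \Sigma and the fluid properties vary.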
NASA Astrophysics Data System (ADS)
Cho, Hyunjung; Jin, Kyeong Sik; Lee, Jaegeun; Lee, Kun-Hong
2018-07-01
Small angle x-ray scattering (SAXS) was used to estimate the degree of polymerization of polymer-grafted carbon nanotubes (CNTs) synthesized using a ‘grafting from’ method. This analysis characterizes the grafted polymer chains without cleaving them from CNTs, and provides reliable data that can complement conventional methods such as thermogravimetric analysis or transmittance electron microscopy. Acrylonitrile was polymerized from the surface of the CNTs by using redox initiation to produce poly-acrylonitrile-grafted CNTs (PAN-CNTs). Polymerization time and the initiation rate were varied to control the degree of polymerization. The radius of gyration (R_g) of PAN-CNTs was determined using the Guinier plot obtained from SAXS solution analysis. The results showed consistent values according to the polymerization condition, up to a maximum R_g = 125.70 Å, whereas that of pristine CNTs was 99.23 Å. The dispersibility of PAN-CNTs in N,N-dimethylformamide was tested using ultraviolet-visible-near infrared spectroscopy and was confirmed to increase as the degree of polymerization increased. This analysis will be helpful to estimate the degree of polymerization of any polymer-grafted CNTs synthesized using the ‘grafting from’ method and to fabricate polymer/CNT composite materials.
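For reference, the Guinier analysis named above rests on the low-q approximation

\ln I(q) \simeq \ln I(0) - \frac{q^{2} R_g^{2}}{3},

so R_g follows from the slope of \ln I(q) versus q^{2} in the Guinier regime, commonly taken as q R_g \lesssim 1.3 (the paper's exact fitting range is not stated in the abstract).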
Estimation of effective connectivity via data-driven neural modeling
Freestone, Dean R.; Karoly, Philippa J.; Nešić, Dragan; Aram, Parham; Cook, Mark J.; Grayden, David B.
2014-01-01
This research introduces a new method for functional brain imaging via a process of model inversion. By estimating parameters of a computational model, we are able to track effective connectivity and mean membrane potential dynamics that cannot be directly measured using electrophysiological measurements alone. The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, under the assumption the model captures the key features of the cortical circuits of interest, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements. The method is based on approximating brain networks using an interconnected neural population model. The neural population model is based on a neural mass model that describes the functional activity of the brain, capturing the mesoscopic biophysics and anatomical structure. The model is made subject-specific by estimating the strength of intra-cortical connections within a region and inter-cortical connections between regions using a novel Kalman filtering method. We demonstrate through simulation how the framework can be used to track the mechanisms involved in seizure initiation and termination. PMID:25506315
Bringing physiology into PET of the liver.
Keiding, Susanne
2012-03-01
Several physiologic features make interpretation of PET studies of liver physiology an exciting challenge. As with other organs, hepatic tracer kinetics using PET is quantified by dynamic recording of the liver after the administration of a radioactive tracer, with measurements of time-activity curves in the blood supply. However, the liver receives blood from both the portal vein and the hepatic artery, with the peak of the portal vein time-activity curve being delayed and dispersed compared with that of the hepatic artery. The use of a flow-weighted dual-input time-activity curve is of importance for the estimation of hepatic blood perfusion through initial dynamic PET recording. The portal vein is inaccessible in humans, and methods of estimating the dual-input time-activity curve without portal vein measurements are being developed. Such methods are used to estimate regional hepatic blood perfusion, for example, by means of the initial part of a dynamic (18)F-FDG PET/CT recording. Later, steady-state hepatic metabolism can be assessed using only the arterial input, provided that neither the tracer nor its metabolites are irreversibly trapped in the prehepatic splanchnic area within the acquisition period. This is used in studies of regulation of hepatic metabolism of, for example, (18)F-FDG and (11)C-palmitate.
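As a schematic illustration of the flow-weighted dual-input concept, the sketch below mixes an arterial time-activity curve with a portal-venous curve that is modeled, purely for illustration, as the arterial curve convolved with a single-exponential dispersion kernel (capturing the delay and dispersion noted above); the arterial flow fraction and the dispersion constant are assumed values, not measured ones.

    import numpy as np

    t = np.arange(0.0, 181.0)                        # s, 1 s sampling
    Ca = 100.0 * np.exp(-((t - 30.0) / 10.0) ** 2)   # toy arterial input, kBq/mL

    tau = 25.0                                   # assumed dispersion constant, s
    kernel = np.exp(-t / tau) / tau              # unit-area dispersion kernel
    Cpv = np.convolve(Ca, kernel)[: t.size]      # delayed, dispersed portal curve

    f_a = 0.25                                   # assumed arterial flow fraction
    C_dual = f_a * Ca + (1.0 - f_a) * Cpv        # flow-weighted dual input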
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Pourtakdoust, Seid H.
2014-12-01
A novel algorithm is presented in this study for estimation of spacecraft attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to incorporating the latest measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has eased the computational burden as well. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
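The q-method mentioned for filter initialization is Davenport's eigenvalue solution of Wahba's problem: the optimal attitude quaternion is the eigenvector, for the largest eigenvalue, of a 4×4 matrix assembled from the weighted vector observations (e.g., TAM and sun sensor directions). A minimal sketch, assuming a scalar-last quaternion convention:

    import numpy as np

    def q_method(body_vecs, ref_vecs, weights):
        """Davenport's q-method: quaternion (x, y, z, w) that rotates the
        reference-frame unit vectors into the body-frame unit vectors."""
        B = sum(w * np.outer(b, r)
                for w, b, r in zip(weights, body_vecs, ref_vecs))
        S, sigma = B + B.T, np.trace(B)
        z = sum(w * np.cross(b, r)
                for w, b, r in zip(weights, body_vecs, ref_vecs))
        K = np.zeros((4, 4))
        K[:3, :3] = S - sigma * np.eye(3)
        K[:3, 3] = K[3, :3] = z
        K[3, 3] = sigma
        vals, vecs = np.linalg.eigh(K)
        return vecs[:, np.argmax(vals)]       # dominant eigenvector

    # Identity-attitude check with two weighted observations:
    r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    print(q_method([r1, r2], [r1, r2], [0.6, 0.4]))   # ~ (0, 0, 0, 1)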
ERIC Educational Resources Information Center
Bird, Ronald E.
This paper describes an initial effort to provide a carefully reasoned, factually based, systematic analysis of teacher pay in comparison to pay in other occupations available to college-educated workers. It also reports on the sensitivity of these salary comparison estimates to differences in certain characteristics of the labor force, such as…
Determination of Eros Physical Parameters for Near Earth Asteroid Rendezvous Orbit Phase Navigation
NASA Technical Reports Server (NTRS)
Miller, J. K.; Antreasian, P. J.; Georgini, J.; Owen, W. M.; Williams, B. G.; Yeomans, D. K.
1995-01-01
Navigation of the orbit phase of the Near Earth Asteroid Rendezvous (NEAR) mission will require determination of certain physical parameters describing the size, shape, gravity field, attitude and inertial properties of Eros. Prior to launch, little was known about Eros except for its orbit, which could be determined with high precision from ground based telescope observations. Radar bounce and light curve data provided a rough estimate of Eros' shape and a fairly good estimate of the pole, prime meridian and spin rate. However, the determination of the NEAR spacecraft orbit requires a high precision model of Eros's physical parameters, and the ground based data provide only marginal a priori information. Eros is the principal source of perturbations of the spacecraft's trajectory and the principal source of data for determining the orbit. The initial orbit determination strategy is therefore concerned with developing a precise model of Eros. The original plan for Eros orbital operations was to execute a series of rendezvous burns beginning on December 20, 1998 and insert into a close Eros orbit in January 1999. As a result of an unplanned termination of the rendezvous burn on December 20, 1998, the NEAR spacecraft continued on its high velocity approach trajectory and passed within 3900 km of Eros on December 23, 1998. The planned rendezvous burn was delayed until January 3, 1999, which resulted in the spacecraft being placed on a trajectory that slowly returns to Eros with a subsequent delay of close Eros orbital operations until February 2001. The flyby of Eros provided a brief glimpse and allowed for a crude estimate of the pole, prime meridian and mass of Eros. More importantly for navigation, orbit determination software was executed in the landmark tracking mode to determine the spacecraft orbit, and a preliminary shape and landmark database has been obtained. The flyby also provided an opportunity to test orbit determination operational procedures that will be used in February of 2001. The initial attitude and spin rate of Eros, as well as estimates of reference landmark locations, are obtained from images of the asteroid. These initial estimates are used as a priori values for a more precise refinement of these parameters by the orbit determination software, which combines optical measurements with Doppler tracking data to obtain solutions for the required parameters. As the spacecraft is maneuvered closer to the asteroid, estimates of spacecraft state, asteroid attitude, solar pressure, landmark locations and Eros physical parameters including mass, moments of inertia and gravity harmonics are determined with increasing precision. The determination of the elements of the inertia tensor of the asteroid is critical to spacecraft orbit determination and prediction of the asteroid attitude. The moments of inertia about the principal axes are also of scientific interest since they provide some insight into the internal mass distribution. Determination of the principal axes moments of inertia will depend on observing free precession in the asteroid's attitude dynamics. Gravity harmonics are in themselves of interest to science. When compared with the asteroid shape, some insight may be obtained into Eros' internal structure. The location of the center of mass derived from the first degree harmonic coefficients gives a direct indication of overall mass distribution. The second degree harmonic coefficients relate to the radial distribution of mass.
Higher degree harmonics may be compared with surface features to gain additional insight into mass distribution. In this paper, estimates of Eros physical parameters obtained from the December 23, 1998 flyby will be presented. This new knowledge will be applied to simplification of Eros orbital operations in February of 2001. The resulting revision to the orbit determination strategy will also be discussed.
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming
2016-12-01
An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector serves as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitudinal and positional errors converge to within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has the potential for application in lunar rover navigation.
Patient-centered medical home implementation and primary care provider turnover.
Sylling, Philip W; Wong, Edwin S; Liu, Chuan-Fen; Hernandez, Susan E; Batten, Adam J; Helfrich, Christian D; Nelson, Karin; Fihn, Stephan D; Hebert, Paul L
2014-12-01
The Veterans Health Administration (VHA) began implementing a patient-centered medical home (PCMH) model of care delivery in April 2010 through its Patient Aligned Care Team (PACT) initiative. PACT represents a substantial system reengineering of VHA primary care and its potential effect on primary care provider (PCP) turnover is an important but unexplored relationship. This study examined the association between a system-wide PCMH implementation and PCP turnover. This was a retrospective, longitudinal study of VHA-employed PCPs spanning 29 calendar quarters before PACT and eight quarters of PACT implementation. PCP employment periods were identified from administrative data and turnover was defined by an indicator on the last quarter of each uncensored period. An interrupted time series model was used to estimate the association between PACT and turnover, adjusting for secular trend and seasonality, provider and job characteristics, and local unemployment. We calculated average marginal effects (AME), which reflected the change in turnover probability associated with PACT implementation. The quarterly rate of PCP turnover was 3.06% before PACT and 3.38% after initiation of PACT. In adjusted analysis, PACT was associated with a modest increase in turnover (AME=4.0 additional PCPs per 1000 PCPs per quarter, P=0.004). Models with interaction terms suggested that the PACT-related change in turnover was increasing in provider age and experience. PACT was associated with a modest increase in PCP turnover, concentrated among older and more experienced providers, during initial implementation. Our findings suggest that policymakers should evaluate potential workforce effects when implementing PCMH.
Inverse analysis of turbidites by machine learning
NASA Astrophysics Data System (ADS)
Naruse, H.; Nakao, K.
2017-12-01
This study aims to propose a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites by using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, which produces a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide the estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce a NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for density-stratification effects was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN, and the estimated values show very small deviation from the true parameters. Compared to previous inverse modeling of turbidity currents, our methodology is superior especially in computational efficiency. Also, our methodology has advantages in extensibility and applicability to various sediment transport processes such as pyroclastic flows or debris flows.
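A toy version of this train-on-simulations, invert-with-a-network workflow is sketched below with a generic feed-forward regressor; the two-parameter forward model is a stand-in for the shallow-water simulator, and the layer sizes and parameter ranges are arbitrary assumptions:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def forward_model(cond):
        """Stand-in for the turbidity-current simulator: maps initial
        conditions (velocity, concentration) to a deposit profile."""
        velocity, concentration = cond
        x = np.linspace(0.0, 1.0, 50)
        return (concentration * np.exp(-x / (0.2 + velocity))
                + rng.normal(0.0, 1e-4, x.size))

    # Training set from repeated forward simulation (1000 runs, as above)
    conds = rng.uniform([0.5, 0.01], [5.0, 0.10], size=(1000, 2))
    deposits = np.array([forward_model(c) for c in conds])

    # Supervised inversion: deposit profile -> initial conditions
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                      random_state=0).fit(deposits, conds)

    # Independent simulations to evaluate the trained network
    test = rng.uniform([0.5, 0.01], [5.0, 0.10], size=(5, 2))
    pred = nn.predict(np.array([forward_model(c) for c in test]))
    print(np.round(test, 3), np.round(pred, 3), sep="\n")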
Lee, H Y; Lee, J; Henning-Smith, C; Choi, J
2017-11-01
This study identifies whether, and how, human papillomavirus (HPV) literacy and predisposing, enabling, and need factors are associated with HPV vaccine initiation and completion among young adults in Minnesota. Cross-sectional survey design. Using a sample of 170 young adults (aged 18-26 years), we used logistic regression models to identify factors associated with HPV vaccination initiation and completion, including HPV literacy, adjusting for relevant predisposing, enabling, and need factors. Consistent with national estimates, we found relatively low rates of HPV vaccination initiation (46%) and completion (36%). Better HPV literacy was significantly associated with higher rates of both initiation and completion, as was being female and having an annual check-up. Being married/partnered was significantly associated with lower odds of HPV vaccination. Public health programs, policy-makers, and healthcare providers can use these results to increase HPV vaccination rates by making concerted efforts to improve HPV vaccination literacy through individual and public education campaigns and by improving access to annual check-ups. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Gong, Inna Y.; Schwarz, Ute I.; Crown, Natalie; Dresser, George K.; Lazo-Langner, Alejandro; Zou, GuangYong; Roden, Dan M.; Stein, C. Michael; Rodger, Marc; Wells, Philip S.; Kim, Richard B.; Tirona, Rommel G.
2011-01-01
Variable warfarin response during treatment initiation poses a significant challenge to providing optimal anticoagulation therapy. We investigated the determinants of initial warfarin response in a cohort of 167 patients. During the first nine days of treatment with pharmacogenetics-guided dosing, S-warfarin plasma levels and international normalized ratio were obtained to serve as inputs to a pharmacokinetic-pharmacodynamic (PK-PD) model. Individual PK (S-warfarin clearance) and PD (Imax) parameter values were estimated. Regression analysis demonstrated that CYP2C9 genotype, kidney function, and gender were independent determinants of S-warfarin clearance. The values for Imax were dependent on VKORC1 and CYP4F2 genotypes, vitamin K status (as measured by plasma concentrations of proteins induced by vitamin K absence, PIVKA-II) and weight. Importantly, indication for warfarin was a major independent determinant of Imax during initiation, where PD sensitivity was greater in atrial fibrillation than venous thromboembolism. To demonstrate the utility of the global PK-PD model, we compared the predicted initial anticoagulation responses with previously established warfarin dosing algorithms. These insights and modeling approaches have application to personalized warfarin therapy. PMID:22114699
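To convey the flavor of such a PK-PD model, the sketch below couples one-compartment S-warfarin kinetics to an indirect-response (Imax) model of clotting-factor synthesis, whose decline drives the INR; the structure is a textbook simplification, and every parameter value is an illustrative assumption rather than an estimate from this cohort.

    from scipy.integrate import solve_ivp

    def pkpd(t, y, CL, V, Imax, IC50, kin, kout):
        """One-compartment PK (drug amount A) driving an indirect-response
        PD model for clotting-factor activity F (INR rises as F falls)."""
        A, F = y
        C = A / V                                   # plasma concentration
        inhibition = Imax * C / (IC50 + C)          # Imax model
        return [-(CL / V) * A,                      # first-order elimination
                kin * (1.0 - inhibition) - kout * F]

    # Assumed values: CL=0.2 L/h, V=10 L, Imax=0.9, IC50=0.5 mg/L,
    # kin=kout=0.02 1/h; a single 5 mg dose followed for nine days.
    sol = solve_ivp(pkpd, [0.0, 216.0], [5.0, 1.0],
                    args=(0.2, 10.0, 0.9, 0.5, 0.02, 0.02), max_step=1.0)
    print(f"clotting-factor activity at day 9: {sol.y[1, -1]:.2f}")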
Parent-child communication and marijuana initiation: evidence using discrete-time survival analysis.
Nonnemaker, James M; Silber-Ashley, Olivia; Farrelly, Matthew C; Dench, Daniel
2012-12-01
This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or - in the case of youth reports of communication - potentially harmful (leading to increased likelihood of marijuana initiation). Copyright © 2012 Elsevier Ltd. All rights reserved.
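Mechanically, a discrete-time survival analysis expands each respondent into one record per period at risk and fits a logistic model to the initiation indicator, so the exponentiated coefficients approximate relative risks. A minimal sketch with a made-up micro-dataset and hypothetical column names:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per youth: age at marijuana initiation (NaN if not observed),
    # last age observed, and a parent-child communication indicator.
    people = pd.DataFrame({
        "pid": [1, 2, 3, 4, 5],
        "age_init": [15, None, 17, None, 16],
        "age_last": [15, 18, 17, 18, 16],
        "parent_comm": [1, 0, 1, 1, 0],
    })

    # Expand to person-period form: one row per person-year at risk from 12
    rows = []
    for p in people.itertuples():
        for age in range(12, int(p.age_last) + 1):
            rows.append({"age": age, "parent_comm": p.parent_comm,
                         "initiated": int(p.age_init == age)})
    pp = pd.DataFrame(rows)

    # Discrete-time hazard via logit on the person-period file
    fit = smf.logit("initiated ~ parent_comm + age", data=pp).fit(disp=0)
    print(fit.params)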
Testing Software Development Project Productivity Model
NASA Astrophysics Data System (ADS)
Lipkin, Ilya
Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and leverages those results in this study. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.
Improving Children’s Knowledge of Fraction Magnitudes
Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.
2016-01-01
We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards’ suggestions for teaching fractions, would improve children’s fraction magnitude understanding. Fourth- and fifth-graders were given brief instruction about unit fractions and played Catch the Monster with Fractions, a game in which they estimated fraction locations on a number line and received feedback on the accuracy of their estimates. The intervention lasted less than 15 minutes. In our initial study, children showed large gains from pretest to posttest in their fraction number line estimates, magnitude comparisons, and recall accuracy. In a more rigorous second study, the experimental group showed similarly large improvements, whereas a control group showed no improvement from practicing fraction number line estimates without feedback. The results provide evidence for the effectiveness of interventions emphasizing fraction magnitudes and indicate how psychological theories and research can be used to evaluate specific recommendations of the Common Core State Standards. PMID:27768756
Vision-guided gripping of a cylinder
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1991-01-01
The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
Hanford Site Composite Analysis Technical Approach Description: Groundwater
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budge, T. J.
The groundwater facet of the revised CA is responsible for generating predicted contaminant concentration values over the entire analysis spatial and temporal domain. These estimates will be used as part of the groundwater pathway dose calculation facet to estimate dose for exposure scenarios. Based on the analysis of existing models and available information, the P2R Model was selected as the numerical simulator to provide these estimates over the 10,000-year temporal domain of the CA. The P2R Model will use inputs from initial plume distributions, updated for a start date of 1/1/2017, and inputs from the vadose zone facet, created by a tool under development as part of the ICF, to produce estimates of hydraulic head, transmissivity, and contaminant concentration over time. A recommendation of acquiring 12 computer processors and 2 TB of hard drive space is made to ensure that the work can be completed within the anticipated schedule of the revised CA.
Cost Estimate for Molybdenum and Tantalum Refractory Metal Alloy Flow Circuit Concepts
NASA Technical Reports Server (NTRS)
Hickman, Robert R.; Martin, James J.; Schmidt, George R.; Godfroy, Thomas J.; Bryhan, A.J.
2010-01-01
The Early Flight Fission-Test Facilities (EFF-TF) team at NASA Marshall Space Flight Center (MSFC) has been tasked by the Naval Reactors Prime Contract Team (NRPCT) to provide a cost and delivery rough order of magnitude estimate for a refractory metal-based lithium (Li) flow circuit. The design is based on the stainless steel Li flow circuit that is currently being assembled for an NRPCT task underway at the EFF-TF. While geometrically the flow circuit is not representative of a final flight prototype, knowledge has been gained to quantify (time and cost) the materials, manufacturing, fabrication, assembly, and operations to produce a testable configuration. This Technical Memorandum (TM) also identifies the following key issues that need to be addressed by the fabrication process: alloy selection and forming, cost and availability, welding, bending, machining, assembly, and instrumentation. Several candidate materials were identified by NRPCT, including molybdenum (Mo) alloy (Mo-47.5%Re), tantalum (Ta) alloys (T-111, ASTAR-811C), and niobium (Nb) alloy (Nb-1%Zr). This TM is focused only on the Mo and Ta alloys, since they are of higher concern to the ongoing effort. The initial estimate to complete a Mo-47%Re system ready for testing is approximately $9,000k over a period of 30 months. The initial estimate to complete a T-111 or ASTAR-811C system ready for testing is approximately $12,000k over a period of 36 months.
Cole, Stephen R.; Hudgens, Michael G.; Tien, Phyllis C.; Anastos, Kathryn; Kingsley, Lawrence; Chmiel, Joan S.; Jacobson, Lisa P.
2012-01-01
To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding. PMID:22302074
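Schematically, the estimation weights each subject by the product of an inverse-probability-of-treatment weight and a case-cohort sampling weight, then fits a weighted Cox model with a robust variance. The sketch below uses the lifelines package with hypothetical file and column names; in the actual analysis the treatment weights were time-varying, which this time-fixed simplification ignores.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical analysis file: follow-up time, AIDS/death indicator,
    # ART initiation, estimated treatment probability, sampling probability
    df = pd.read_csv("case_cohort.csv")
    # columns: time, event, art, p_treat, p_sample (cases have p_sample = 1)

    df["ipw"] = 1.0 / (df["p_treat"] * df["p_sample"])   # combined weight

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event", formula="art",
            weights_col="ipw", robust=True)   # robust SEs for weighted data
    cph.print_summary()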
Natarajan, A T; Santos, S J; Darroudi, F; Hadjidikova, V; Vermeulen, S; Chatterjee, S; Berg, M; Grigorova, M; Sakamoto-Hojo, E T; Granath, F; Ramalho, A T; Curado, M P
1998-05-25
The radiation accident in focus here occurred in a section of Goiânia (Brazil), where more than a hundred individuals were contaminated with 137Cs in September 1987. In order to estimate the absorbed radiation doses, initial frequencies of dicentrics and rings were determined in 129 victims [A.T. Ramalho, PhD Thesis, Subsidios a tecnica de dosimetria citogenetica gerados a partir da analise de resultados obtidos com o acidente radiologico de Goiânia, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil, 1992]. We have followed some of these victims cytogenetically over the years, seeking parameters that could be used as a basis for retrospective radiation dosimetry. Our data on translocation frequencies obtained by fluorescence in situ hybridization (FISH) could be directly compared to the baseline frequencies of dicentrics available for those same victims. Our results provided valuable information on how precise these estimates are. The frequencies of translocations observed years after the radiation exposure were two to three times lower than the initial dicentric frequencies, the differences being larger at higher doses (>1 Gy). The accuracy of such dose estimates might be increased by scoring a sufficient number of cells. However, factors such as the persistence of translocation-carrying lymphocytes, translocation levels not proportional to chromosome size, and inter-individual variation reduce the precision of these estimates. Copyright 1998 Elsevier Science B.V. All rights reserved.
New formulations for tsunami runup estimation
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Aydin, B.; Ceylan, N.
2017-12-01
We evaluate shoreline motion and maximum runup in two ways. First, we use the linear shallow water-wave equations over a sloping beach, solved as an initial-boundary value problem analogous to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties, and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further reduced shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a form very convenient for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e., an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup, and present results similar to Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).
Parameter identification of material constants in a composite shell structure
NASA Technical Reports Server (NTRS)
Martinez, David R.; Carne, Thomas G.
1988-01-01
One of the basic requirements in engineering analysis is the development of a mathematical model describing the system. Frequently comparisons with test data are used as a measurement of the adequacy of the model. An attempt is typically made to update or improve the model to provide a test verified analysis tool. System identification provides a systematic procedure for accomplishing this task. The terms system identification, parameter estimation, and model correlation all refer to techniques that use test information to update or verify mathematical models. The goal of system identification is to improve the correlation of model predictions with measured test data, and produce accurate, predictive models. For nonmetallic structures the modeling task is often difficult due to uncertainties in the elastic constants. A finite element model of the shell was created, which included uncertain orthotropic elastic constants. A modal survey test was then performed on the shell. The resulting modal data, along with the finite element model of the shell, were used in a Bayes estimation algorithm. This permitted the use of covariance matrices to weight the confidence in the initial parameter values as well as confidence in the measured test data. The estimation procedure also employed the concept of successive linearization to obtain an approximate solution to the original nonlinear estimation problem.
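In this setting the Bayes estimator with successive linearization is essentially an iterated Gauss-Newton (maximum a posteriori) update in which the prior parameter covariance and the measurement covariance both act as weights. A generic sketch, with the model function and all matrices supplied by the user as assumptions:

    import numpy as np

    def bayes_estimate(h, theta0, P0, y, R, n_iter=10, eps=1e-6):
        """Iterated linearized Bayes update for parameters theta.
        h      : model, theta -> predicted data (e.g., modal frequencies)
        theta0 : prior parameter values (initial elastic constants)
        P0     : prior covariance (confidence in the initial parameters)
        y, R   : measured data and its covariance (confidence in the test)
        """
        theta = np.asarray(theta0, dtype=float).copy()
        for _ in range(n_iter):
            H = np.column_stack([(h(theta + eps * e) - h(theta)) / eps
                                 for e in np.eye(theta.size)])   # Jacobian
            K = P0 @ H.T @ np.linalg.inv(H @ P0 @ H.T + R)       # gain
            theta = theta0 + K @ (y - h(theta) + H @ (theta - theta0))
        return theta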
NASA Astrophysics Data System (ADS)
Rhodes, Russel E.; Byrd, Raymond J.
1998-01-01
This paper presents a "back of the envelope" technique for fast, timely, on-the-spot assessment of the affordability (profitability) of commercial space transportation architectural concepts. The tool presented here is not intended to replace conventional, detailed costing methodology. The process described enables "quick look" estimations and assumptions to effectively determine whether an initial concept (with its attendant cost estimating line items) provides focus for major leapfrog improvement. The Cost Charts Users Guide provides a generic sample tutorial, building an approximate understanding of the basic launch system cost factors and their representative magnitudes. This process will enable the user to develop a net "cost (and price) per payload-mass unit to orbit" incorporating a variety of significant cost drivers, supplemental to basic vehicle cost estimates. If acquisition cost and recurring cost factors (as a function of cost per payload-mass unit to orbit) do not meet the predetermined system-profitability goal, the concept in question will be clearly seen as non-competitive. Multiple analytical approaches, and applications of a variety of interrelated assumptions, can be examined in a quick, on-the-spot cost approximation analysis, as this tool has inherent flexibility. The technique will allow determination of concept conformance to system objectives.
Gaussian Decomposition of Laser Altimeter Waveforms
NASA Technical Reports Server (NTRS)
Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan
1999-01-01
We develop a method to decompose a laser altimeter return waveform into its Gaussian components, assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
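A compact sketch of that pipeline (smoothing, consecutive inflection points for the position and half-width guesses, then Levenberg-Marquardt refinement, which is curve_fit's default for unconstrained problems) is given below; for brevity the amplitude guesses are read off the smoothed waveform instead of the non-negative least-squares step, and the importance ranking is omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.optimize import curve_fit

    def multi_gauss(t, *p):               # p = [A1, mu1, s1, A2, mu2, s2, ...]
        g = np.zeros_like(t)
        for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
            g += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
        return g

    def decompose(t, w, sigma_smooth=3.0):
        ws = gaussian_filter1d(w, sigma_smooth)           # smoothed copy
        curv = np.gradient(np.gradient(ws))
        infl = np.where(np.diff(np.sign(curv)) != 0)[0]   # inflection points
        p0 = []
        for i, j in zip(infl[:-1:2], infl[1::2]):         # consecutive pairs
            mu = 0.5 * (t[i] + t[j])                      # position guess
            s = max(0.5 * (t[j] - t[i]), 1e-3)            # half-width guess
            p0 += [ws[(i + j) // 2], mu, s]               # amplitude guess
        popt, _ = curve_fit(multi_gauss, t, w, p0=p0, maxfev=20000)
        return popt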
The Killer Fly Hunger Games: Target Size and Speed Predict Decision to Pursuit
Wardill, Trevor J.; Knowles, Katie; Barlow, Laura; Tapia, Gervasio; Nordström, Karin; Olberg, Robert M.; Gonzalez-Bellido, Paloma T.
2015-01-01
Predatory animals have evolved to optimally detect their prey using exquisite sensory systems such as vision, olfaction and hearing. It may not be so surprising that vertebrates, with large central nervous systems, excel at predatory behaviors. More striking is the fact that many tiny insects, with their minuscule brains and scaled down nerve cords, are also ferocious, highly successful predators. For predation, it is important to determine whether a prey is suitable before initiating pursuit. This is paramount since pursuing a prey that is too large to capture, subdue or dispatch will generate a substantial metabolic cost (in the form of muscle output) without any chance of metabolic gain (in the form of food). In addition, during all pursuits, the predator breaks its potential camouflage and thus runs the risk of becoming prey itself. Many insects use their eyes to initially detect and subsequently pursue prey. Dragonflies, which are extremely efficient predators, therefore have huge eyes with relatively high spatial resolution that allow efficient prey size estimation before initiating pursuit. However, much smaller insects, such as killer flies, also visualize and successfully pursue prey. This is an impressive behavior since the small size of the killer fly naturally limits the neural capacity and also the spatial resolution provided by the compound eye. Despite this, here we show that killer flies efficiently pursue natural (Drosophila melanogaster) and artificial (beads) prey. The natural pursuits are initiated at a distance of 7.9 ± 2.9 cm, which we show is too far away to allow for distance estimation using binocular disparities. Moreover, we show that rather than estimating absolute prey size prior to launching the attack, as dragonflies do, killer flies attack with high probability when the ratio of the prey's subtended retinal velocity to its retinal size is 0.37. We also show that killer flies will respond to a stimulus of an angular size that is smaller than that of the photoreceptor acceptance angle, and that the predatory response is strongly modulated by the metabolic state. Our data thus provide an exciting example of a loosely designed matched filter to Drosophila, but one which will still generate successful pursuits of other suitable prey. PMID:26398293
Gould, William R.; Kendall, William L.
2013-01-01
Capture-recapture methods were initially developed to estimate human population abundance, but since that time have seen widespread use for fish and wildlife populations to estimate and model various parameters of population, metapopulation, and disease dynamics. Repeated sampling of marked animals provides information for estimating abundance and tracking the fate of individuals in the face of imperfect detection. Mark types have evolved from clipping or tagging to use of noninvasive methods such as photography of natural markings and DNA collection from feces. Survival estimation has been emphasized more recently as have transition probabilities between life history states and/or geographical locations, even where some states are unobservable or uncertain. Sophisticated software has been developed to handle highly parameterized models, including environmental and individual covariates, to conduct model selection, and to employ various estimation approaches such as maximum likelihood and Bayesian approaches. With these user-friendly tools, complex statistical models for studying population dynamics have been made available to ecologists. The future will include a continuing trend toward integrating data types, both for tagged and untagged individuals, to produce more precise and robust population models.
Estimates of cancer burden in Abruzzo and Molise.
Foschi, Roberto; Viviano, Lorena; Rossi, Silvia
2013-01-01
Abruzzo and Molise are two regions located in the south of Italy, currently without population-based cancer registries. The aim of this paper is to provide estimates of cancer incidence, mortality and prevalence for the Abruzzo and Molise regions combined. The MIAMOD method, a back-calculation approach to estimate and project the incidence of chronic diseases from mortality and patient survival, was used for the estimation of incidence and prevalence by calendar year (from 1970 to 2015) and age (from 0 to 99). The survival estimates are based on cancer registry data of southern Italy. The most frequently diagnosed cancers were those of the colon and rectum, breast and prostate, with 1,394, 1,341 and 698 new diagnosed cases, respectively, estimated in 2012. Incidence rates were estimated to increase constantly for female breast cancer, colorectal cancer in men and melanoma in both sexes. For prostate cancer and male lung cancer, the incidence rates increased, reaching a peak, and then decreased. In women the incidence of colorectal and lung cancer stabilized after an initial increase. For stomach and cervical cancers, the incidence rates showed a constant decrease. Prevalence was increasing for all the considered cancer sites with the exception of the cervix uteri. The highest prevalence values were estimated for breast and colorectal cancer with about 12,300 and over 8,200 cases in 2012, respectively. In the 2000s the mortality rates declined for all cancers except skin melanoma and female lung cancer, for which the mortality was almost stable. This paper provides a description of the burden of the major cancers in Abruzzo and Molise until 2015. The increase in cancer survival, added to population aging, will inflate the cancer prevalence. In order to better evaluate the cancer burden in the two regions, it would be important to implement cancer registration.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate necessary initial parameters of FC segmentation. Further, the result of the FC segmentation is utilized to initialize a following PABIC algorithm. Finally, we re-apply the FC technique on the PABIC corrected MRI to get the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as the limitation of the method. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.
An All-Sky Search for Wide Binaries in the SUPERBLINK Proper Motion Catalog
NASA Astrophysics Data System (ADS)
Hartman, Zachary; Lepine, Sebastien
2017-01-01
We present initial results from an all-sky search for Common Proper Motion (CPM) binaries in the SUPERBLINK all-sky proper motion catalog of 2.8 million stars with proper motions greater than 40 mas/yr, which has been recently enhanced with data from the GAIA mission. We initially search the SUPERBLINK catalog for pairs of stars with angular separations up to 1 degree and proper motion difference less than 40 mas/yr. In order to determine which of these pairs are real binaries, we develop a Bayesian analysis to calculate probabilities of true companionship based on a combination of proper motion magnitude, angular separation, and proper motion differences. The analysis reveals that the SUPERBLINK catalog most likely contains ~40,000 genuine common proper motion binaries. We provide initial estimates of the distances and projected physical separations of these wide binaries.
2017-01-01
Emissions from traditional cooking practices in low- and middle-income countries have detrimental health and climate effects; cleaner-burning cookstoves may provide “co-benefits”. Here we assess this potential via in-home measurements of fuel-use and emissions and real-time optical properties of pollutants from traditional and alternative cookstoves in rural Malawi. Alternative cookstove models were distributed by existing initiatives and include a low-cost ceramic model, two forced-draft cookstoves (FDCS; Philips HD4012LS and ACE-1), and three institutional cookstoves. Among household cookstoves, emission factors (EF; g (kg wood)−1) were lowest for the Philips, with statistically significant reductions relative to baseline of 45% and 47% for fine particulate matter (PM2.5) and carbon monoxide (CO), respectively. The Philips was the only cookstove tested that showed significant reductions in elemental carbon (EC) emission rate. Estimated health and climate co-benefits of alternative cookstoves were smaller than predicted from laboratory tests due to the effects of real-world conditions including fuel variability and nonideal operation. For example, estimated daily PM intake and field-measurement-based global warming commitment (GWC) for the Philips FDCS were a factor of 8.6 and 2.8 times higher, respectively, than those based on lab measurements. In-field measurements provide an assessment of alternative cookstoves under real-world conditions and as such likely provide more realistic estimates of their potential health and climate benefits than laboratory tests. PMID:28060518
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-03-01
Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step in producing a good map of a disease. Libya was selected for this work, to examine its geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models to estimate the relative risk for lung cancer, together with population censuses of the study area for the time period 2006 to 2011, were used in this work: the Standardized Morbidity Ratio (SMR), the most popular statistic used in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study begins by providing a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main general results presented in this study show that the Poisson-gamma model, the BYM model, and the Mixture model can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is most robust and provides better relative risk estimates across a range of models.
NASA Technical Reports Server (NTRS)
Gibson, David M.; Spisz, Thomas S.; Taylor, Jeff C.; Zalameda, Joseph N.; Horvath, Thomas J.; Tomek, Deborah M.; Tietjen, Alan B.; Tack, Steve; Bush, Brett C.
2010-01-01
We provide the first geometrically accurate (i.e., 3-D) temperature maps of the entire windward surface of the Space Shuttle during hypersonic reentry. To accomplish this task we began with estimated surface temperatures derived from CFD models at integral high Mach numbers and used them, the Shuttle's surface properties, and reasonable estimates of the sensor-to-target geometry to predict the emitted spectral radiance from the surface (in units of W sr^-1 m^-2 nm^-1). These data were converted to sensor counts using properties of the sensor (e.g., aperture, spectral band, and various efficiencies), the expected background, and the atmospheric transmission to inform the optimal settings for the near-infrared and midwave IR cameras on the Cast Glance aircraft. Once these data were collected, calibrated, edited, registered and co-added, we formed both 2-D maps of the scene in the above units and 3-D maps of the bottom surface in temperature that could be compared with not only the initial inputs but also thermocouple data from the Shuttle itself. The 3-D temperature mapping process was based on the initial radiance modeling process. Here temperatures were guessed for each node in a well-resolved 3-D framework, a radiance model was produced and compared to the processed imagery, and corrections to the temperature were estimated until the iterative process converged. This process did very well in characterizing the temperature structure of the large asymmetric boundary layer transition that covered much of the starboard bottom surface of STS-119 Discovery. Both internally estimated accuracies and differences with CFD models and thermocouple measurements are at most a few percent. The technique did less well characterizing the temperature structure of the turbulent wedge behind the trip due to limitations in understanding the true sensor resolution. (Note: Those less inclined to read the entire paper are encouraged to read an Executive Summary provided at the end.)
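The core radiometric step, converting between surface temperature and emitted spectral radiance, is Planck's law; the round trip below uses a single effective wavelength and unit emissivity, which are simplifying assumptions (the actual analysis integrated over each sensor band and used the Shuttle's surface properties).

    import numpy as np

    H = 6.626e-34    # Planck constant, J s
    C = 2.998e8      # speed of light, m/s
    KB = 1.381e-23   # Boltzmann constant, J/K

    def planck_radiance(lam, T):
        """Blackbody spectral radiance, W sr^-1 m^-2 m^-1, at wavelength
        lam (m) and temperature T (K)."""
        a = 2.0 * H * C**2 / lam**5
        return a / (np.exp(H * C / (lam * KB * T)) - 1.0)

    def brightness_temperature(lam, L):
        """Invert Planck's law for temperature at one effective wavelength."""
        a = 2.0 * H * C**2 / lam**5
        return H * C / (lam * KB * np.log(a / L + 1.0))

    lam = 4.0e-6                            # midwave IR, 4 micrometers
    L = planck_radiance(lam, 1200.0)        # radiance of a 1200 K surface
    print(brightness_temperature(lam, L))   # recovers 1200 K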
Verweij, Karin J.H.; Zietsch, Brendan P.; Lynskey, Michael T.; Medland, Sarah E.; Neale, Michael C.; Martin, Nicholas G.; Boomsma, Dorret I.; Vink, Jacqueline M.
2009-01-01
Background Because cannabis use is associated with social, physical and psychological problems, it is important to know what causes some individuals to initiate cannabis use and a subset of those to become problematic users. Previous twin studies found evidence for both genetic and environmental influences on vulnerability, but due to considerable variation in the results it is difficult to draw clear conclusions regarding the relative magnitude of these influences. Method A systematic literature search identified 28 twin studies on cannabis use initiation and 24 studies on problematic cannabis use. The proportion of total variance accounted for by genes (A), shared environment (C), and unshared environment (E) in (1) initiation of cannabis use and (2) problematic cannabis use was calculated by averaging corresponding A, C, and E estimates across studies from independent cohorts and weighting by sample size. Results For cannabis use initiation, A, C, and E estimates were 48%, 25% and 27% in males and 40%, 39% and 21% in females. For problematic cannabis use A, C, and E estimates were 51%, 20% and 29% for males and 59%, 15% and 26% for females. Confidence intervals of these estimates are considerably narrower than those in the source studies. Conclusions Our results indicate that vulnerability to both cannabis use initiation and problematic use was significantly influenced by A, C, and E. There was a trend for a greater C and lesser A component for cannabis initiation as compared to problematic use for females. PMID:20402985
NASA Astrophysics Data System (ADS)
Dobler, J. T.; Blume, N.; Pernini, T.; Zaccheo, T. S.; Braun, M.
2017-12-01
The Greenhouse Gas Laser Imaging Tomography Experiment (GreenLITE™) was originally developed by Harris and Atmospheric and Environmental Research (AER) under a cooperative agreement with the National Energy Technology Laboratory of the Department of Energy. The system, initially conceived in 2013, used a pair of high-precision intensity modulated continuous wave (IMCW) transceivers and a series of retroreflectors to generate overlapping atmospheric density measurements of carbon dioxide (CO2) for continuous monitoring of ground carbon storage sites. The overlapping measurements provide an estimate of the two-dimensional (2-D) spatial distribution of the gas within the area of interest using sparsely sampled tomography methods. GreenLITE™ is a full end-to-end system that utilizes standard 4G connectivity and an all cloud-based data storage, processing, and dissemination suite to provide autonomous, near-real-time data via a web-based user interface. The system has been demonstrated for measuring and mapping CO2 over areas from approximately 0.04 km2 to 25 km2 (~200 m × 200 m up to 5 km × 5 km), including a year-long demonstration over the city of Paris, France. In late 2016, the GreenLITE™ system was converted by Harris and AER to provide similar measurement capabilities for methane (CH4). Recent experiments have shown that GreenLITE™ CH4 retrieved concentrations agree with a Picarro cavity ring-down spectrometer, calibrated with World Meteorological Organization traceable gas, to within approximately 0.5% of background, or 10-15 parts per billion. The system has been tested with several controlled releases over the past year, including a weeklong experiment at an industrial oil and gas facility. Recent experiments have been exploring the use of a box model-based approach for estimating flux, and the initial results are very promising. We will present a description of the instrument, share some recent methane experimental results, and describe the flux estimation process and results of testing to date.
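The abstract does not spell out the box model, but a common single-box formulation attributes the downwind-minus-upwind concentration enhancement to emissions carried through a control surface by the wind. A crude sketch under that assumption, with all geometry and meteorology values invented for illustration:

    def box_model_flux(c_down_ppb, c_up_ppb, wind_ms, width_m, height_m,
                       t_kelvin=293.0, p_pa=101325.0):
        """Single-box CH4 flux estimate, g/s: concentration enhancement
        times the volumetric air flow through the downwind face."""
        R, M_CH4 = 8.314, 16.04                  # J mol^-1 K^-1, g/mol
        air_mol_m3 = p_pa / (R * t_kelvin)       # ideal-gas molar air density
        enhance_mol = (c_down_ppb - c_up_ppb) * 1e-9 * air_mol_m3
        volume_flow = wind_ms * width_m * height_m   # m^3/s through the face
        return enhance_mol * volume_flow * M_CH4

    # A 15 ppb enhancement over a 200 m x 50 m face in a 3 m/s wind:
    print(f"{box_model_flux(15.0, 0.0, 3.0, 200.0, 50.0):.2f} g CH4 / s")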
Wilson, Justin B; Osterhaus, Matt C; Farris, Karen B; Doucette, William R; Currie, Jay D; Bullock, Tammy; Kumbera, Patty
2005-01-01
To perform a retrospective financial analysis on the implementation of a self-insured company's wellness program from the pharmaceutical care provider's perspective and conduct sensitivity analyses to estimate costs versus revenues for pharmacies without resident pharmacists, program implementation for a second employer, the second year of the program, and a range of pharmacist wages. Cost-benefit and sensitivity analyses. Self-insured employer with headquarters in Canton, N.C. 36 employees at a facility in Clinton, Iowa. Pharmacist-provided cardiovascular wellness program. Costs and revenues collected from pharmacy records, including pharmacy purchasing records, billing records, and pharmacists' time estimates. All costs and revenues were calculated for the development and first year of the intervention program. Costs included initial and follow-up screening supplies, office supplies, screening/group presentation time, service provision time, documentation/preparation time, travel expenses, claims submission time, and administrative fees. Revenues included initial screening revenues, follow-up screening revenues, group session revenues, and Heart Smart program revenues. For the development and first year of Heart Smart, net benefit to the pharmacy (revenues minus costs) amounted to $2,413. All sensitivity analyses showed a net benefit. For pharmacies without a resident pharmacist, the net benefit was $106; for Heart Smart in a second employer, the net benefit was $6,024; for the second year, the projected net benefit was $6,844; factoring in a lower pharmacist salary, the net benefit was $2,905; and for a higher pharmacist salary, the net benefit was $1,265. For the development and first year of Heart Smart, the revenues of the wellness program in a self-insured company outweighed the costs.
Influence of Initial Inclined Surface Crack on Estimated Residual Fatigue Lifetime of Railway Axle
NASA Astrophysics Data System (ADS)
Náhlík, Luboš; Pokorný, Pavel; Ševčík, Martin; Hutař, Pavel
2016-11-01
Railway axles are subjected to cyclic loading which can lead to fatigue failure. For safe operation of railway axles, a damage tolerance approach taking into account a possible defect on the railway axle surface is often required. This contribution deals with the estimation of the residual fatigue lifetime of a railway axle with an initial inclined surface crack. A 3D numerical model of an inclined semi-elliptical surface crack in a railway axle was developed, and its curved propagation through the axle was simulated by the finite element method. The presence of a press-fitted wheel in the vicinity of the initial crack was taken into account. A typical loading spectrum of a railway axle was considered, and the residual fatigue lifetime was estimated by the NASGRO approach. Material properties of the typical axle steel EA4T were used in the numerical calculations and lifetime estimation.
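The NASGRO equation extends the Paris crack-growth relation with threshold and closure terms. As a rough illustration of the cycle-integration idea behind such lifetime estimates (not the study's actual model), the sketch below integrates a plain Paris law; the constants, geometry factor, and stress range are hypothetical rather than EA4T data:

```python
import math

# Rough illustration of residual-lifetime estimation by integrating a
# Paris-type crack-growth law, da/dN = C * (dK)^m, for a surface crack.
# The NASGRO equation used in the study adds threshold and closure terms
# omitted here; C, m, the geometry factor Y, and the stress range are
# hypothetical values, not EA4T data.

C, m = 2.0e-12, 3.0            # Paris constants (a in m, dK in MPa*sqrt(m))
Y = 0.65                       # geometry factor, assumed constant
stress_range = 80.0            # constant-amplitude proxy for the spectrum, MPa

a, a_crit = 0.001, 0.030       # initial and critical crack depths, m
cycles, block = 0, 10_000      # integrate in blocks of 10^4 cycles
while a < a_crit:
    dK = Y * stress_range * math.sqrt(math.pi * a)   # stress intensity range
    a += C * dK**m * block                           # growth over one block
    cycles += block

print(f"estimated residual lifetime: {cycles:.2e} cycles")
```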
Estimating avian population size using Bowden's estimator
Diefenbach, D.R.
2009-01-01
Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N < 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates.
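The core idea behind mark-resight estimators such as Bowden's can be demonstrated with a small simulation: total sightings divided by the mean resighting rate of marked birds estimates abundance even when detection probabilities vary among individuals. Bowden's estimator adds a variance formula and support for non-uniquely identified sightings, which this minimal sketch (with hypothetical numbers) omits:

```python
import random

# Mark-resight abundance sketch. Each of M marked birds (out of N total) is
# detected independently on each survey with its own probability, so detection
# is heterogeneous. The mean resighting rate of marked birds scales the total
# sighting count into an abundance estimate.

random.seed(1)
N, M, surveys = 250, 60, 8                          # true size, marked, surveys
p = [random.uniform(0.2, 0.5) for _ in range(N)]    # heterogeneous detection

marked_sightings = sum(sum(random.random() < p[i] for _ in range(surveys))
                       for i in range(M))
unmarked_sightings = sum(sum(random.random() < p[i] for _ in range(surveys))
                         for i in range(M, N))

mean_resight_rate = marked_sightings / M
N_hat = (marked_sightings + unmarked_sightings) / mean_resight_rate
print(f"True N = {N}, estimated N = {N_hat:.1f}")
```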
Hoffmann, Jessica; Rudra, Souman; Toor, Saqib S; Holm-Nielsen, Jens Bo; Rosendahl, Lasse A
2013-02-01
Initial process studies carried out in Aspen Plus on an integrated thermochemical conversion process are presented herein. In the simulations, a hydrothermal liquefaction (HTL) plant is combined with a biogas plant (BP), such that the digestate from the BP is converted to a biocrude in the HTL process. This biorefinery concept offers a sophisticated and sustainable way of converting organic residuals into a range of high-value biofuel streams in addition to combined heat and power (CHP) production. The primary goal of this study is to provide an initial estimate of the feasibility of such a process. By adding a diesel-quality-fuel output to the process, the product value is increased significantly compared to a conventional BP. An input of 1000 kg h⁻¹ manure delivers approximately 30-38 kg h⁻¹ fuel and 38-61 kg h⁻¹ biogas. The biogas can be used to upgrade the biocrude, to supply the gas grid or for CHP. An estimated 62-84% of the biomass energy can be recovered in the biofuels. Copyright © 2012 Elsevier Ltd. All rights reserved.
A UAV and S2A data-based estimation of the initial biomass of green algae in the South Yellow Sea.
Xu, Fuxiang; Gao, Zhiqiang; Jiang, Xiaopeng; Shang, Weitao; Ning, Jicai; Song, Debin; Ai, Jinquan
2018-03-01
Previous studies have shown that the initial biomass of the green tide comes from the green algae attached to Pyropia aquaculture rafts in the Southern Yellow Sea. In this study, the green algae were identified with an unmanned aerial vehicle (UAV), and a biomass estimation model for green algae in the radial sand ridge area was proposed based on a Sentinel-2A (S2A) image and UAV images. The results showed that the green algae were detected with high accuracy using the normalized green-red difference index (NGRDI); approximately 1340 tons and 700 tons of green algae were attached to rafts and raft ropes, respectively, and the lower biomass might be the main cause of the smaller scale of the green tide in 2017. In addition, UAVs play an important role in monitoring raft-attached green algae, and long-term research on its biomass would provide a scientific basis for the control and forecast of green tides in the Yellow Sea. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chowell, Gerardo; Fuentes, R; Olea, A; Aguilera, X; Nesse, H; Hyman, J M
2013-01-01
We use a stochastic simulation model to explore the effect of reactive intervention strategies during the 2002 dengue outbreak in the small population of Easter Island, Chile. We quantified the effect of interventions on the transmission dynamics and epidemic size as a function of the simulated control intensity levels and the timing of initiation of control interventions. Because no dengue outbreaks had been reported prior to 2002 in Easter Island, the 2002 epidemic provided a unique opportunity to estimate the basic reproduction number R0 during the initial epidemic phase, prior to the start of control interventions. We estimated R0 at 27.2 (95% CI: 14.8, 49.3). We found that the final epidemic size is highly sensitive to the timing of the start of interventions. However, even when the control interventions start several weeks after the epidemic onset, reactive intervention efforts can have a significant impact on the final epidemic size. Our results indicate that the rapid implementation of control interventions can have a significant effect in reducing the epidemic size of dengue epidemics.
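For intuition only: a common way to estimate R0 from the initial epidemic phase is to regress log incidence on time to obtain the early growth rate r and map it to R0 through an assumed transmission model. The sketch below uses the simple SIR relation R0 ≈ 1 + r/γ with hypothetical weekly counts; the vector-borne estimation in the study is more involved:

```python
import math

# Sketch: estimating R0 from the initial exponential growth of case counts.
# For a simple SIR model, R0 ≈ 1 + r/gamma, where r is the early growth rate
# and 1/gamma is the infectious period. Dengue is vector-borne, so the actual
# mapping from r to R0 in the study is more involved; the weekly counts below
# are hypothetical.

weekly_cases = [3, 7, 16, 35, 80, 178]          # early epidemic phase
weeks = range(len(weekly_cases))

# Least-squares slope of log(cases) vs time gives the growth rate r (per week).
n = len(weekly_cases)
xbar = sum(weeks) / n
ybar = sum(math.log(c) for c in weekly_cases) / n
r = (sum((x - xbar) * (math.log(c) - ybar) for x, c in zip(weeks, weekly_cases))
     / sum((x - xbar) ** 2 for x in weeks))

infectious_period_weeks = 1.0                   # assumed 1/gamma
R0 = 1.0 + r * infectious_period_weeks
print(f"growth rate r = {r:.2f}/week, R0 ≈ {R0:.1f}")
```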
NASA Astrophysics Data System (ADS)
Coelho, Flavio Codeço; Carvalho, Luiz Max De
2015-12-01
Quantifying the attack ratio of disease is key to epidemiological inference and public health planning. For multi-serotype pathogens, however, different levels of serotype-specific immunity make it difficult to assess the population at risk. In this paper we propose a Bayesian method for estimation of the attack ratio of an epidemic and the initial fraction of susceptibles using aggregated incidence data. We derive the probability distribution of the effective reproductive number, Rt, and use MCMC to obtain posterior distributions of the parameters of a single-strain SIR transmission model with time-varying force of infection. Our method is showcased in a data set consisting of 18 years of dengue incidence in the city of Rio de Janeiro, Brazil. We demonstrate that it is possible to learn about the initial fraction of susceptibles and the attack ratio even in the absence of serotype specific data. On the other hand, the information provided by this approach is limited, stressing the need for detailed serological surveys to characterise the distribution of serotype-specific immunity in the population.
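The link the method exploits between the attack ratio and the initial fraction of susceptibles can be illustrated with the deterministic SIR final-size relation A = s0(1 − exp(−R0·A)). The paper estimates these quantities with a Bayesian model on incidence data, so the fixed-point sketch below is only a schematic with hypothetical values:

```python
import math

# Schematic of the SIR final-size relation A = s0 * (1 - exp(-R0 * A)),
# linking the attack ratio A to the initial susceptible fraction s0 and R0.
# Solved here by fixed-point iteration; values are hypothetical, and the
# paper's estimation is Bayesian rather than this deterministic relation.

def attack_ratio(R0, s0, tol=1e-12):
    A = s0                                   # start from the upper bound
    while True:
        A_next = s0 * (1.0 - math.exp(-R0 * A))
        if abs(A_next - A) < tol:
            return A_next
        A = A_next

for s0 in (1.0, 0.6, 0.3):
    print(f"s0 = {s0:.1f}: attack ratio = {attack_ratio(2.0, s0):.3f}")
```

With R0 = 2, lowering s0 shrinks the attack ratio sharply, and for R0·s0 < 1 the epidemic cannot take off at all, which is why serotype-specific immunity matters for the population at risk.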
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)
1993-01-01
An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative input product.
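A minimal sketch of the recall loop just described, using orthogonal bipolar stored vectors, a trinary partial input, and thresholding of the inner products; the vector sizes and the keep-the-maximum threshold rule are illustrative choices, not the patented design:

```python
import numpy as np

# Minimal sketch of the trinary recall loop. Stored vectors are bipolar
# (+1/-1); the partial input keeps known elements in bipolar form and sets
# unknown elements to 0, so the inner product with the matching stored vector
# equals the number of known elements. Thresholding suppresses weak inner
# products before the next bipolar estimate is formed.

stored = np.array([[ 1,  1,  1,  1,  1,  1,  1,  1],
                   [ 1, -1,  1, -1,  1, -1,  1, -1],
                   [ 1,  1, -1, -1,  1,  1, -1, -1],
                   [ 1, -1, -1,  1,  1, -1, -1,  1]])  # orthogonal bipolar rows

x = np.zeros(8)
x[:4] = stored[2, :4]        # trinary partial input: 4 known elements, 4 zeros

for _ in range(3):
    ip = stored @ x                          # inner products with stored vectors
    ip = np.where(ip >= ip.max(), ip, 0)     # threshold weak inner products
    x = np.sign(stored.T @ ip)               # next bipolar estimate

print("recovered stored[2]:", np.array_equal(x, stored[2]))
```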
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Battery state-of-charge estimation using approximate least squares
NASA Astrophysics Data System (ADS)
Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.
2015-03-01
In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
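The re-initialization idea can be sketched as follows: over a short window, model the terminal voltage as v = EMF − i·R, solve for EMF and R by ordinary least squares, and invert a monotone open-circuit-voltage curve to recover the SoC. The paper's approximate least-squares scheme is a low-complexity variant of this; the OCV table and samples below are hypothetical:

```python
import numpy as np

# Sketch: re-initializing Coulomb counting from an electromotive-force (EMF)
# estimate. Over a short window, the terminal voltage is modeled as
#   v_k = EMF - i_k * R,
# (EMF, R) is found by least squares, and a hypothetical open-circuit-voltage
# curve is inverted to obtain the SoC.

i = np.array([0.8, 1.5, 0.3, 1.1, 0.6])       # load current samples, A
v = 3.90 - 0.12 * i + np.random.default_rng(3).normal(0, 0.002, 5)  # measured V

A = np.column_stack([np.ones_like(i), -i])    # columns: [1, -i]
(emf, R), *_ = np.linalg.lstsq(A, v, rcond=None)

# Hypothetical monotone OCV(SoC) table for a Li-ion cell.
soc_grid = np.linspace(0.0, 1.0, 11)
ocv_grid = np.array([3.40, 3.55, 3.63, 3.69, 3.74, 3.79,
                     3.84, 3.90, 3.97, 4.06, 4.18])
soc = np.interp(emf, ocv_grid, soc_grid)      # invert the monotone curve
print(f"EMF = {emf:.3f} V, R = {R*1000:.1f} mOhm, SoC ≈ {soc:.2f}")
```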
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time-history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency-function parameters characterizing an unsteady effect. For estimation of unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
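The harmonic-analysis technique reduces to a linear regression: for a forced roll oscillation φ(t) = A·sin(ωt), fit the measured rolling-moment coefficient as a bias plus in-phase and out-of-phase components, which map to the static and damping derivatives. A sketch on synthetic data:

```python
import numpy as np

# Harmonic-analysis sketch: for phi(t) = A*sin(w*t), fit
#   Cl(t) = c0 + c1*sin(w*t) + c2*cos(w*t)
# by least squares. The in-phase component c1 relates to the static derivative
# and the out-of-phase component c2 to the damping derivative. The signal is
# synthetic; real use would take wind-tunnel or CFD time histories.

rng = np.random.default_rng(7)
w, A = 2.0 * np.pi * 1.0, np.deg2rad(5.0)      # 1 Hz oscillation, 5 deg amplitude
t = np.linspace(0.0, 3.0, 600)
Cl = 0.002 - 0.08 * A * np.sin(w * t) - 0.03 * A * np.cos(w * t)
Cl += rng.normal(0.0, 1e-4, t.size)            # measurement noise

X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
c0, c1, c2 = np.linalg.lstsq(X, Cl, rcond=None)[0]
print(f"in-phase  c1/A = {c1/A:+.3f} (true -0.080)")
print(f"out-phase c2/A = {c2/A:+.3f} (true -0.030)")
```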
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a means of regularization. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and motion discontinuities, and produces accurate piecewise-smooth motion fields.
The virtuous tax: lifesaving and crime-prevention effects of the 1991 federal alcohol-tax increase.
Cook, Philip J; Durrance, Christine Piette
2013-01-01
The last time that federal excise taxes on alcoholic beverages were increased was 1991. The changes were larger than the typical state-level changes that have been used to study price effects, but the consequences have not been assessed due to the lack of a control group. Here we develop and implement a novel method for utilizing interstate heterogeneity to estimate the aggregate effects of a federal tax increase on rates of injury fatality and crime. We provide evidence that the relative importance of alcohol in violence and injury rates is directly related to per capita consumption, and build on that finding to generate estimates. A conservative estimate is that the federal tax (which increased alcohol prices by 6% initially) reduced injury deaths by 4.5% (6,480 deaths) in 1991, and had a still larger effect on violent crime. Copyright © 2012 Elsevier B.V. All rights reserved.
Adaptive AOA-aided TOA self-positioning for mobile wireless sensor networks.
Wen, Chih-Yu; Chan, Fu-Kai
2010-01-01
Location-awareness is crucial and becoming increasingly important to many applications in wireless sensor networks. This paper presents a network-based positioning system and outlines recent work in which we have developed an efficient, principled approach to localize a mobile sensor using time of arrival (TOA) and angle of arrival (AOA) information employing multiple seeds in the line-of-sight scenario. By receiving the periodic broadcasts from the seeds, the mobile target sensors can obtain adequate observations and localize themselves automatically. The proposed positioning scheme performs location estimation in three phases: (I) AOA-aided TOA measurement, (II) geometrical positioning with a particle filter, and (III) adaptive fuzzy control. Based on the distance measurements and the initial position estimate, an adaptive fuzzy control scheme is applied to solve the localization adjustment problem. The simulations show that the proposed approach provides adaptive flexibility and robust improvement in position estimation.
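Phase (I) can be sketched directly: each seed's TOA range and AOA bearing yield a stand-alone position fix, and averaging the per-seed fixes provides the initial estimate that the particle filter and fuzzy controller subsequently refine. The geometry and noise levels below are hypothetical:

```python
import math
import random

# Sketch of phase (I): each seed's TOA range d and AOA bearing theta yield a
# direct position fix (x_seed + d*cos(theta), y_seed + d*sin(theta)); averaging
# the per-seed fixes gives the initial estimate that later phases refine.
# Seed positions, the target, and the noise levels are hypothetical.

random.seed(4)
seeds = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = (40.0, 65.0)

fixes = []
for sx, sy in seeds:
    d_true = math.hypot(target[0] - sx, target[1] - sy)
    theta_true = math.atan2(target[1] - sy, target[0] - sx)
    d = d_true + random.gauss(0.0, 1.0)                        # TOA noise, m
    theta = theta_true + random.gauss(0.0, math.radians(2.0))  # AOA noise
    fixes.append((sx + d * math.cos(theta), sy + d * math.sin(theta)))

x = sum(f[0] for f in fixes) / len(fixes)
y = sum(f[1] for f in fixes) / len(fixes)
print(f"initial estimate: ({x:.1f}, {y:.1f}); true target: {target}")
```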
Cascadia Gas Vent Distribution and Challenges to Quantify Margin-Wide Methane Fluxes
NASA Astrophysics Data System (ADS)
Scherwath, M.; Riedel, M.; Roemer, M.; Veloso, M.; Heesemann, M.; Spence, G.
2017-12-01
Gas venting along the Cascadia Margin has been mapped over decades with ship sonar and in recent years with permanent seafloor installations utilizing the seafloor observatories NEPTUNE of Ocean Networks Canada and the Cabled Array of the Ocean Observatories Initiative. We show the distribution of over 1000 vents, most on the shallow shelf. For a third of the vents we have estimated methane flow rates, ranging from 0.05 to 69 L/min, and extrapolate these results to a margin-wide methane flow estimate of around 4 Mt/yr (at surface pressure and temperature) and a flux estimate of 0.05 kg yr⁻¹ m⁻². However, these estimates rest on several assumptions (e.g., bubble sizes and data coverage) that introduce large uncertainties. With continued research expeditions and potential seafloor calibration experiments, these data can be refined and improved in future years.
Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawding, Dan; Hillson, Todd D.
2003-11-15
Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates (and standard errors) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining were conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise, but batch marks and the lack of secondary studies made it difficult to test the Jolly-Seber assumptions necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss, and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of the Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
NASA Astrophysics Data System (ADS)
Berlanga, Juan M.; Harbaugh, John W.
The Tabasco region contains a number of major oilfields, including some of the emerging "giant" oil fields which have received extensive publicity. Fields in the Tabasco region are associated with large geologic structures which are detected readily by seismic surveys. The structures seem to be associated with deep-seated movement of salt, and they are complexly faulted. Some structures have as much as 1000 milliseconds of relief on seismic lines. That part of the Tabasco region that has been studied was surveyed with a close-spaced rectilinear network of seismic lines. A study interpreting the structure of the area initially used only a fraction of the total seismic data available. The purpose was to compare "predictions" of reflection time based on widely spaced seismic lines with "results" obtained along more closely spaced lines. This process of comparison simulates the sequence of events in which a reconnaissance network of seismic lines is used to guide a succession of progressively more closely spaced lines. A square gridwork was established with lines spaced at 10 km intervals, and, using machine-contoured maps, the results were compared with those obtained with seismic grids employing spacings of 5 and 2.5 km, respectively. The comparisons of predictions based on widely spaced lines with observations along closely spaced lines provide information by which an error function can be established. The error at any point can be defined as the difference between the predicted value for that point and the subsequently observed value at that point. Residuals obtained by fitting third-degree polynomial trend surfaces were used for comparison. The root mean square of the error measurement (expressed in seconds or milliseconds of reflection time) was found to increase more or less linearly with distance from the nearest seismic point. Oil-occurrence probabilities were established on the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection-time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e., its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available and which serves as a statistical "training area", and the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.
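As an illustration of the trend-surface step (on synthetic data, not the Tabasco survey), the sketch below fits a third-degree polynomial surface to reflection times by least squares and extracts the residuals on which the probability analysis is based:

```python
import numpy as np

# Trend-surface sketch: fit a third-degree polynomial in (x, y) to reflection
# times by least squares and keep the residuals, whose frequency distribution
# underlies the oil-occurrence probabilities. The grid and the synthetic
# "reflection time" surface below are illustrative.

rng = np.random.default_rng(11)
x, y = np.meshgrid(np.linspace(0, 50, 21), np.linspace(0, 50, 21))
x, y = x.ravel(), y.ravel()
t = 1.8 + 0.004 * x - 0.003 * y + 1e-4 * x * y + rng.normal(0, 0.01, x.size)

# Design matrix with all monomials x^i * y^j for i + j <= 3 (10 terms).
A = np.column_stack([x**i * y**j for i in range(4) for j in range(4 - i)])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
residuals = t - A @ coef
print(f"RMS residual: {residuals.std() * 1000:.1f} ms")
```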
Joiner, Kevin L; Nam, Soohyun; Whittemore, Robin
2017-07-01
The objective was to describe Diabetes Prevention Program (DPP)-based lifestyle interventions delivered via electronic, mobile, and certain types of telehealth (eHealth) and estimate the magnitude of the effect on weight loss. A systematic review was conducted. PubMed and EMBASE were searched for studies published between January 2003 and February 2016 that met inclusion and exclusion criteria. An overall estimate of the effect on mean percentage weight loss across all the interventions was initially conducted. A stratified meta-analysis was also conducted to determine estimates of the effect across the interventions classified according to whether behavioral support by counselors post-baseline was not provided, provided remotely with communication technology, or face-to-face. Twenty-two studies met the inclusion/exclusion criteria, in which 26 interventions were evaluated. Samples were primarily white and college educated. Interventions included Web-based applications, mobile phone applications, text messages, DVDs, interactive voice response telephone calls, telehealth video conferencing, and video on-demand programing. Nine interventions were stand-alone, delivered post-baseline exclusively via eHealth. Seventeen interventions included additional behavioral support provided by counselors post-baseline remotely with communication technology or face-to-face. The estimated overall effect on mean percentage weight loss from baseline to up to 15 months of follow-up across all the interventions was -3.98%. The subtotal estimate across the stand-alone eHealth interventions (-3.34%) was less than the estimate across interventions with behavioral support given by a counselor remotely (-4.31%) and the estimate across interventions with behavioral support given by a counselor in person (-4.65%). There is promising evidence of the efficacy of DPP-based eHealth interventions on weight loss. Further studies are needed, particularly in racially and ethnically diverse populations with limited levels of educational attainment. Future research should also focus on ways to optimize behavioral support. Copyright © 2017 Elsevier Inc. All rights reserved.
Timing of spring wild turkey hunting in relation to nest incubation
Casalena, Mary Jo; Everett, Rex; Vreeland, Wendy C.; Gregg, Ian D.; Diefenbach, Duane R.
2016-01-01
State wildlife agencies are often asked to open spring wild turkey (Meleagris gallopavo; hereafter, turkey) hunting seasons earlier so that hunters can hear more gobbling male turkeys, which increases hunter satisfaction. The timing of the spring turkey hunting season in several states, including Pennsylvania, has been set to open, on average, near the median date of incubation initiation of turkey nests. This is believed to reduce illegal and undesired hen harvest and possibly nest abandonment, while maintaining hunter satisfaction from hearing male turkeys when most hens are incubating eggs. However, Pennsylvania's spring season structure was established in 1968. Given earlier spring phenology, and potentially more variation in spring weather due to climate change, there is concern that the timing of nest incubation for turkeys in Pennsylvania could be changing. Therefore, our objective was to determine whether nest incubation and the opening of spring turkey hunting in Pennsylvania have continued to coincide. We attached satellite transmitters to 254 female turkeys during 2010–2014 and estimated the median incubation initiation date to be 2 May, which was 2 days earlier than the median date during a statewide study in 1953–1963 and 9 days earlier than during a smaller-scale study in south-central Pennsylvania during 2000–2001. However, incubation initiation varied greatly among years and individual hens during all 3 studies. During 4 of 5 years of our study, Pennsylvania's spring season opened 3 to 8 days prior to the median date of incubation initiation. Over the 5 years, the estimated initiation of incubation for first nesting attempts, measured from the earliest date of incubation initiation to the latest, spanned >2 months, and the maximum proportion of hens beginning incubation at any one time differed by several days to >1 week. Consequently, in years of late incubation, a constant season opening date set near the long-term median date of incubation initiation exposes few additional hens to risk, and hunter satisfaction is likely maintained at greater levels than would be seen with a more conservative approach of opening the season later. Long-term and large-scale studies using GPS transmitters that provide precise determination of incubation initiation will be useful for studying environmental influences on the initiation of incubation.
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
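The initialization step can be sketched with a standard biexponential viral-load decay fit: nonlinear least squares supplies the starting values that seed the MCMC. The study fits a full ODE model jointly across decay and rebound phases, so the model and data below are deliberately simplified and synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the initialization step: fit a biexponential viral-load decay
#   V(t) = V0 * (a * exp(-d1 * t) + (1 - a) * exp(-d2 * t))
# by nonlinear least squares to obtain starting values for the MCMC. The
# actual study fits an ODE model jointly to decay and rebound phases; the
# data and parameters here are synthetic.

def viral_load(t, V0, a, d1, d2):
    return V0 * (a * np.exp(-d1 * t) + (1 - a) * np.exp(-d2 * t))

rng = np.random.default_rng(5)
t = np.linspace(0, 28, 30)                     # days after therapy start
true = (1e5, 0.97, 0.5, 0.05)
V = viral_load(t, *true) * rng.lognormal(0.0, 0.1, t.size)

p0 = (V[0], 0.9, 0.3, 0.02)                    # crude initial guess
popt, _ = curve_fit(viral_load, t, V, p0=p0, maxfev=10000)
print("V0, a, d1, d2 =", np.round(popt, 4))
```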
Global Genomic Epidemiology of Salmonella enterica Serovar Typhimurium DT104
Leekitcharoenphon, Pimlapas; Hendriksen, Rene S.; Le Hello, Simon; Weill, François-Xavier; Baggesen, Dorte Lau; Jun, Se-Ran; Lund, Ole; Crook, Derrick W.; Wilson, Daniel J.; Aarestrup, Frank M.
2016-01-01
It has been 30 years since the initial emergence and subsequent rapid global spread of multidrug-resistant Salmonella enterica serovar Typhimurium DT104 (MDR DT104). Nonetheless, its origin and transmission route have never been revealed. We used whole-genome sequencing (WGS) and temporally structured sequence analysis within a Bayesian framework to reconstruct temporal and spatial phylogenetic trees and estimate the rates of mutation and divergence times of 315 S. Typhimurium DT104 isolates sampled from 1969 to 2012 from 21 countries on six continents. DT104 was estimated to have emerged initially as antimicrobial susceptible in ∼1948 (95% credible interval [CI], 1934 to 1962) and later became MDR DT104 in ∼1972 (95% CI, 1972 to 1988) through horizontal transfer of the 13-kb Salmonella genomic island 1 (SGI1) MDR region into susceptible strains already containing SGI1. This was followed by multiple transmission events, initially from central Europe and later between several European countries. An independent transmission to the United States and another to Japan occurred, and from there MDR DT104 was probably transmitted to Taiwan and Canada. An independent acquisition of resistance genes took place in Thailand in ∼1975 (95% CI, 1975 to 1990). In Denmark, WGS analysis provided evidence for transmission of the organism between herds of animals. Interestingly, the demographic history of Danish MDR DT104 provided evidence for the success of the program to eradicate Salmonella from pig herds in Denmark from 1996 to 2000. The results from this study refute several hypotheses on the evolution of DT104 and suggest that WGS may be useful in monitoring emerging clones and devising strategies for prevention of Salmonella infections. PMID:26944846
Guinea: Background and Relations with the United States
2010-03-22
wounded, by stray bullets. After a week of unrest, Conté met with mutiny leaders, and the government agreed to pay salary arrears of $1,100 to each... period. IDA also provides grants to countries at risk of debt distress. The HIPC Initiative is a comprehensive approach to debt reduction for... track. Reaching the HIPC "completion point" would grant Guinea an estimated relief of $2.2 billion and reduce debt service by approximately $100
Electrically heated particulate filter propagation support methods and systems
Gonze, Eugene V [Pinckney, MI; Ament, Frank [Troy, MI
2011-06-07
A control system that controls regeneration of a particulate filter is provided. The system generally includes a regeneration module that controls current to the particulate filter to initiate combustion of particulate matter in the particulate filter. A propagation module estimates a propagation status of the combustion of the particulate matter based on a combustion temperature. A temperature adjustment module controls the combustion temperature by selectively increasing a temperature of exhaust that passes through the particulate filter.
Human papillomavirus vaccination in Auckland: reducing ethnic and socioeconomic inequities.
Poole, Tracey; Goodyear-Smith, Felicity; Petousis-Harris, Helen; Desmond, Natalie; Exeter, Daniel; Pointon, Leah; Jayasinha, Ranmalie
2012-12-17
The New Zealand HPV publicly funded immunisation programme commenced in September 2008. Delivery through a school-based programme was anticipated to result in higher coverage rates and reduced inequalities compared to vaccination delivered through other settings. The programme provided for on-going vaccination of girls in year 8, with an initial catch-up programme through general practices for young women born after 1 January 1990 until the end of 2010. The aim was to assess the uptake of the funded HPV vaccine through school-based vaccination programmes in secondary schools and general practices in 2009, and the factors associated with coverage, by database matching. Retrospective quantitative analysis of secondary anonymised data from the School-Based Vaccination Service and National Immunisation Register databases of female students from secondary schools in the Auckland District Health Board catchment area. Data included student and school demographic and other variables. Binary logistic regression was used to estimate odds ratios and significance for univariables. Multivariable logistic regression estimated the strength of association between individual factors and initiation and completion, adjusted for all other factors. The programme achieved overall coverage of 71.5%, with Pacific girls highest at 88% and Māori at 78%. Girls of higher socioeconomic status were more likely to be vaccinated in general practice. The school-based vaccination service targeted at ethnic sub-populations provided equity for Māori and Pacific students, who achieved high levels of vaccination. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced better the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Lapane, Kate L; Jesdale, Bill M; Dubé, Catherine E; Pimentel, Camilla B; Rajpathak, Swapnil N
2015-08-01
Although sulfonylureas increase the risk of hypoglycemia, which may lead to fall-associated fractures, studies quantifying the association between sulfonylureas and falls and/or fractures are sparse and existing studies have yielded inconsistent results. Our objective was to evaluate the extent to which sulfonylurea use was associated with fractures and falls among nursing home (NH) residents with type 2 diabetes mellitus. We performed a propensity-matched retrospective new-user cohort study of 12,327 Medicare Parts A/B/D-eligible long-stay NH residents. Medicare Part D data provided information on sulfonylurea and biguanide use initiated as monotherapy (n = 5,807 sulfonylurea; n = 6,151 biguanide) after NH entry. Medicare hospitalizations were used to identify hypoglycemic events (ICD-9-CM codes 250.8, 251.1, 251.2) and fall-associated fractures (ICD-9-CM codes 800, 804, 812-817, 820, 823, 824). Minimum Data Set 2.0 (2008-2010) provided information on falls and potential confounders. Cox models conducted on propensity-matched samples provided adjusted hazard ratio (aHR) estimates and 95% confidence intervals (CI). Falls were common (37.4 per 100 person-years). Fractures were not associated with initiation of sulfonylureas. Sulfonylurea initiation was associated with an excess risk of falls among residents with moderate activities-of-daily-living limitations (aHR: 1.13; 95% CI: 1.00-1.26), but not among those with minimal limitations or dependence in activities of daily living. Nursing home residents with moderate limitations in activities of daily living are at increased risk of falls upon initiation of sulfonylureas. Initiating sulfonylurea use in NH residents must be done with caution. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
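The two model stages can be sketched numerically: draw lognormal initial loads across a population, decay each load exponentially during depuration, and report the first time the upper tail of the load distribution falls below a management level. All parameter values below are hypothetical, not the fitted U.K. values:

```python
import numpy as np

# Sketch of the two model stages: (i) a lognormal distribution of initial
# norovirus loads across a shellfish population, and (ii) first-order decay of
# each load during depuration. The minimum depuration time is the first time
# the upper tail of the load distribution falls below a risk-management level.
# All parameter values are hypothetical.

rng = np.random.default_rng(2)
loads = rng.lognormal(mean=5.5, sigma=1.2, size=100_000)  # copies/g at t = 0
decay_rate = 0.02                                         # per hour
limit = 200.0                                             # management level

for t in range(0, 500, 6):                                # check every 6 h
    p95 = np.percentile(loads * np.exp(-decay_rate * t), 95)
    if p95 < limit:
        print(f"minimum depuration time ≈ {t} h (95th pct = {p95:.0f})")
        break
```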
NASA Astrophysics Data System (ADS)
Fukuhara, T.; Kouyama, T.; Kato, S.; Nakamura, R.
2016-12-01
The University International Formation Mission (UNIFORM) in Japan, started in 2011, is an ambitious project specializing in the surveillance of small wildfires to help provide fire information for initial suppression. The final aim of the mission is to construct a constellation of several 50 kg class satellites for frequent and dedicated observation. An uncooled micro-bolometer camera with 640 × 480 pixels, based on commercial products, was newly developed for the first satellite, which was successfully launched on 24 May 2014 and injected into a Sun-synchronous orbit at a local time of 12:00 and an altitude of 628 km. The camera has detected considerable numbers of hotspots, from volcanoes as well as wildfires. Brightness temperatures observed on orbit have been verified and the scale of observed wildfires roughly estimated; the smallest wildfire detected so far has a flame zone of less than 2 × 10³ m². This is one tenth of the initial requirement estimated in the design process; the camera is thus capable of discovering small wildfires and providing beneficial information for fire control with low cost and quick fabrication, which would be of practical utility especially in developing nations. A next-generation camera, already developed for flight, is available for a new wildfire mission with a satellite constellation. With the pixel array increased to 1024 × 768, the spatial resolution becomes fine enough to detect smaller wildfires while the image swath is maintained. This camera could also be applied to future planetary missions to Mars and asteroids. When observing a planetary surface, thermal inertia can be estimated from continuous observation; when observing an atmosphere, cloud-top altitude can be estimated from the horizontal temperature distribution.
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin; ...
2016-04-04
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operating characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
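The hybrid two-step idea can be sketched with a toy forward model: a nearest-neighbor search over a precomputed lookup table supplies the initial guess, and iterative least-squares fitting refines it. The two-parameter model below is a stand-in for the actual two-layered DRS model:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the hybrid two-step estimation: (1) nearest-neighbor search over a
# coarse lookup table of precomputed spectra provides the initial guess;
# (2) iterative least-squares fitting refines it. The two-parameter forward
# model below is a toy stand-in for the two-layered DRS model.

wl = np.linspace(450.0, 650.0, 50)                 # wavelengths, nm

def forward(p):
    a, b = p                                       # scattering and absorber scales
    scat = a * (wl / 550.0) ** -1.3                # smooth power-law background
    band = b * np.exp(-0.5 * ((wl - 540.0) / 20.0) ** 2)  # absorption band
    return np.exp(-(scat + band))                  # toy "reflectance"

# Step 0: precompute the lookup table on a coarse parameter grid.
grid = [(a, b) for a in np.linspace(0.2, 2.0, 10)
        for b in np.linspace(0.5, 3.0, 11)]
table = np.array([forward(p) for p in grid])

meas = forward((0.8, 1.5)) + np.random.default_rng(9).normal(0.0, 0.002, wl.size)

# Step 1: initial estimation by nearest neighbor in the table.
p0 = grid[int(np.argmin(((table - meas) ** 2).sum(axis=1)))]

# Step 2: iterative fitting started from the table-based guess.
fit = least_squares(lambda p: forward(p) - meas, p0)
print("initial guess:", p0, "refined:", np.round(fit.x, 3))
```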
Infiltration and runoff generation processes in fire-affected soils
Moody, John A.; Ebel, Brian A.
2014-01-01
Post-wildfire runoff was investigated by combining field measurements and modelling of infiltration into fire-affected soils to predict time-to-start of runoff and peak runoff rate at the plot scale (1 m²). Time series of soil-water content, rainfall and runoff were measured on a hillslope burned by the 2010 Fourmile Canyon Fire west of Boulder, Colorado during cyclonic and convective rainstorms in the spring and summer of 2011. Some of the field measurements and measured soil physical properties were used to calibrate a one-dimensional post-wildfire numerical model, which was then used as a 'virtual instrument' to provide estimates of the saturated hydraulic conductivity and high-resolution (1 mm) estimates of the soil-water profile and water fluxes within the unsaturated zone. Field and model estimates of the wetting-front depth indicated that post-wildfire infiltration was on average confined to shallow depths less than 30 mm. Model estimates of the effective saturated hydraulic conductivity, Ks, near the soil surface ranged from 0.1 to 5.2 mm h−1. Because of the relatively small values of Ks, the time-to-start of runoff (measured from the start of rainfall), tp, was found to depend only on the initial soil-water saturation deficit (predicted by the model) and a measured characteristic of the rainfall profile (referred to as the average rainfall acceleration, equal to the initial rate of change in rainfall intensity). An analytical model was developed from the combined results and explained 92–97% of the variance of tp, and the numerical infiltration model explained 74–91% of the variance of the peak runoff rates. These results are from one burned site, but they strongly suggest that tp in fire-affected soils (which often have low values of Ks) is probably controlled more by the storm profile and the initial soil-water saturation deficit than by soil hydraulic properties.
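A back-of-envelope reconstruction of why tp can depend only on the saturation deficit and the rainfall profile (an assumed simplification, not the paper's exact analytical model): if intensity ramps linearly at rate a, the cumulative rainfall a·t²/2 must first fill the shallow storage deficit D, giving tp ≈ √(2D/a):

```python
import math

# Assumed simplification, not the published analytical model: with rainfall
# intensity ramping linearly, i(t) = a*t (a = "average rainfall acceleration"),
# cumulative rainfall is a*t^2/2. With very low Ks, almost all early rain goes
# into the shallow storage deficit D, so runoff starts roughly when D is filled:
#   t_p ≈ sqrt(2*D/a).

def time_to_runoff(deficit_mm, accel_mm_per_h2):
    return math.sqrt(2.0 * deficit_mm / accel_mm_per_h2)

# Example: a 4 mm deficit and intensity rising by 30 mm/h per hour.
print(f"t_p ≈ {time_to_runoff(4.0, 30.0) * 60.0:.0f} minutes")
```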
Estimation of teleported and gained parameters in a non-inertial frame
NASA Astrophysics Data System (ADS)
Metwally, N.
2017-04-01
Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of whom is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the parameters gained during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the single-mode approximation used (within/beyond), the type of encoded information (classical/quantum) in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.
Previsic, Mirko; Karthikeyan, Anantha; Lewis, Tony; McCarthy, John
2017-07-26
Capex numbers are in $/kW, Opex numbers in $/kW-yr. Cost estimates provided herein are based on concept design and basic engineering data and have high levels of uncertainty embedded. This reference economic scenario was done for a very large device version of the OE Buoy technology, which is not presently on Ocean Energy's technology development pathway but will be considered in future business plan development. The DOE reference site condition is considered a low power-density site compared with many of the planned initial deployment locations for the OE Buoy. Many of the sites considered for the initial commercial deployment of the OE Buoy feature much higher wave power densities and shorter period waves. Both of these characteristics will improve the OE Buoy's commercial viability.
NASA Technical Reports Server (NTRS)
Bell, Jerome A.; Stephens, Elaine; Barton, Gregg
1991-01-01
An overview is provided of the Space Exploration Initiative (SEI) concepts for telecommunications, information systems, and navigation (TISN), and engineering and architecture issues are discussed. The SEI program data system is reviewed to identify mission TISN interfaces, and reference TISN concepts are described for nominal, degraded, and mission-critical data services. The infrastructures reviewed include telecommunications for robotics support, autonomous navigation without earth-based support, and information networks for tracking and data acquisition. Four options for TISN support architectures are examined which relate to unique SEI exploration strategies. Detailed support estimates are given for: (1) a manned stay on Mars; (2) permanent lunar and Martian settlements; (3) short-duration missions; and (4) systematic exploration of the moon and Mars.
Cota-Ruiz, Juan; Rosiles, Jose-Gerardo; Sifuentes, Ernesto; Rivas-Perea, Pablo
2012-01-01
This research presents a distributed and formula-based bilateration algorithm that can be used to provide an initial set of node locations. In this scheme, each node uses distance estimates to anchors to solve a set of circle-circle intersection (CCI) problems through a purely geometric formulation. The resulting CCI points are processed to pick those that cluster together, which are then averaged to produce an initial node location. The algorithm is compared in terms of accuracy and computational complexity with a least-squares localization algorithm based on the Levenberg-Marquardt methodology. Results on accuracy versus computational performance show that the bilateration algorithm is competitive with well-known optimized localization algorithms.
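A minimal sketch of the CCI-based bilateration idea under stated assumptions (2-D anchors, and a crude median-distance filter standing in for the paper's clustering rule):

```python
import numpy as np

def circle_circle_intersection(c1, r1, c2, r2):
    """Return the 0-2 intersection points of two circles."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                       # no real intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d  # unit perpendicular
    return [mid + h * perp, mid - h * perp]

def bilaterate(anchors, dists):
    """Solve every anchor pair, keep the half of the CCI points nearest the
    median position (a stand-in clustering step), and average them."""
    pts = []
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            pts += circle_circle_intersection(anchors[i], dists[i],
                                              anchors[j], dists[j])
    pts = np.array(pts)
    r = np.linalg.norm(pts - np.median(pts, axis=0), axis=1)
    return pts[r <= np.median(r)].mean(axis=0)
```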
NASA Astrophysics Data System (ADS)
Wada, Y.; Flörke, M.; Hanasaki, N.; Eisner, S.; Fischer, G.; Tramberend, S.; Satoh, Y.; van Vliet, M. T. H.; Yillia, P.; Ringler, C.; Burek, P.; Wiberg, D.
2016-01-01
To sustain growing food demand and increasing standard of living, global water use increased by nearly 6 times during the last 100 years, and continues to grow. As water demands get closer and closer to the water availability in many regions, each drop of water becomes increasingly valuable and water must be managed more efficiently and intensively. However, soaring water use worsens water scarcity conditions already prevalent in semi-arid and arid regions, increasing uncertainty for sustainable food production and economic development. Planning for future development and investments requires that we prepare water projections for the future. However, estimations are complicated because the future of the world's waters will be influenced by a combination of environmental, social, economic, and political factors, and there is only limited knowledge and data available about freshwater resources and how they are being used. The Water Futures and Solutions (WFaS) initiative coordinates its work with other ongoing scenario efforts for the sake of establishing a consistent set of new global water scenarios based on the shared socio-economic pathways (SSPs) and the representative concentration pathways (RCPs). The WFaS "fast-track" assessment uses three global water models, namely H08, PCR-GLOBWB, and WaterGAP. This study assesses the state of the art for estimating and projecting water use regionally and globally in a consistent manner. It provides an overview of different approaches, the uncertainty, strengths and weaknesses of the various estimation methods, types of management and policy decisions for which the current estimation methods are useful. We also discuss additional information most needed to be able to improve water use estimates and be able to assess a greater range of management options across the water-energy-climate nexus.
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different initial rough imputation methods.
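A minimal sketch of DTW-based neighbor selection for imputation; this is a simplified single-pass variant (the two-pass algorithm first makes a rough imputation, then repeats with refined candidates), with illustrative names:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between two profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def impute_profile(target, complete_profiles, k=5):
    """Fill NaNs in `target` from the k DTW-nearest complete profiles,
    comparing only on the observed time points."""
    obs = ~np.isnan(target)
    cands = np.asarray(complete_profiles, float)
    order = np.argsort([dtw_distance(target[obs], c[obs]) for c in cands])
    filled = target.copy()
    filled[~obs] = cands[order[:k]][:, ~obs].mean(axis=0)
    return filled
```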
NASA Astrophysics Data System (ADS)
Newman, Andrew B.; Smith, Russell J.; Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter
2017-08-01
We present new observations of the three nearest early-type galaxy (ETG) strong lenses discovered in the SINFONI Nearby Elliptical Lens Locator Survey (SNELLS). Based on their lensing masses, these ETGs were inferred to have a stellar initial mass function (IMF) consistent with that of the Milky Way, not the bottom-heavy IMF that has been reported as typical for high-σ ETGs based on lensing, dynamical, and stellar population synthesis techniques. We use these unique systems to test the consistency of IMF estimates derived from different methods. We first estimate the stellar M*/L using lensing and stellar dynamics. We then fit high-quality optical spectra of the lenses using an updated version of the stellar population synthesis models developed by Conroy & van Dokkum. When examined individually, we find good agreement among these methods for one galaxy. The other two galaxies show 2-3σ tension with lensing estimates, depending on the dark matter contribution, when considering IMFs that extend to 0.08 M⊙. Allowing a variable low-mass cutoff or a nonparametric form of the IMF reduces the tension among the IMF estimates to <2σ. There is moderate evidence for a reduced number of low-mass stars in the SNELLS spectra, but no such evidence in a composite spectrum of matched-σ ETGs drawn from the SDSS. Such variation in the form of the IMF at low stellar masses (m ≲ 0.3 M⊙), if present, could reconcile lensing/dynamical and spectroscopic IMF estimates for the SNELLS lenses and account for their lighter M*/L relative to the mean matched-σ ETG. We provide the spectra used in this study to facilitate future comparisons.
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates, with convergence times of as little as 400 sec.
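A minimal sketch of the rate-correction state model, assuming a standard first-order Gauss-Markov (exponentially correlated) process; the time constant and noise level are illustrative:

```python
import numpy as np

def propagate_rate_correction(c, dt, tau, sigma, rng=None):
    """Discrete first-order Gauss-Markov step: c_{k+1} = phi*c_k + w_k with
    phi = exp(-dt/tau); the driving noise keeps the stationary std at sigma.
    A short time constant tau lets the filter absorb unmodeled torques."""
    rng = rng or np.random.default_rng()
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)
    return phi * np.asarray(c) + q * rng.standard_normal(np.shape(c))
```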
NASA Astrophysics Data System (ADS)
Arason, P.; Barsotti, S.; De'Michieli Vitturi, M.; Jónsson, S.; Arngrímsson, H.; Bergsson, B.; Pfeffer, M. A.; Petersen, G. N.; Bjornsson, H.
2016-12-01
Plume height and mass eruption rate are the principal scale parameters of explosive volcanic eruptions. Weather radars are important instruments for estimating plume height, owing to their independence of daylight, weather, and visibility. The Icelandic Meteorological Office (IMO) operates two fixed-position C-band weather radars and two mobile X-band radars. All volcanoes in Iceland can be monitored by IMO's radar network, and during the initial phases of an eruption all available radars will be set to a more detailed volcano scan. When the radar volume data are retrieved at IMO headquarters in Reykjavík, an automatic analysis is performed on the radar data above the proximity of the volcano. The plume height is automatically estimated taking into account the radar scanning strategy, beam width, and a likely reflectivity gradient at the plume top. This analysis provides a distribution of the likely plume height. The automatically determined plume height estimates from the radar data are used as input to a numerical suite that calculates the eruptive source parameters through an inversion algorithm. This is done using the coupled system DAKOTA-PlumeMoM, which solves the 1D plume model equations iteratively by varying the input values of vent radius and vertical velocity. The model accounts for the effect of wind on the plume dynamics, using atmospheric vertical profiles extracted from the ECMWF numerical weather prediction model. Finally, the resulting estimates of mass eruption rate are used to initialize the dispersal model VOL-CALPUFF to assess hazard due to tephra fallout, and are communicated to the London VAAC to support their modelling activity for aviation safety purposes.
NASA Astrophysics Data System (ADS)
Wada, Y.; Flörke, M.; Hanasaki, N.; Eisner, S.; Fischer, G.; Tramberend, S.; Satoh, Y.; van Vliet, M. T. H.; Yillia, P.; Ringler, C.; Wiberg, D.
2015-08-01
To sustain growing food demand and increasing standard of living, global water use increased by nearly 6 times during the last 100 years and continues to grow. As water demands get closer and closer to the water availability in many regions, each drop of water becomes increasingly valuable and water must be managed more efficiently and intensively. However, soaring water use worsens water scarcity conditions already prevalent in semi-arid and arid regions, increasing uncertainty for sustainable food production and economic development. Planning for future development and investments requires that we prepare water projections for the future. However, estimations are complicated because the future of the world's waters will be influenced by a combination of environmental, social, economic, and political factors, and there is only limited knowledge and data available about freshwater resources and how they are being used. The Water Futures and Solutions initiative (WFaS) coordinates its work with other ongoing scenario efforts for the sake of establishing a consistent set of new global water scenarios based on the Shared Socioeconomic Pathways (SSPs) and the Representative Concentration Pathways (RCPs). The WFaS "fast-track" assessment uses three global water models, namely H08, PCR-GLOBWB, and WaterGAP. This study assesses the state of the art for estimating and projecting water use regionally and globally in a consistent manner. It provides an overview of different approaches, the uncertainty, strengths and weaknesses of the various estimation methods, types of management and policy decisions for which the current estimation methods are useful. We also discuss additional information most needed to be able to improve water use estimates and be able to assess a greater range of management options across the water-energy-climate nexus.
Hughes, Joshua D; Bond, Kamila M; Mekary, Rania A; Dewan, Michael C; Rattani, Abbas; Baticulon, Ronnie; Kato, Yoko; Azevedo-Filho, Hildo; Morcos, Jacques J; Park, Kee B
2018-04-09
There is increasing acknowledgement that surgical care is important in global health initiatives. In particular, neurosurgical care is as limited as 1 per 10 million people in parts of the world. We performed a systematic literature review to examine the worldwide incidence of central nervous system vascular lesions and a meta-analysis of aneurysmal subarachnoid hemorrhage (aSAH) to define the disease burden and inform neurosurgical global health efforts. A systematic review and meta-analysis were conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines to estimate the global epidemiology of central nervous system vascular lesions, including unruptured and ruptured aneurysms, arteriovenous malformations, cavernous malformations, dural arteriovenous fistulas, developmental venous anomalies, and vein of Galen malformations. Results were organized by World Health Organization regions. After literature review, because of a lack of data from particular World Health Organization regions, we determined we could only provide an estimate of aSAH. Using data from studies with aSAH and 12 high-quality stroke studies from regions lacking data, we meta-analyzed the yearly crude incidence of aSAH per 100,000 persons. Estimates were generated via random-effects models. From an initial yield of 1492 studies, 46 manuscripts on aSAH incidence were included. The final meta-analysis included 58 studies from 31 different countries. We estimated the global crude incidence for aSAH to be 6.67 per 100,000 persons with a wide variation across WHO regions from 0.71 to 12.38 per 100,000 persons. Worldwide, almost 500,000 individuals will suffer from aSAH each year, with almost two-thirds in low- and middle-income countries.
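A minimal sketch of random-effects pooling of per-study incidence estimates, assuming the standard DerSimonian-Laird estimator (the abstract does not spell out the exact model used):

```python
import numpy as np

def random_effects_pool(estimates, variances):
    """DerSimonian-Laird: estimate between-study variance tau^2 from the
    heterogeneity statistic Q, then re-weight each study by 1/(v_i + tau^2)."""
    y, v = np.asarray(estimates, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))  # pooled estimate and its SE
```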
Yackel Adams, A.A.; Skagen, S.K.; Savidge, J.A.
2007-01-01
Many North American prairie bird populations have recently declined, and the causes of these declines remain largely unknown. To determine whether population limitation occurs during breeding, we evaluated the stability of a population of prairie birds using population-specific values for fecundity and postfledging survival. During 2001-2003, we radiomarked 67 female Lark Buntings (Calamospiza melanocorys) to determine annual fecundity and evaluate contributing factors such as nest survival and breeding response (number of breeding attempts and dispersal). Collectively, 67 females built 112 nests (1.67 ± 0.07 nests female⁻¹ season⁻¹; range: 1-3); 34 were second nests and 11 were third nests. Daily nest survival estimates were similar for initial and later nests, with overall nest survival (DSR¹⁹) of 30.7% and 31.7%, respectively. Nest predation was the most common cause of failure (92%). Capture and radiomarking of females did not affect nest survival. Lark Bunting dispersal probabilities increased among females that fledged young from initial nests and females that lost their original nests late in the season. Conservative and liberal estimates of mean annual fecundity were 0.96 ± 0.11 and 1.24 ± 0.09 female offspring per female, respectively. Given the fecundity and juvenile-survival estimates for this population, annual adult survival values of 71-77% are necessary to achieve a stable population. Because adult survival of prairie passerines ranges between 55% and 65%, this study area may not be capable of sustaining a stable population in the absence of immigration. We contrast our population assessment with one that assumes indirect values of fecundity and juvenile survival. To elucidate limiting factors, estimation of population-specific demographic parameters is desirable. We present an approach for selecting species and areas for evaluation of population stability.
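A minimal sketch of the stability arithmetic behind the 71-77% figure, assuming a simple female-based scalar model lambda = S_adult + F * S_juv; the juvenile survival value below is illustrative, not the paper's estimate:

```python
def required_adult_survival(fecundity, juv_survival):
    # stability (lambda = 1) requires S_adult = 1 - F * S_juv
    return 1.0 - fecundity * juv_survival

for F in (0.96, 1.24):                        # conservative, liberal fecundity
    print(F, required_adult_survival(F, 0.25))  # S_juv = 0.25 is illustrative
```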
Multiple scene attitude estimator performance for LANDSAT-1
NASA Technical Reports Server (NTRS)
Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.
1979-01-01
Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene (a scene with no usable geodetic control points, GCPs) can be rectified to higher accuracies than otherwise possible, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene but not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be attained for the GCP-poor scenes. Operational factors for various scenarios are also discussed.
A Design Study of Onboard Navigation and Guidance During Aerocapture at Mars. M.S. Thesis
NASA Technical Reports Server (NTRS)
Fuhry, Douglas Paul
1988-01-01
The navigation and guidance of a high lift-to-drag ratio sample return vehicle during aerocapture at Mars are investigated. Emphasis is placed on integrated systems design, with guidance algorithm synthesis and analysis based on vehicle state and atmospheric density uncertainty estimates provided by the navigation system. The latter utilizes a Kalman filter for state vector estimation, with useful update information obtained through radar altimeter measurements and density altitude measurements based on IMU-measured drag acceleration. A three-phase guidance algorithm, featuring constant bank numeric predictor/corrector atmospheric capture and exit phases and an extended constant altitude cruise phase, is developed to provide controlled capture and depletion of orbital energy, orbital plane control, and exit apoapsis control. Integrated navigation and guidance systems performance are analyzed using a four degree-of-freedom computer simulation. The simulation environment includes an atmospheric density model with spatially correlated perturbations to provide realistic variations over the vehicle trajectory. Navigation filter initial conditions for the analysis are based on planetary approach optical navigation results. Results from a selection of test cases are presented to give insight into systems performance.
Evaluating Sleep Disturbance: A Review of Methods
NASA Technical Reports Server (NTRS)
Smith, Roy M.; Oyung, R.; Gregory, K.; Miller, D.; Rosekind, M.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
There are three general approaches to evaluating sleep disturbance with regard to noise: subjective, behavioral, and physiological. Subjective methods range from standardized questionnaires and scales to self-report measures designed for specific research questions. There are two behavioral methods that provide useful sleep disturbance data. One behavioral method is actigraphy, a motion detector that provides an empirical estimate of sleep quantity and quality. An actigraph, worn on the non-dominant wrist, provides a 24-hr estimate of the rest/activity cycle. The other method involves a behavioral response, either to a specific probe or stimulus or subject-initiated (e.g., indicating wakefulness). The classic gold standard for evaluating sleep disturbance is continuous physiological monitoring of brain, eye, and muscle activity. This allows detailed distinctions of the states and stages of sleep, awakenings, and sleep continuity. Physiological data can be obtained in controlled laboratory settings and in natural environments. Current ambulatory physiological recording equipment allows evaluation in home and work settings. These approaches will be described and the relative strengths and limitations of each method will be discussed.
Dall, Timothy M; Zhang, Yiduo; Chen, Yaozhu J; Wagner, Rachel C Askarinam; Hogan, Paul F; Fagan, Nancy K; Olaiya, Samuel T; Tornberg, David N
2007-01-01
To estimate medical and indirect costs to the Department of Defense (DoD) that are associated with tobacco use, being overweight or obese, and high alcohol consumption. Retrospective, quantitative research. Healthcare provided in military treatment facilities and by providers participating in the military health system. The 4.3 million beneficiaries under age 65 years who were enrolled in the military TRICARE Prime health plan option in 2006. The findings come from a cost-of-disease model developed by combining information from DoD and civilian health surveys and studies; DoD healthcare encounter data for 4.1 million beneficiaries; and epidemiology literature on the increased risk of comorbidities from unhealthy behaviors. DoD spends an estimated $2.1 billion per year for medical care associated with tobacco use ($564 million), excess weight and obesity ($1.1 billion), and high alcohol consumption ($425 million). DoD incurs nonmedical costs related to tobacco use, excess weight and obesity, and high alcohol consumption in excess of $965 million per year. Unhealthy lifestyles are significant contributors to the cost of providing healthcare services to the nation's military personnel, military retirees, and their dependents. The continued rise in healthcare costs could impact other DoD programs and could potentially affect areas related to military capability and readiness. In 2006, DoD initiated Healthy Choices for Life initiatives to address the high cost of unhealthy lifestyles and behaviors, and the DoD continues to monitor lifestyle trends through the DoD Lifestyle Assessment Program.
Predictability Experiments With the Navy Operational Global Atmospheric Prediction System
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.
2003-12-01
There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
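A minimal sketch of norm-weighted singular vectors, assuming access to the tangent-linear propagator as an explicit matrix M, an analysis error covariance A defining the initial norm, and an energy metric E as the final norm; operational systems work matrix-free with Lanczos iterations instead:

```python
import numpy as np

def initial_time_svs(M, A, E, k=3):
    """Maximize ||Mx||_E / ||x||_{A^{-1}}. With x = LA z (A = LA LA^T) and
    E = LE LE^T, this reduces to an ordinary SVD of LE^T M LA; the singular
    values are growth rates and initial-time SVs map back as x = LA v."""
    LA = np.linalg.cholesky(A)
    LE = np.linalg.cholesky(E)
    U, s, Vt = np.linalg.svd(LE.T @ M @ LA)
    return (LA @ Vt.T)[:, :k], s[:k]
```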
Existential Risk and Cost-Effective Biosecurity
Snyder-Beattie, Andrew
2017-01-01
In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios. PMID:28806130
Peak flood estimation using gene expression programming
NASA Astrophysics Data System (ADS)
Zorn, Conrad R.; Shamseldin, Asaad Y.
2015-12-01
As a case study for the Auckland Region of New Zealand, this paper investigates the potential use of gene-expression programming (GEP) in predicting specific return period events in comparison to the established and widely used Regional Flood Estimation (RFE) method. Initially calibrated to 14 gauged sites, the GEP-derived model was further validated against 10- and 100-year flood events with relative errors of 29% and 18%, respectively. This compares with errors of 48% and 44% for the RFE method on the same flood events. While the effectiveness of GEP in predicting specific return period events is made apparent, it is argued that the derived equations should be used in conjunction with existing methodologies rather than as a replacement.
FISM 2.0: Improved Spectral Range, Resolution, and Accuracy
NASA Technical Reports Server (NTRS)
Chamberlin, Phillip C.
2012-01-01
The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the Space Weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60-second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar cycle and solar rotational variations over months and decades back to 1947. This model has been highly successful in providing driving inputs to study the effect of solar irradiance variations on the Earth's ionosphere and thermosphere, lunar dust charging, as well as the Martian ionosphere. The second version of FISM, FISM2, is currently being developed based on the more accurate SDO/EVE data, which will provide much more accurate estimations in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1-300 nm will also be available at 0.1 nm resolution, allowing FISM2 to move to similarly fine 0.1 nm spectral bins. FISM2 will also have a TSI component that will estimate the total radiated energy during flares based on the few TSI flares observed to date. Presented here will be initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome in order for FISM2 to accurately model the solar variations on time scales of seconds to decades.
NASA Technical Reports Server (NTRS)
Engelland, Shawn A.; Capps, Alan
2011-01-01
Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.
Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.
Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki
2017-12-09
Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
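A minimal sketch of the disparity-to-transmission step and the final restoration, assuming the standard atmospheric scattering model I = J*t + A*(1 - t); the scattering coefficient and camera constants are illustrative:

```python
import numpy as np

def transmission_from_disparity(disparity, beta=0.05, bf=1.0, t_min=0.1):
    """Stereo depth ~ baseline*focal/disparity; Beer-Lambert attenuation
    then gives the transmission t = exp(-beta * depth)."""
    depth = bf / np.maximum(disparity, 1e-6)
    return np.clip(np.exp(-beta * depth), t_min, 1.0)

def defog(image, transmission, airlight):
    """Invert I = J*t + A*(1-t) for the scene radiance J."""
    t = transmission[..., None]          # broadcast over color channels
    return (image - airlight) / t + airlight
```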
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization.
Aryal, Umesh Raj; Petzold, Max; Krettek, Alexandra
2013-03-02
The perceived risks and benefits of smoking may play an important role in determining adolescents' susceptibility to initiating smoking. Our study examined the perceived risks and benefits of smoking among adolescents who demonstrated susceptibility or nonsusceptibility to smoking initiation. In October-November 2011, we conducted a population-based cross-sectional study in Jhaukhel and Duwakot Villages in Nepal. Located in the mid-hills of Bhaktapur District, 13 kilometers east of Kathmandu, Jhaukhel and Duwakot represent the prototypical urbanizing villages that surround Nepal's major urban centers, where young people have easy access to tobacco products and are influenced by advertising. Jhaukhel and Duwakot had a total population of 13,669, of which 15% were smokers. Trained enumerators used a semi-structured questionnaire to interview 352 randomly selected 14- to 16-year-old adolescents. The enumerators asked the adolescents to estimate their likelihood (0%-100%) of experiencing various smoking-related risks and benefits in a hypothetical scenario. Principal component analysis extracted four perceived risk and benefit components, excluding addiction risk: (i) physical risk I (lung cancer, heart disease, wrinkles, bad colds); (ii) physical risk II (bad cough, bad breath, trouble breathing); (iii) social risk (getting into trouble, smelling like an ashtray); and (iv) social benefit (looking cool, feeling relaxed, becoming popular, and feeling grown-up). The adjusted odds ratio of susceptibility increased 1.20-fold with each increased quartile in perception of physical risk I. Susceptibility to smoking was 0.27- and 0.90-fold less among adolescents who provided the highest estimates of physical risk II and social risk, respectively. Similarly, susceptibility was 2.16-fold greater among adolescents who provided the highest estimates of addiction risk. Physical risk I, addiction risk, and the social benefits of cigarette smoking were positively related, and physical risk II and social risk negatively related, to susceptibility to smoking. To discourage or prevent adolescents from initiating smoking, future intervention programs should focus on communicating not only the health risks but also the social and addiction risks, as well as counteracting the social benefits of smoking.
Initial Results in Global Flood Monitoring System (GFMS) Using GPM Data
NASA Astrophysics Data System (ADS)
Wu, H.; Adler, R. F.; Kirschbaum, D.; Huffman, G. J.; Tian, Y.
2016-12-01
The Global Flood Monitoring System (GFMS) (http://flood.umd.edu) has been developed and used to provide real-time flood detection and streamflow estimates over the last few years, with significant success shown by validation against global flood event data sets and observed streamflow variations. It has become a tool for various national and international organizations to appraise flood conditions in various areas, including where rainfall and hydrology information is limited. The GFMS has been using the TRMM Multi-satellite Precipitation Analysis (TMPA) as its main rainfall input. Now, with the advent of NASA's Global Precipitation Measurement (GPM) mission, there is an opportunity to significantly improve global flood monitoring and forecasting. GPM's Integrated Multi-satellitE Retrievals for GPM (IMERG) multi-satellite product is designed to take advantage of various technical advances in the field and combine that with an efficient processing system producing "early" (4 hrs) and "late" (12 hrs) products for operational use. The products are also more uniform across the various satellites going into the analysis than TMPA, and are available at finer time and space resolutions. On the road to replacing TMPA with IMERG in the operational version of the GFMS, parallel systems were run for periods to understand the impact of the new type of data on the streamflow and flood estimates. Results of this comparison are the basis for this presentation. It is expected that an improvement will be noted both in the accuracy of the precipitation estimates and in a smoother transition in and out of heavy rain events, helping to reduce "shock" in the hydrology model. The finer spatial resolution should also help in this regard. The GFMS will initially be run at its primary resolution of 1/8th degree latitude/longitude with both data sets to isolate the impact of the rain information change. Other aspects will also be examined, including higher-latitude events, where GPM precipitation algorithms should also provide improvements. This initial work will help focus full implementation of IMERG into the GFMS and the retrospective calculations to be done for the full TRMM/GPM era.
Auditing of suppliers as the requirement of quality management systems in construction
NASA Astrophysics Data System (ADS)
Harasymiuk, Jolanta; Barski, Janusz
2017-07-01
The choice of a supplier of construction materials can be an important factor in increasing or reducing the cost of building works. Construction materials represent from 40 to 70% of the cost of an investment task, depending on the kind of works to be carried out. Suppliers must therefore be evaluated both from the point of view of the effectiveness of the construction undertaking and from the point of view of the conformity of the actions taken by the executors of construction jobs and objects within the quality management systems being implemented in their organizations. The evaluation of suppliers of construction materials and subcontractors of specialist works is a formal requirement in quality management systems that meet the requirements of the ISO 9001 standard. The aim of this paper is to show the possibilities of making use of an audit to evaluate the credibility and reliability of a supplier of construction materials. The article describes the kinds of audits carried out in quality management systems, with particular attention to second-party audits. It characterizes the evaluation criteria for qualitative capability and the method of choosing a supplier of construction materials. The paper also proposes exemplary questions to be evaluated in the audit process, the way of conducting this evaluation, and the conditions attached to it.
Vaccine approaches to malaria control and elimination: Insights from mathematical models.
White, Michael T; Verity, Robert; Churcher, Thomas S; Ghani, Azra C
2015-12-22
A licensed malaria vaccine would provide a valuable new tool for malaria control and elimination efforts. Several candidate vaccines targeting different stages of the malaria parasite's lifecycle are currently under development, with one candidate, RTS,S/AS01 for the prevention of Plasmodium falciparum infection, having recently completed Phase III trials. Predicting the public health impact of a candidate malaria vaccine requires using clinical trial data to estimate the vaccine's efficacy profile: the initial efficacy following vaccination and the pattern of waning of efficacy over time. With an estimated vaccine efficacy profile, the effects of vaccination on malaria transmission can be simulated with the aid of mathematical models. Here, we provide an overview of methods for estimating the vaccine efficacy profiles of pre-erythrocytic vaccines and transmission-blocking vaccines from clinical trial data. In the case of RTS,S/AS01, model estimates from Phase II clinical trial data indicate a bi-phasic exponential profile of efficacy against infection, with efficacy waning rapidly in the first 6 months after vaccination followed by a slower rate of waning over the next 4 years. Transmission-blocking vaccines have yet to be tested in large-scale Phase II or Phase III clinical trials, so we review ongoing work investigating how a clinical trial might be designed to ensure that vaccine efficacy can be estimated with sufficient statistical power. Finally, we demonstrate how parameters estimated from clinical trials can be used to predict the impact of vaccination campaigns on malaria using a mathematical model of malaria transmission.
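A minimal sketch of the bi-phasic exponential waning profile described for RTS,S/AS01; all parameter values are illustrative placeholders, not the fitted trial estimates:

```python
import numpy as np

def biphasic_efficacy(t_days, v0, p, half_short, half_long):
    """v(t) = v0 * (p*exp(-r1*t) + (1-p)*exp(-r2*t)), with r = ln(2)/half-life;
    the short-lived component dominates the first months, the long-lived
    component the slower waning over subsequent years."""
    r1, r2 = np.log(2) / half_short, np.log(2) / half_long
    return v0 * (p * np.exp(-r1 * t_days) + (1 - p) * np.exp(-r2 * t_days))

t = np.linspace(0, 4 * 365, 5)
print(biphasic_efficacy(t, v0=0.8, p=0.6, half_short=90.0, half_long=730.0))
```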
The efficacy of respondent-driven sampling for the health assessment of minority populations.
Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao
2017-10-01
Respondent-driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. As in snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent of the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey to utilize this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data were collected in Guam in 2013 (n=511), and population estimates were compared with 2012 BRFSS data (n=2031) and 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method that can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants.
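A minimal sketch of degree-weighted RDS inference, assuming the widely used Volz-Heckathorn (RDS-II) estimator in which each respondent is down-weighted by their self-reported network size; the abstract does not name the exact estimator used:

```python
import numpy as np

def rds_ii_estimate(outcomes, degrees):
    """Inverse-degree weighting: high-degree people are over-sampled by
    recruitment chains, so each respondent gets weight 1/degree."""
    x = np.asarray(outcomes, float)
    w = 1.0 / np.asarray(degrees, float)
    return np.sum(w * x) / np.sum(w)

# e.g., a binary health indicator and self-reported network sizes
print(rds_ii_estimate([1, 0, 1, 1, 0], [10, 50, 5, 20, 40]))
```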
Improving the quantification of contrast enhanced ultrasound using a Bayesian approach
NASA Astrophysics Data System (ADS)
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-03-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity that can be useful in the quantification of different perfusion patterns. This can be particularly important in the early detection and staging of arthritis. In a recent study we have shown that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. Moreover, we have shown that pixel-by-pixel analysis yields quantitative information that characterizes the perfusion more effectively. However, the SNR of the data and the nonlinearity of the model make parameter estimation difficult. Using the classical nonlinear least-squares (NLLS) approach, the number of unreliable estimates (those with an asymptotic coefficient of variation greater than a user-defined threshold) is significant, thus affecting the overall description of the perfusion kinetics and of its heterogeneity. In this work we propose to solve the parameter estimation at the pixel level within a Bayesian framework using Variational Bayes (VB), with an automatic and data-driven prior initialization. When evaluating the pixels for which both VB and NLLS provided reliable estimates, we demonstrated that the parameter values provided by the two methods are well correlated (Pearson's correlation between 0.85 and 0.99). Moreover, the mean number of unreliable pixels drastically reduces from 54% (NLLS) to 26% (VB), without increasing the computational time (0.05 s/pixel for NLLS and 0.07 s/pixel for VB). When considering the efficiency of the algorithms as computational time per reliable estimate, VB outperforms NLLS (0.11 versus 0.25 seconds per reliable estimate, respectively).
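A minimal sketch of the per-pixel NLLS baseline that VB is compared against, using a common Gamma-variate parameterization (the paper's exact parameterization and priors are not given here):

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate bolus model: zero before arrival time t0, then a
    power-law rise with exponential washout."""
    tt = np.clip(t - t0, 0.0, None)
    return A * tt**alpha * np.exp(-tt / beta)

def fit_pixel(t, y, p0=(1.0, 1.0, 2.0, 5.0)):
    popt, pcov = curve_fit(gamma_variate, t, y, p0=p0, maxfev=5000)
    cv = np.sqrt(np.diag(pcov)) / np.abs(popt)   # asymptotic CV, i.e. the
    return popt, cv                              # reliability criterion above
```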
Crystal Growth of ZnSe and Related Ternary Compound Semiconductors by Physical Vapor Transport
NASA Technical Reports Server (NTRS)
Cushman, Paula P.
1997-01-01
Preliminary definition of all of the necessary materials, labor, services, and facilities necessary to provide science requirement definition, initiate hardware development activities, and provide an updated flight program proposal consistent with the NRA selection letter. The major tasks identified in this SOW are in the general category of science requirements determination, instrument definition, and updated flight program proposal. The Contractor shall define preliminary management, technical and integration requirements for the program, including improved cost/schedule estimates. The Contractor shall identify new technology requirements, define experiment accommodations and operational requirements and negotiate procurement of any long lead items, if required, with the government.
Crystal Growth of ZnSe and Related Ternary Compound Semiconductors by Physical Vapor Transport
NASA Technical Reports Server (NTRS)
Su, Ching-Hua
1997-01-01
Preliminary definition of all of the necessary materials, labor, services, and facilities necessary to provide science requirement definition, initiate hardware development activities, and provide an updated flight program proposal consistent with the NRA selection letter. The major tasks identified in this SOW are in the general category of science requirements determination, instrument definition, and updated flight program proposal. The Contractor shall define preliminary management, technical and integration requirements for the program, including improved cost/schedule estimates. The Contractor shall identify new technology requirements, define experiment accommodations and operational requirements and negotiate procurement of any long lead items, if required, with the government.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
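A minimal sketch of the exponential-function case, under one convenient parameterization in which the fitted initial mass appears directly as a parameter (the paper's exact parameterization is an assumption here):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_cmd(t, m0, k):
    """CMD(t) = m0*k*exp(-k*t): integrating from 0 to infinity returns m0,
    so the fitted m0 is itself the initial-mass estimate."""
    return m0 * k * np.exp(-k * t)

def initial_mass_exponential(t, cmd):
    p0 = (np.trapz(cmd, t), 1.0 / t[-1])         # crude starting values
    (m0, k), _ = curve_fit(exp_cmd, t, cmd, p0=p0, maxfev=5000)
    return m0
```

Fitting only the early-time third of the record, as tested above, uses the same call on a truncated series.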
Marine fisheries declines viewed upside down: human impacts on consumer-driven nutrient recycling.
Layman, Craig A; Allgeier, Jacob E; Rosemond, Amy D; Dahlgren, Craig P; Yeager, Lauren A
2011-03-01
We quantified how two human impacts (overfishing and habitat fragmentation) in nearshore marine ecosystems may affect ecosystem function by altering the role of fish as nutrient vectors. We empirically quantified size-specific excretion rates of one of the most abundant fishes (gray snapper, Lutjanus griseus) in The Bahamas and combined these with surveys of fish abundance to estimate population-level excretion rates. The study was conducted across gradients of two human disturbances: overfishing and ecosystem fragmentation (estuaries bisected by roads), to evaluate how each could result in reduced population-level nutrient cycling by consumers. Mean estimated N and P excretion rates for gray snapper populations were on average 456% and 541% higher, respectively, in unfished sites. Ecosystem fragmentation resulted in significant reductions of recycling rates by snapper, with degree of creek fragmentation explaining 86% and 72% of the variance in estimated excretion for dissolved N and P, respectively. Additionally, we used nutrient limitation assays and primary producer nutrient content to provide a simple example of how marine fishery declines may affect primary production. This study provides an initial step toward integrating marine fishery declines and consumer-driven nutrient recycling to more fully understand the implications of human impacts in marine ecosystems.
Establishing endangered species recovery criteria using predictive simulation modeling
McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol
2014-01-01
Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires the U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk.
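A minimal sketch of inverting such a binomial (logistic) regression for a recovery criterion; the coefficients below are illustrative placeholders, not the fitted piping plover values:

```python
import numpy as np

def required_abundance(b0, b1, b2, growth_rate, p_target):
    """Invert logit(P_ext) = b0 + b1*ln(N0) + b2*lambda for the initial
    abundance N0 that holds 50-year extinction probability at p_target."""
    logit_p = np.log(p_target / (1.0 - p_target))
    return np.exp((logit_p - b0 - b2 * growth_rate) / b1)

# b1 < 0: larger initial abundance lowers extinction risk (illustrative)
print(required_abundance(b0=12.0, b1=-2.0, b2=-5.0,
                         growth_rate=1.0, p_target=0.048))
```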
Detection of nonlinear ultrasonic signals based on modified Duffing equations
NASA Astrophysics Data System (ADS)
Zhang, Yuhua; Mao, Hanling; Mao, Hanying; Huang, Zhenfeng
Nonlinear ultrasonic signals, such as second harmonic generation (SHG) signals, reflect the material nonlinearity induced by fatigue damage in nonlinear ultrasonic techniques; they are weak nonlinear signals that are usually submerged in strong background noise. In this paper, modified Duffing equations are applied to detect SHG signals related to material fatigue damage. Because the Duffing equation can only detect a signal with a specific frequency and initial phase, a frequency transformation is first applied to the Duffing equation so that it can detect a signal of any frequency. The influence of the initial phases of the to-be-detected signal and the reference signal on the detection result is then studied in detail, and four modified Duffing equations are proposed to detect actual engineering signals with arbitrary initial phase. The relationship between the response amplitude and the total driving force is used to estimate the amplitude of the weak periodic signal. The detection results show that the modified Duffing equations can effectively detect the second harmonic in SHG signals. When the SHG signals include strong background noise, the noise does not change the motion state of the Duffing equation, and the second harmonic signal can still be detected at an SNR of the noisy SHG signals as low as -26.3, whereas the frequency spectrum method can only identify it when the SNR is greater than 0.5. When estimating the amplitude of the second harmonic signal, the estimation error of the Duffing equation is markedly smaller than that of the frequency spectrum analysis method at the same noise level, which illustrates the noise immunity of the Duffing approach. The presence of the second harmonic signal in nonlinear ultrasonic experiments can provide insight into the early fatigue damage of engineering components.
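A minimal sketch of the underlying detection idea, assuming a standard Holmes-type Duffing detector driven near its chaotic-to-periodic threshold; the damping and drive values are common textbook choices, not the paper's modified equations.

```python
# Hedged sketch of Duffing weak-signal detection: the detector is driven
# just below its transition threshold, and adding a weak in-band signal
# can tip it from chaotic to periodic motion (values are assumptions).
import numpy as np
from scipy.integrate import solve_ivp

K, FD = 0.5, 0.825            # damping and reference drive near threshold

def duffing(t, y, amp):
    x, v = y
    drive = FD * np.cos(t) + amp * np.cos(t)   # weak signal, same frequency
    return [v, -K * v + x - x**3 + drive]

def strobe_spread(amp, n_periods=200):
    """Std-dev of stroboscopic samples; a small spread suggests periodic
    motion, a large spread suggests the chaotic state persists."""
    T = 2 * np.pi
    sol = solve_ivp(duffing, (0, n_periods * T), [0.0, 0.0], args=(amp,),
                    t_eval=np.arange(50, n_periods) * T, rtol=1e-8)
    return np.std(sol.y[0])

print("no weak signal:", strobe_spread(0.0))
print("weak signal   :", strobe_spread(0.02))
```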
Vehicle Health Management Communications Requirements for AeroMACS
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Clements, Donna J.; Apaza, Rafael D.
2012-01-01
As the development of standards for the aeronautical mobile airport communications system (AeroMACS) progresses, the process of identifying and quantifying appropriate uses for the system is under way. In addition to defining important elements of AeroMACS standards, identifying the system's uses affects AeroMACS bandwidth requirements. Although an initial 59 MHz spectrum allocation for AeroMACS was established in 2007, the allocation may be inadequate; studies have indicated that 100 MHz or more of spectrum may be required to support airport surface communications. Hence, additional spectrum allocations have been proposed. Vehicle health management (VHM) systems, which can produce large volumes of vehicle health data, were not considered in the original bandwidth requirements analyses and are therefore of interest in supporting proposals for additional AeroMACS spectrum. VHM systems are an emerging development in air vehicle safety, and preliminary estimates of the amount of data that will be produced and transmitted off an aircraft, both in flight and on the ground, have been prepared based on estimates of data produced by on-board vehicle health sensors and initial concepts of data processing approaches. This allowed an initial estimate of VHM data transmission requirements for the airport surface. More recently, vehicle-level systems designed to process and analyze VHM data and draw conclusions on the current state of vehicle health have been undergoing testing and evaluation. These systems make use of vehicle system data that is mostly different from the VHM data considered previously for airport surface transmission, and produce processed system outputs that will also need to be archived, thus generating additional data load for AeroMACS. This paper provides an analysis of airport surface data transmission requirements resulting from the vehicle-level reasoning systems, within the context of overall VHM data requirements.
Troxler, Tiffany G.; Gaiser, Evelyn; Barr, Jordan; Fuentes, Jose D.; Jaffe, Rudolf; Childers, Daniel L.; Collado-Vides, Ligia; Rivera-Monroy, Victor H.; Castañeda-Moya, Edward; Anderson, William; Chambers, Randy; Chen, Meilian; Coronado-Molina, Carlos; Davis, Stephen E.; Engel, Victor C.; Fitz, Carl; Fourqurean, James; Frankovich, Tom; Kominoski, John; Madden, Chris; Malone, Sparkle L.; Oberbauer, Steve F.; Olivas, Paulo; Richards, Jennifer; Saunders, Colin; Schedlbauer, Jessica; Scinto, Leonard J.; Sklar, Fred; Smith, Thomas J.; Smoak, Joseph M.; Starr, Gregory; Twilley, Robert; Whelan, Kevin
2013-01-01
Recent studies suggest that coastal ecosystems can bury significantly more C than tropical forests, indicating that continued coastal development and exposure to sea level rise and storms will have global biogeochemical consequences. The Florida Coastal Everglades Long Term Ecological Research (FCE LTER) site provides an excellent subtropical system for examining carbon (C) balance because of its exposure to historical changes in freshwater distribution and sea level rise and its history of significant long-term carbon-cycling studies. FCE LTER scientists used net ecosystem C balance and net ecosystem exchange data to estimate C budgets for riverine mangrove, freshwater marsh, and seagrass meadows, providing insights into the magnitude of C accumulation and lateral aquatic C transport. Rates of net C production in the riverine mangrove forest exceeded those reported for many tropical systems, including terrestrial forests, but there are considerable uncertainties around those estimates due to the high potential for gain and loss of C through aquatic fluxes. C production was approximately balanced between gain and loss in Everglades marshes; however, the contribution of periphyton increases uncertainty in these estimates. Moreover, while the approaches used for these initial estimates were informative, a resolved approach for addressing areas of uncertainty is critically needed for coastal wetland ecosystems. Once resolved, these C balance estimates, in conjunction with an understanding of drivers and key ecosystem feedbacks, can inform cross-system studies of ecosystem response to long-term changes in climate, hydrologic management, and other land use along coastlines.
Quantifying the effect of experimental design choices for in vitro scratch assays.
Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J
2016-07-07
Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay, as well as varying the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret than a wound assay, with two opposingly directed cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment. Copyright © 2016 Elsevier Ltd. All rights reserved.
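A minimal ABC-rejection sketch of the inference described above, assuming an idealised Fisher-KPP front-speed summary in place of the paper's full model and imaging data; priors, noise level, and tolerance are illustrative.

```python
# Hedged ABC-rejection sketch for (D, lambda). The front position is
# summarised by the Fisher-KPP speed c = 2*sqrt(D*lambda); note that
# with this summary alone, D and lambda are only identified through
# their product, whereas the paper's richer density profiles separate
# them. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def front_positions(D, lam, t_obs, x0=0.0):
    return x0 + 2.0 * np.sqrt(D * lam) * t_obs   # idealised front model

t_obs = np.array([12.0, 24.0, 36.0])             # hours (assumed design)
data = front_positions(1600.0, 0.05, t_obs)      # synthetic "truth"
data += rng.normal(0, 10.0, size=t_obs.size)     # measurement noise

n, eps = 200_000, 25.0
D_s = rng.uniform(100, 5000, n)                  # um^2/h, uniform prior
lam_s = rng.uniform(0.01, 0.2, n)                # 1/h, uniform prior
sim = front_positions(D_s[:, None], lam_s[:, None], t_obs)
keep = np.linalg.norm(sim - data, axis=1) < eps  # rejection step

print("posterior mean D     :", D_s[keep].mean())
print("posterior mean lambda:", lam_s[keep].mean())
```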
Multiparametric estimation of brain hemodynamics with MR fingerprinting ASL.
Su, Pan; Mao, Deng; Liu, Peiying; Li, Yang; Pinho, Marco C; Welch, Babu G; Lu, Hanzhang
2017-11-01
Assessment of brain hemodynamics without exogenous contrast agents is of increasing importance in clinical applications. This study aims to develop an MR perfusion technique that can provide noncontrast and multiparametric estimation of hemodynamic markers. We devised an arterial spin labeling (ASL) method based on the principle of MR fingerprinting (MRF), referred to as MRF-ASL. By taking advantage of the rich information contained in the MRF sequence, up to seven hemodynamic parameters can be estimated concomitantly. Feasibility demonstration, flip angle optimization, comparison with Look-Locker ASL, reproducibility test, sensitivity to hypercapnia challenge, and initial clinical application in an intracranial steno-occlusive process, Moyamoya disease, were performed to evaluate this technique. Magnetic resonance fingerprinting ASL provided estimation of up to seven parameters, including B1+, tissue T1, cerebral blood flow (CBF), tissue bolus arrival time (BAT), pass-through arterial BAT, pass-through blood volume, and pass-through blood travel time. Coefficients of variation of the estimated parameters ranged from 0.2 to 9.6%. Hypercapnia resulted in an increase in CBF by 57.7%, and a decrease in BAT by 13.7 and 24.8% in tissue and vessels, respectively. Patients with Moyamoya disease showed diminished CBF and lengthened BAT that could not be detected with regular ASL. Magnetic resonance fingerprinting ASL is a promising technique for noncontrast, multiparametric perfusion assessment. Magn Reson Med 78:1812-1823, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Vasiliu, Oana E; Stover, Jeffrey A; Mays, Marissa J E; Bissette, Jennifer M; Dolan, Carrie B; Sirbu, Corina M
2009-01-01
We investigated the effect of providing mailing cost reimbursements to local health departments on the timeliness of the reporting of sexually transmitted diseases (STDs) in Virginia. The Division of Disease Prevention, Virginia Department of Health, provided mailing cost reimbursements to 31 Virginia health districts from October 2002 to December 2004. The difference (in days) between the diagnosis date (or the date the STD paperwork was initiated) and the date the case/STD report was entered into the STD surveillance database was used in a negative binomial regression model against time (divided into three periods: before, during, and after reimbursement) to estimate the effect of providing mailing cost reimbursements on reporting timeliness. We observed significant decreases in the number of days between diagnosis and reporting of a case, which were sustained after the reimbursement period ended, in 25 of the 31 health districts included in the analysis. We observed a significant initial decrease (during the reimbursement period) followed by a significant increase in the after-reimbursement phase in one health district. Two health districts had a significant initial decrease, while one health district had a significant decrease in reporting timeliness in the period after reimbursement. Two health districts showed no significant changes in the number of days to report to the central office. Providing reimbursements for mailing costs was statistically associated with improved STD reporting timeliness in almost all of Virginia's health districts. Sustained improvement after the reimbursement period ended is likely indicative of improved local health department reporting habits.
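A hedged sketch of the timeliness model on simulated data; the column names, dispersion parameter, and period means are illustrative, not the Virginia dataset.

```python
# Hedged sketch: days from diagnosis to report regressed on reporting
# period with a negative binomial GLM (all data simulated).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
period = rng.choice(["before", "during", "after"], size=600)
mu = np.array([{"before": 30, "during": 18, "after": 16}[p] for p in period])
days = rng.negative_binomial(n=5, p=5 / (5 + mu))   # NB mean equals mu
df = pd.DataFrame({"days_to_report": days, "period": period})

model = smf.glm("days_to_report ~ C(period, Treatment('before'))",
                data=df, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(model.summary())   # negative coefficients imply faster reporting
```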
Chapelle, Francis H.; Thomas, Lashun K.; Bradley, Paul M.; Rectanus, Heather V.; Widdowson, Mark A.
2012-01-01
Aquifer sediment and groundwater chemistry data from 15 Department of Defense facilities located throughout the United States were collected and analyzed with the goal of estimating the amount of natural organic carbon needed to initiate reductive dechlorination in groundwater systems. Aquifer sediments were analyzed for hydroxylamine- and NaOH-extractable organic carbon, yielding a probable underestimate of potentially bioavailable organic carbon (PBOC). Aquifer sediments were also analyzed for total organic carbon (TOC) using an elemental combustion analyzer, yielding a probable overestimate of bioavailable carbon. Concentrations of PBOC correlated linearly with TOC with a slope near one. However, concentrations of PBOC were consistently five to ten times lower than TOC. When mean concentrations of dissolved oxygen observed at each site were plotted versus PBOC, the plots showed that anoxic conditions were initiated at approximately 200 mg/kg of PBOC. Similarly, the accumulation of reductive dechlorination daughter products relative to parent compounds increased at a PBOC concentration of approximately 200 mg/kg. Concentrations of total hydrolysable amino acids (THAA) in sediments also increased at approximately 200 mg/kg, and bioassays showed that sediment CO2 production correlated positively with THAA. The results of this study provide an estimate of the threshold amounts of bioavailable carbon present in aquifer sediments (approximately 200 mg/kg of PBOC; approximately 1,000 to 2,000 mg/kg of TOC) needed to support reductive dechlorination in groundwater systems.
Determination of the Damage Mechanism of the Planet Gear of a Heavy Vehicle Final Drive
NASA Astrophysics Data System (ADS)
Ramdan, RD; Setiawan, R.; Sasmita, F.; Suratman, R.; Taufiqulloh
2018-02-01
This work investigates the damage mechanism behind fracture, in the form of spalling, of the planet gears from the final drive assembly of 160-ton heavy vehicles. The objective is a clear understanding of the mechanism of damage; the work is the first stage of ongoing research on estimating the remaining life of such gears. Understanding the damage mechanism is critical to providing an accurate estimate of the remaining life of a gear with observed initial damage. The analysis was based on metallurgical laboratory work, including visual observation, macro- and micro-fractography with stereo and optical microscopes, and micro-Vickers hardness testing. Visual observation revealed pitting forming a line of defects at a common position, namely the gear flank. On the spalled sample, ratchet marks were observed at the boundary between macro-pitting and the edge of the fractured parts. Further optical microscopy of sample cross-sections confirmed that initial micro-pitting occurs without spalling of the case-hardened surface. Spalling occurs once pitting reaches a critical size, with crack propagation starting from multiple initiation sites. The present research concludes that the pitting resulted from repeated contact fatigue. In addition, the progression from micro- to macro-pitting, and then spalling, proceeds in a consistent direction toward the top of the gear teeth.
Burdette, Amy M; Webb, Noah S; Hill, Terrence D; Jokinen-Gordon, Hanna
2017-01-01
Although racial and ethnic differences in HPV vaccination initiation are well established, it is unclear whether these disparities have changed over time. The role of health provider recommendations in reducing any racial and ethnic inequalities is also uncertain. This study addresses these gaps in the literature. Repeated cross-sectional design. Using data from the National Immunization Survey-Teen (2008-2013), we estimated a series of binary logistic regressions to model race-specific trends in (1) provider recommendations to vaccinate against HPV and (2) HPV vaccine initiation for males (n = 56,632) and females (n = 77,389). Provider recommendations to vaccinate and HPV vaccination uptake have increased over time for adolescent males and females and across all racial and ethnic groups. Among girls, minority youths have seen a sharper increase in provider recommendations and HPV vaccination uptake than their White counterparts. Among boys, minority teens maintain higher overall rates of HPV vaccine uptake; however, Hispanics have lagged behind non-Hispanic Whites in the rate of increase in provider recommendations and HPV vaccinations. Our results suggest that racial and ethnic disparities in provider recommendations and HPV vaccinations have waned over time among males and females. While these trends are welcome, additional interventions are warranted to increase overall rates of vaccination across race, ethnicity, and gender. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provides spectral response information that provides detailed chemical, moisture, and other description of constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale
Pérez Sánchez, Carlos Javier
2014-01-01
Agreement analysis has been an active research area whose techniques have been widely applied in psychology and other fields. However, statistical agreement among raters has mainly been considered from a classical statistics point of view. Bayesian methodology is a viable alternative that allows the inclusion of subjective initial information coming from expert opinions, personal judgments, or historical data. A Bayesian approach is proposed that provides a unified Monte Carlo-based framework to estimate all types of measures of agreement on a qualitative scale of response. The approach is conceptually simple and has a low computational cost. Both informative and non-informative scenarios are considered. In case no initial information is available, the results are in line with the classical methodology, while providing more information on the measures of agreement. For the informative case, some guidelines are presented to elicit the prior distribution. The approach has been applied to two applications related to schizophrenia diagnosis and sensory analysis. PMID:29881002
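A minimal sketch of the Monte Carlo idea for one common agreement measure, Cohen's kappa, assuming a Dirichlet prior on the cells of a two-rater contingency table; the counts are illustrative.

```python
# Hedged sketch: sample the posterior of Cohen's kappa by drawing cell
# probabilities from a Dirichlet posterior over the contingency table.
import numpy as np

counts = np.array([[40, 5, 2],
                   [6, 30, 4],
                   [1, 3, 25]])             # rater-1 x rater-2 categories
prior = np.ones_like(counts)                 # non-informative Dirichlet(1)

rng = np.random.default_rng(7)
draws = rng.dirichlet((counts + prior).ravel(), size=20_000)
p = draws.reshape(-1, 3, 3)

po = p.trace(axis1=1, axis2=2)                       # observed agreement
pe = (p.sum(axis=2) * p.sum(axis=1)).sum(axis=1)     # chance agreement
kappa = (po - pe) / (1 - pe)

print("posterior mean kappa :", kappa.mean())
print("95% credible interval:", np.quantile(kappa, [0.025, 0.975]))
```

With a flat prior the posterior mean tracks the classical point estimate, while the draws supply the full uncertainty the classical plug-in value lacks.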
Development and Validation of Personality Disorder Spectra Scales for the MMPI-2-RF.
Sellbom, Martin; Waugh, Mark H; Hopwood, Christopher J
2018-01-01
The purpose of this study was to develop and validate a set of MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) personality disorder (PD) spectra scales. These scales could serve the purpose of assisting with DSM-5 PD diagnosis and help link categorical and dimensional conceptions of personality pathology within the MMPI-2-RF. We developed and provided initial validity results for scales corresponding to the 10 PD constructs listed in the DSM-5 using data from student, community, clinical, and correctional samples. Initial validation efforts indicated good support for criterion validity with an external PD measure as well as with dimensional personality traits included in the DSM-5 alternative model for PDs. Construct validity results using psychosocial history and therapists' ratings in a large clinical sample were generally supportive as well. Overall, these brief scales provide clinicians using MMPI-2-RF data with estimates of DSM-5 PD constructs that can support cross-model connections between categorical and dimensional assessment approaches.
B. Lane Rivenbark; C. Rhett Jackson
2004-01-01
Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...
On the Error of the Dixon Plot for Estimating the Inhibition Constant between Enzyme and Inhibitor
ERIC Educational Resources Information Center
Fukushima, Yoshihiro; Ushimaru, Makoto; Takahara, Satoshi
2002-01-01
In textbook treatments of enzyme inhibition kinetics, adjustment of the initial inhibitor concentration for inhibitor bound to enzyme is often neglected. For example, in graphical plots such as the Dixon plot for estimation of an inhibition constant, the initial concentration of inhibitor is usually plotted instead of the true inhibitor…
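For reference, the standard textbook Dixon construction for competitive inhibition reads as follows (this is the general form, not notation taken from the paper):

```latex
% Dixon plot: 1/v against total inhibitor concentration [I] at several
% substrate levels [S] (competitive inhibition, textbook form).
\[
  \frac{1}{v} \;=\; \frac{K_m}{V_{\max}\,K_i\,[S]}\,[I]
             \;+\; \frac{1}{V_{\max}}\left(1 + \frac{K_m}{[S]}\right)
\]
% Lines for different [S] intersect at [I] = -K_i. The error discussed
% above enters because the plotted total concentration exceeds the free
% concentration whenever an appreciable fraction is bound to enzyme:
% [I]_free = [I]_0 - [EI].
```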
NASA Astrophysics Data System (ADS)
van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.
2013-05-01
Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.
NASA Astrophysics Data System (ADS)
Herda, Maxime; Rodrigues, L. Miguel
2018-03-01
The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted L^2 space, and where dependencies on the mean-free path τ and the Debye length δ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions τ → ∞ to the strongly collisional regime τ → 0. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly we pay special attention to relaxing as much as possible the τ-dependent constraint on δ ensuring exponential decay with explicit τ-dependent rates towards the stationary solution. In the strongly collisional limit τ → 0, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniformity with respect to time and to initial data in bounded sets of an L^2 space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and a careful tracking and optimization of parameter dependencies of hypocoercive/hypoelliptic estimates.
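For orientation, one common nondimensionalization of the system is sketched below; the paper's exact scaling may differ.

```latex
% A common nondimensionalization of the Vlasov-Poisson-Fokker-Planck
% system (sketch only; the paper's precise scaling may differ):
\[
  \partial_t f + v\cdot\nabla_x f + E\cdot\nabla_v f
    \;=\; \frac{1}{\tau}\,\nabla_v\cdot\bigl(v\,f + \nabla_v f\bigr),
  \qquad
  E = -\nabla_x \phi,
  \qquad
  -\,\delta^2\,\Delta_x \phi \;=\; n_b(x) - \int f\,\mathrm{d}v,
\]
% where tau is the mean-free path, delta the Debye length, and n_b the
% steady inhomogeneous background density of opposite charges.
```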
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuejun; Tang, Qiuhong; Liu, Xingcai
Real-time monitoring and predicting drought development several months in advance is of critical importance for drought risk adaptation and mitigation. In this paper, we present a drought monitoring and seasonal forecasting framework based on the Variable Infiltration Capacity (VIC) hydrologic model over Southwest China (SW). Satellite precipitation data are used to force the VIC model for near real-time estimates of land surface hydrologic conditions. Initialized with the satellite-aided monitoring, both the climate model-based forecast (CFSv2_VIC) and the ensemble streamflow prediction (ESP)-based forecast (ESP_VIC) are performed and evaluated through their ability to reproduce the evolution of the 2009/2010 severe drought over SW. The results show that the satellite-aided monitoring is able to provide reasonable estimates of forecast initial conditions (ICs) in a real-time manner. Both CFSv2_VIC and ESP_VIC exhibit comparable performance against the observation-based estimates for the first month, whereas the predictive skill drops substantially beyond one month. Compared to ESP_VIC, CFSv2_VIC shows better performance, as indicated by the smaller ensemble range. This study highlights the value of this operational framework in generating near real-time ICs and giving a reliable prediction one month ahead, which has great implications for drought risk assessment, preparation, and relief.
Waterbird use of catfish ponds and migratory bird habitat initiative wetlands in Mississippi
Feaga, James S.; Vilella, Francisco; Kaminski, Richard M.; Davis, J. Brian
2015-01-01
Aquaculture can provide important surrogate habitats for waterbirds. In response to the 2010 Deepwater Horizon oil spill, the National Resource Conservation Service enacted the Migratory Bird Habitat Initiative through which incentivized landowners provided wetland habitats for migrating waterbirds. Diversity and abundance of waterbirds in six production and four idled aquaculture facilities in the Mississippi Alluvial Valley were estimated during the winters of 2011–2013. Wintering waterbirds exhibited similar densities on production (i.e., ∼22 birds/ha) and idled (i.e., ∼20 birds/ha) sites. A total of 42 species were found using both types of aquaculture wetlands combined, but there was considerable departure in bird guilds occupying the two wetland types. The primary users of production ponds were diving and dabbling ducks and American coots. However, idled ponds, with varying water depths (e.g., mudflats to 20 cm) and diverse emergent vegetation-water interspersion, attracted over 30 species of waterbirds and, on average, had more species of waterbirds from fall through early spring than catfish production ponds. Conservation through the Migratory Bird Habitat Initiative was likely responsible for this difference. Our results suggest production and idled Migratory Bird Habitat Initiative aquaculture impoundments produced suitable conditions for various waterbird species and highlight the importance of conservation programs on private lands that promote diversity in vegetation structure and water depths to enhance waterbird diversity.
A Comparison of Nonlinear Filters for Orbit Determination and Estimation
1986-06-01
...Command uses a nonlinear least squares filter for element set maintenance for all objects orbiting the Earth (3). These objects, including active... The initial state vector is the singularly averaged classical orbital element set provided by SPACECOM/DOA. The state vector in this research consists of... [tabular filter-comparison results for GSF(G) and GSF(A) garbled in extraction] The Air Force Space Command is responsible for maintaining current orbital element sets for about
NASA Technical Reports Server (NTRS)
Golombek, M. P.; Banerdt, W. B.
1985-01-01
While it is generally agreed that the strength of a planet's lithosphere is controlled by a combination of brittle sliding and ductile flow laws, predicting the geometry and initial characteristics of faults due to failure from stresses imposed on the lithospheric strength envelope has not been thoroughly explored. Researchers used lithospheric strength envelopes to analyze the extensional features found on Ganymede. This application provides a quantitative means of estimating early thermal profiles on Ganymede, thereby constraining its early thermal evolution.
Solyndra Loan Guarantee Announcement
Vice President Biden and Secretary Chu were joined by California Governor Arnold Schwarzenegger and Solyndra CEO Dr. Chris Grone
2017-12-09
Vice President Biden and Secretary Chu were joined by California Governor Arnold Schwarzenegger and Solyndra CEO Dr. Chris Grone to announce that the Department of Energy has finalized a $535 million loan guarantee for Solyndra, Inc., which manufactures innovative cylindrical solar photovoltaic panels that provide clean, renewable energy. The funding will finance construction of the first phase of the company's new manufacturing facility. Solyndra estimates the new plant will initially create 3,000 construction jobs, and lead to as many as 1,000 jobs once the facility opens.
Orbital Spacecraft Consumables Resupply System (OSCRS). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1987-01-01
The objective was to establish an earth-storable fluid tanker concept which satisfies the initial resupply requirements for the Gamma Ray Observatory (GRO) at a reasonable front-end cost while providing growth potential for foreseeable future earth-storable fluid resupply mission requirements. The estimated costs required to design, develop, qualify, fabricate, and deliver a flight tanker and its associated control avionics, ground support equipment (GSE), and processing facilities, and the contractor's costs to support the first operations mission, are reviewed.
2013-09-30
Tracking and Predicting Fine Scale Sea Ice Motion by Constructing Super-Resolution Images. ...limited, but potentially provide more detailed data. Initial assessments have been made on MODIS data in terms of its suitability. While clouds obscure... estimates. Data from Aqua, Terra, and Suomi NPP satellites were investigated. Aqua and Terra are older satellites that fly the MODIS instrument
Evaluating Realized Impacts of DOE/EERE R&D Programs. Standard impact evaluation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruegg, Rosalie; O'Connor, Alan C.; Loomis, Ross J.
2014-08-01
This document provides guidance for evaluators who conduct impact assessments of research and development (R&D) programs for the U.S. Department of Energy’s (DOE) Office of Energy Efficiency and Renewable Energy (EERE). It is also targeted at EERE program staff responsible for initiating and managing commissioned impact studies. The guide specifies how to estimate economic benefits and costs, energy saved and installed or generated, environmental impacts, energy security impacts, and knowledge impacts of R&D investments in advanced energy technologies.
2015-05-01
for issuing this critical change: inability to achieve PKI Increment 2 Full Deployment Decision (FDD) within five years of program initiation... (March 1, 2014 deadline), and delay of over one year in the original FDD estimate provided to the Congress (1 March 2014 deadline). The proximate... to support a 1 March 2014 FDD." The Director, Performance Assessments and Root Cause Analyses (PARCA), asked the Institute for Defense Analyses
Department of the Navy Supporting Data for FY1991 Budget Estimates Descriptive Summaries
1990-01-01
deployments of the F/A-18 aircraft. f. (U) Engineering and technical support for AAS-38 tracker and F/A-18 C/D WSSA. g. (U) Provided support to ATARS program for preliminary testing of RECCE/ATARS common nose and associated air data computer (ADC) algorithms. h. (U) Initiated integration of full HARPOON and... to ATARS program for testing of flight control computer software.
2009-10-29
Presidential Initiative to End Hunger in Africa (IEHA), which represented the U.S. strategy to help fulfill the MDG goal of halving hunger by 2015... was constrained in funding and limited in scope. In 2005, USAID, the primary agency that implemented IEHA, committed to providing an estimated $200... Development Assistance (DA) and other accounts. IEHA was intended to build an African-led partnership to cut hunger and poverty by investing in efforts
Ergodicity of the generalized lemon billiards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jingyu; Mohr, Luke; Zhang, Hong-Kun, E-mail: hongkun@math.umass.edu
2013-12-15
In this paper, we study a two-parameter family of convex billiard tables, obtained by taking the intersection of two round disks (with different radii) in the plane. These tables give a generalization of the one-parameter family of lemon-shaped billiards. Initially, there is only one ergodic table among all lemon tables. In our generalized family, we observe numerically the prevalence of ergodicity among some perturbations of that table. Moreover, numerical estimates of the mixing rate of the billiard dynamics on some ergodic tables are also provided.
Simmons, Joseph P; LeBoeuf, Robyn A; Nelson, Leif D
2010-12-01
Increasing accuracy motivation (e.g., by providing monetary incentives for accuracy) often fails to increase adjustment away from provided anchors, a result that has led researchers to conclude that people do not effortfully adjust away from such anchors. We challenge this conclusion. First, we show that people are typically uncertain about which way to adjust from provided anchors and that this uncertainty often causes people to believe that they have initially adjusted too far away from such anchors (Studies 1a and 1b). Then, we show that although accuracy motivation fails to increase the gap between anchors and final estimates when people are uncertain about the direction of adjustment, accuracy motivation does increase anchor-estimate gaps when people are certain about the direction of adjustment, and that this is true regardless of whether the anchors are provided or self-generated (Studies 2, 3a, 3b, and 5). These results suggest that people do effortfully adjust away from provided anchors but that uncertainty about the direction of adjustment makes that adjustment harder to detect than previously assumed. This conclusion has important theoretical implications, suggesting that currently emphasized distinctions between anchor types (self-generated vs. provided) are not fundamental and that ostensibly competing theories of anchoring (selective accessibility and anchoring-and-adjustment) are complementary. PsycINFO Database Record (c) 2010 APA, all rights reserved.
An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Zavala, Victor M.
We propose an optimization-based state estimation framework to track internal space-time flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles, but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
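A minimal sketch of the moving-horizon idea with a prior term, using a scalar linear surrogate for the pipeline states rather than the authors' Plasmo.jl PDE model; dynamics, weights, and data are illustrative.

```python
# Hedged sketch of moving-horizon estimation: least squares over a
# horizon with measurement, dynamics, and prior residuals, where the
# prior anchors the (otherwise ill-posed) initial state.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
A, H = 0.95, 10                            # surrogate dynamics, horizon
x_true = 2.0 * A ** np.arange(H)           # "true" pressure trajectory
y_meas = x_true + rng.normal(0, 0.05, H)   # noisy sensor data
x_prior, w_prior = 2.0, 3.0                # steady-state prior and weight

def residuals(x):
    fit = y_meas - x                       # measurement residuals
    dyn = x[1:] - A * x[:-1]               # dynamics (transport) residuals
    pri = [w_prior * (x[0] - x_prior)]     # prior on the horizon start
    return np.concatenate([fit, dyn, pri])

sol = least_squares(residuals, x0=np.ones(H))
print("estimated initial state:", sol.x[0])
```

Setting `w_prior = 0` mimics the no-prior strategy; with short horizons the first state then drifts, which is the instability the prior term suppresses.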
Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.
2016-11-08
We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
3D Reconstruction of the Source and Scale of Buried Young Flood Channels on Mars
NASA Astrophysics Data System (ADS)
Morgan, Gareth A.; Campbell, Bruce A.; Carter, Lynn M.; Plaut, Jeffrey J.; Phillips, Roger J.
2013-05-01
Outflow channels on Mars are interpreted as the product of gigantic floods due to the catastrophic eruption of groundwater that may also have initiated episodes of climate change. Marte Vallis, the largest of the young martian outflow channels (<500 million years old), is embayed by lava flows that hinder detailed studies and comparisons with older channel systems. Understanding Marte Vallis is essential to our assessment of recent Mars hydrologic activity during a period otherwise considered to be cold and dry. Using data from the Shallow Radar sounder on the Mars Reconnaissance Orbiter, we present a three-dimensional (3D) reconstruction of buried channels on Mars and provide estimates of paleohydrologic parameters. Our work shows that Cerberus Fossae provided the waters that carved Marte Vallis, and it extended an additional 180 kilometers to the east before the emplacement of the younger lava flows. We identified two stages of channel incision and determined that channel depths were more than twice those of previous estimates.
Mountains, Melting Pot, and Microcosm: Health Care Delay and Dengue/Zika Interplay on Hawaii Island.
Baenziger, Nancy L
2016-11-01
Human history in the Hawaiian Islands offers a sobering study in the population dynamics of infectious disease. The indigenous population numbering an estimated half million people prior to Western contact in 1778 was reduced to less than 24,000 by 1920. Much of the decline occurred in the earliest decades after contact with Western diseases including measles, chicken pox, polio, tuberculosis, and venereal disease. A recent outbreak on the Island of Hawaii (also called the Big Island) of imported dengue fever, an illness endemic in 100 countries affecting an estimated 100-400 million people worldwide, provides insights into the problems and prospects for health care policy in managing mosquito-borne disease in a multicultural setting of geographic isolation and health care provider shortage. This incident represents in microcosm a practice run, applicable in many contexts, for an initial localized appearance of Zika virus infection, with important lessons for effective health care management in a rapidly moving and fluid arena.
Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.
Estevis, Eduardo; Basso, Michael R; Combs, Dennis
2012-01-01
A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
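A hedged sketch of the two reliable-change computations named above; the reliability, score SDs, and follow-up mean are illustrative stand-ins, apart from the baseline FSIQ mean and roughly 7-point gain reported in the text.

```python
# Hedged sketch of reliable change indices (RCIs) for retest scores:
# the practice-adjusted simple-difference method and the bivariate
# regression method. Psychometric constants are assumed values.
import numpy as np

r12, sd1, sd2 = 0.95, 15.0, 15.0   # test-retest reliability, score SDs
mean_gain = 7.0                     # mean FSIQ practice effect (from text)

def rci_simple(x1, x2):
    """Practice-adjusted simple-difference RCI."""
    sem = sd1 * np.sqrt(1 - r12)
    se_diff = np.sqrt(2) * sem
    return (x2 - x1 - mean_gain) / se_diff

def rci_regression(x1, x2, mean1=111.6, mean2=118.6):
    """Bivariate-regression RCI: compare x2 with its predicted value."""
    b = r12 * sd2 / sd1
    a = mean2 - b * mean1
    se_est = sd2 * np.sqrt(1 - r12**2)
    return (x2 - (a + b * x1)) / se_est

print(rci_simple(110, 125), rci_regression(110, 125))  # |z| > 1.645: change
```

The regression method conditions on the baseline score, which is why it yields more accurate base rates of change than the raw difference.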
NASA Instrument Cost/Schedule Model
NASA Technical Reports Server (NTRS)
Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George
2011-01-01
NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system-level cost estimation tool; a subsystem-level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves, and a demonstration of the NICM tool suite.
Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.
King, Leandra; Wakeley, John
2016-09-01
We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is both asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
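A minimal sketch of the empirical-Bayes recalibration, assuming a simplified Poisson mutation model and simulated loci in place of the paper's clade-splitting estimator; the mutation rate and locus count are illustrative.

```python
# Hedged sketch: per-locus TMRCA estimates from many unlinked loci form
# an empirical prior that recalibrates the focal locus's estimate,
# without assuming any parametric demographic history.
import numpy as np

rng = np.random.default_rng(11)
n_loci, theta = 500, 5.0
t_true = rng.exponential(1.0, n_loci)     # pairwise coalescent TMRCAs
s_obs = rng.poisson(theta * t_true)       # segregating sites per locus

grid = np.linspace(1e-3, 8, 400)
prior = np.histogram(s_obs / theta, bins=400, range=(1e-3, 8),
                     density=True)[0] + 1e-9   # empirical prior, all loci

def eb_posterior_mean(s):
    loglike = s * np.log(theta * grid) - theta * grid   # Poisson kernel
    w = np.exp(loglike - loglike.max()) * prior
    return np.sum(grid * w) / np.sum(w)

print("naive estimate:", s_obs[0] / theta)
print("EB estimate   :", eb_posterior_mean(s_obs[0]))
print("true TMRCA    :", t_true[0])
```

As the number of unlinked loci grows, the empirical prior approaches the true TMRCA distribution, which is the sense in which the estimator is asymptotically optimal.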
Energy balance during underwater implosion of ductile metallic cylinders.
Chamberlin, Ryan E; Guzas, Emily L; Ambrico, Joseph M
2014-11-01
Energy-based metrics are developed and applied to a numerical test case of implosion of an underwater pressure vessel. The energy metrics provide estimates of the initial energy in the system (potential energy), the energy released into the fluid as a pressure pulse, the energy absorbed by the imploding structure, and the energy absorbed by air trapped within the imploding structure. The primary test case considered is the implosion of an aluminum cylinder [diameter: 2.54 cm (1 in.), length: 27.46 cm (10.81 in.)] that collapses flat in a mode-2 shape with minimal fracture. The test case indicates that the structure absorbs the majority (92%) of the initial energy in the system. Consequently, the energy emitted as a pressure pulse into the fluid is a small fraction, approximately 5%, of the initial energy. The energy absorbed by the structure and the energy emitted into the fluid are calculated for additional simulations of underwater pressure vessel implosions. For all cases investigated, there is minimal fracture in the collapse, the structure absorbs more than 80% of the initial energy of the system, and the released pressure pulse carries away less than 6% of the initial energy.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most physical properties of the earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance, and Hurst coefficient of the model space to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
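A minimal sketch of drawing a power-law initial model by spectral synthesis, assuming the common convention that a 1D fBm-like profile has a power spectrum proportional to f^-(2H+1); this is illustrative, not the authors' implementation.

```python
# Hedged sketch: generate a fractal (power-law) initial model with a
# given Hurst coefficient, mean, and variance by spectral synthesis.
import numpy as np

def fractal_model(n, hurst, mean, var, rng):
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(2 * hurst + 1) / 2)   # amplitude ~ f^(-beta/2)
    phase = rng.uniform(0, 2 * np.pi, freqs.size)   # random phases
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    x = (x - x.mean()) / x.std()                    # normalise, then rescale
    return mean + np.sqrt(var) * x

rng = np.random.default_rng(5)
model0 = fractal_model(1024, hurst=0.7, mean=2500.0, var=200.0**2, rng=rng)
print(model0[:5])   # e.g., an impedance-like starting profile for inversion
```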
Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts
2012-10-01
To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets analyzed here were calculated from preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be formed based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of the nonaberrated intensity using fewer than 170 modes of ZPs. Initial estimates based on the average of the phase aberration data from the individual subgroups were shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can improve robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy.
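A minimal sketch of Zernike-encoded phase description: a few low-order modes on the unit disk are least-squares fitted to a simulated phase map. The mode set and normalisation follow the common Noll convention; the paper's element-wise encoding of the hemispherical transducer is more involved.

```python
# Hedged sketch: fit low-order Zernike modes to a noisy phase map
# sampled at scattered positions on the unit disk.
import numpy as np

def zernike_basis(rho, theta):
    """First six Zernike modes (piston, tilts, defocus, astigmatisms)."""
    return np.stack([
        np.ones_like(rho),                          # piston
        2 * rho * np.cos(theta),                    # x-tilt
        2 * rho * np.sin(theta),                    # y-tilt
        np.sqrt(3) * (2 * rho**2 - 1),              # defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),    # astigmatism
        np.sqrt(6) * rho**2 * np.sin(2 * theta),    # oblique astigmatism
    ], axis=-1)

rng = np.random.default_rng(2)
rho = np.sqrt(rng.uniform(0, 1, 500))    # uniform sample positions on disk
theta = rng.uniform(0, 2 * np.pi, 500)
B = zernike_basis(rho, theta)

coeffs_true = np.array([0.0, 0.3, -0.2, 0.8, 0.1, -0.4])    # radians
phase = B @ coeffs_true + rng.normal(0, 0.05, 500)          # noisy phases

coeffs_fit, *_ = np.linalg.lstsq(B, phase, rcond=None)
print(np.round(coeffs_fit, 2))   # low modes capture most of the aberration
```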
A Model for Calculated Privacy and Trust in pHealth Ecosystems.
Ruotsalainen, Pekka; Blobel, Bernd
2018-01-01
A pHealth ecosystem is a community of service users and providers. It is also a dynamic socio-technical system. One of its main goals is to help users maintain their personal health status. Another goal is to give economic benefit to stakeholders which use personal health information existing in the ecosystem. In pHealth ecosystems, a huge amount of health-related data is collected and used by service providers, such as data extracted from the regulated health record and information related to personal characteristics, genetics, lifestyle, and environment. In pHealth ecosystems, there are different kinds of service providers, such as regulated health care service providers, unregulated health service providers, ICT service providers, researchers, and industrial organizations. This fact, together with the multidimensional personal health data used, raises serious privacy concerns. Privacy is a necessary enabler for successful pHealth, but it is also an elastic concept without any universally agreed definition. Regardless of what kind of privacy model is used in dynamic socio-technical systems, it is difficult for a service user to know the privacy level of services in real-life situations. As privacy and trust are interrelated concepts, the authors have developed a hybrid solution in which knowledge gained from regulatory privacy requirements and publicly available privacy-related documents is used to calculate a provider-specific initial privacy value. This value is then used as an estimate for the initial trust score. In this solution, the total trust score is a combination of recommended trust, proposed trust, and initial trust. The initial privacy level is a weighted arithmetic mean of the knowledge items and user-selected weights. The total trust score for any service provider in the ecosystem can be calculated using either a beta trust model or a fuzzy trust calculation method. The proposed solution is easy to use and understand, and it can also be automated. It is possible to develop a computer application that calculates a situation-specific trust score and to make it freely available on the Internet.
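A hedged sketch of the scoring scheme outlined above: an initial privacy value computed as a weighted arithmetic mean of document-derived knowledge items, reused as the initial trust score and updated with a beta-style trust model. The feature names, weights, and pseudo-count prior weight are illustrative.

```python
# Hedged sketch: weighted-mean initial privacy/trust score plus a
# beta-model trust update with the initial score as an informative prior.
import numpy as np

# Knowledge items scored in [0, 1] from public privacy documents (assumed)
knowledge = {"policy_published": 1.0, "gdpr_compliant": 0.8,
             "third_party_sharing": 0.3, "breach_history": 0.6}
weights = {"policy_published": 2, "gdpr_compliant": 3,
           "third_party_sharing": 3, "breach_history": 2}  # user-selected

k = np.array([knowledge[f] for f in knowledge])
w = np.array([weights[f] for f in knowledge])
initial_trust = float(np.average(k, weights=w))  # weighted arithmetic mean

def beta_trust(r, s, prior=initial_trust, weight=10.0):
    """Beta-model update: r good / s bad experiences, prior as pseudo-counts."""
    return (r + weight * prior) / (r + s + weight)

print("initial trust:", round(initial_trust, 3))
print("after 8 good / 2 bad interactions:", round(beta_trust(8, 2), 3))
```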
Automatic picker of P & S first arrivals and robust event locator
NASA Astrophysics Data System (ADS)
Pinsky, V.; Polozov, A.; Hofstetter, A.
2003-12-01
We report on further development of an automatic all-distances location procedure designed for a regional network. The procedure generalizes the previous "local" (R < 500 km) and "regional" (500 < R < 2000 km) routines and comprises: a) preliminary data processing (filtering and de-spiking), b) phase identification, c) P, S first arrival picking, d) preliminary location and e) a robust grid-search optimization procedure. Innovations concern phase identification, automatic picking and teleseismic location. A platform-free, flexible Java interface was recently created, allowing easy parameter tuning and on/off switching to the full-scale manual picking mode. Identification of the regional P and S phases is provided by choosing between the two largest peaks in the envelope curve. For automatic on-time estimation we now utilize the ratio of two STAs, calculated in two consecutive and equal time windows (instead of the previously used Akaike Information Criterion). "Teleseismic" location is split into two stages: a preliminary and a final one. The preliminary stage estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent velocity criterion is used to decide on the strategy for the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to get a first-approximation distance to the source on the basis of the P, Pn, Pg travel-time tables. The distance estimate together with the preliminary azimuth estimate provides first approximations of the source latitude and longitude via the sine and cosine theorems formulated for the spherical triangle. Final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter. The target function is a sum of bell-shaped characteristic functions, used to emphasize true pickings and eliminate outliers. The final solution is the grid point that maximizes the target function. The procedure was applied to a list of ML > 4 earthquakes recorded by the Israel Seismic Network (ISN) in the 1999-2002 time period. Most of them are badly constrained relative to the network. Nevertheless, location results with an average normalized error relative to bulletin solutions of e = dr/R = 5% were obtained in each of the distance ranges. The first version of the procedure was incorporated in the national Early Warning System in 2001. Recently, we started to send automatic Early Warning reports to the EMSC Real Time Bulletin. Some initially reported teleseismic location discrepancies have been eliminated by the introduction of station corrections.
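Two of the ingredients described above can be rendered schematically in Python; the window length, velocity model, and bell width below are placeholders, not values from the paper.

```python
# Schematic sketch of the two-window STA ratio picker and the robust
# bell-shaped grid-search target function; parameters are placeholders.
import numpy as np

def sta_ratio(x, i, n=100):
    """Ratio of short-term averages in two consecutive, equal windows
    around sample i, used for automatic on-time estimation."""
    return np.mean(np.abs(x[i:i + n])) / (np.mean(np.abs(x[i - n:i])) + 1e-12)

def grid_score(grid_xy, stations_xy, picks, t0, v=6.0, width=1.0):
    """Robust target function: a sum of bell-shaped characteristic
    functions of travel-time residuals, so outlier picks add ~0."""
    score = np.zeros(len(grid_xy))
    for (sx, sy), t_pick in zip(stations_xy, picks):
        dist = np.hypot(grid_xy[:, 0] - sx, grid_xy[:, 1] - sy)
        resid = t_pick - (t0 + dist / v)
        score += 1.0 / (1.0 + (resid / width) ** 2)
    return score  # the location estimate is the grid point maximizing this
```

Because each pick contributes at most 1 to the score, a handful of bad pickings cannot pull the maximum away from a solution supported by many consistent ones.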
A science-based, watershed strategy to support effective remediation of abandoned mine lands
Buxton, Herbert T.; Nimick, David A.; Von Guerard, Paul; Church, Stan E.; Frazier, Ann G.; Gray, John R.; Lipin, Bruce R.; Marsh, Sherman P.; Woodward, Daniel F.; Kimball, Briant A.; Finger, Susan E.; Ischinger, Lee S.; Fordham, John C.; Power, Martha S.; Bunch, Christine M.; Jones, John W.
1997-01-01
A U.S. Geological Survey Abandoned Mine Lands Initiative will develop a strategy for gathering and communicating the scientific information needed to formulate effective and cost-efficient remediation of abandoned mine lands. A watershed approach will identify, characterize, and remediate contaminated sites that have the most profound effect on water and ecosystem quality within a watershed. The Initiative will be conducted during 1997 through 2001 in two pilot watersheds, the Upper Animas River watershed in Colorado and the Boulder River watershed in Montana. Initiative efforts are being coordinated with the U.S. Forest Service, Bureau of Land Management, National Park Service, and other stakeholders that are using the resulting scientific information to design and implement remediation activities. The Initiative has the following eight objective-oriented components: estimate background (pre-mining) conditions; define baseline (current) conditions; identify target sites (major contaminant sources); characterize target sites and processes affecting contaminant dispersal; characterize ecosystem health and controlling processes at target sites; develop remediation goals and monitoring network; provide an integrated, quality-assured and accessible data network; and document lessons learned for future applications of the watershed approach.
Vehicle speed affects both pre-skid braking kinematics and average tire/roadway friction.
Heinrichs, Bradley E; Allin, Boyd D; Bowler, James J; Siegmund, Gunter P
2004-09-01
Vehicles decelerate between brake application and skid onset. To better estimate a vehicle's speed and position at brake application, we investigated how vehicle deceleration varied with initial speed during both the pre-skid and skidding intervals on dry asphalt. Skid-to-stop tests were performed from four initial speeds (20, 40, 60, and 80 km/h) using three different grades of tire (economy, touring, and performance) on a single vehicle and a single road surface. Average skidding friction was found to vary with initial speed and tire type. The post-brake/pre-skid speed loss, elapsed time, distance travelled, and effective friction were found to vary with initial speed. Based on these data, a method using skid mark length to predict vehicle speed and position at brake application rather than skid onset was shown to improve estimates of initial vehicle speed by up to 10 km/h and estimates of vehicle position at brake application by up to 8 m compared to conventional methods that ignore the post-brake/pre-skid interval. Copyright 2003 Elsevier Ltd.
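As a worked example of the arithmetic involved (all values invented; the paper's tabulated corrections for tire grade and initial speed would be used in practice), the conventional skid-only estimate v = sqrt(2*mu*g*d) can be augmented with an assumed post-brake/pre-skid speed loss:

```python
# Worked example with invented inputs; mu and the pre-skid loss would
# come from the study's tables for the tire grade and initial speed.
import math

g = 9.81            # gravitational acceleration, m/s^2
mu = 0.75           # assumed average skidding friction, dry asphalt
d_skid = 28.0       # measured skid mark length, m

v_skid_onset = math.sqrt(2 * mu * g * d_skid) * 3.6   # km/h at skid onset
pre_skid_loss = 6.0                                   # assumed km/h lost pre-skid
v_brake = v_skid_onset + pre_skid_loss                # km/h at brake application
print(f"skid onset {v_skid_onset:.1f} km/h, brake {v_brake:.1f} km/h")
```

Ignoring the pre-skid term systematically underestimates the speed at brake application, which is the bias the paper's method corrects.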
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (K_p,f) and convective mass transfer (h) coefficients, govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sums of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches such as sequential and bootstrap were also performed to acquire better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
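The two-step strategy can be sketched in Python as below; note that the model function is a deliberately simplified stand-in (exponential approach to a partition-controlled plateau), not Crank's boundary-condition solution, and all parameter values are invented.

```python
# Two-step estimation sketch: coarse grid search for initial guesses,
# then OLS refinement. The model is a simplified stand-in, not Crank's
# series solution; all parameter values are invented.
import numpy as np
from scipy.optimize import least_squares

def migration_model(t, D, K, h, L=1e-4, c0=1.0):
    """Release kinetics limited by diffusion (L^2/D) and convection (L/h),
    approaching a partition-controlled plateau."""
    rate = 1.0 / (L**2 / D + L / h)
    return c0 / (1.0 + 1.0 / K) * (1.0 - np.exp(-rate * t))

t = np.linspace(0.0, 3e5, 30)                     # sampling times, s
rng = np.random.default_rng(1)
data = migration_model(t, 1e-13, 50.0, 1e-7) + rng.normal(0, 0.005, t.size)

# Step 1: coarse grid search for the SSE-minimizing (D, K, h) triple.
grid = [(D, K, h) for D in (1e-14, 1e-13, 1e-12)
                  for K in (5.0, 50.0, 500.0)
                  for h in (1e-8, 1e-7, 1e-6)]
x0 = min(grid, key=lambda p: np.sum((data - migration_model(t, *p))**2))

# Step 2: ordinary least-squares refinement from the grid estimate.
fit = least_squares(lambda p: data - migration_model(t, *p), x0=x0,
                    bounds=([1e-16, 1e-2, 1e-10], [1e-10, 1e4, 1e-4]))
D_hat, K_hat, h_hat = fit.x
```

The grid stage matters because D and h are strongly correlated: starting the OLS fit from an arbitrary point can converge to a local minimum that over- or underestimates both.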
Estimate of incidence and cost of recreational waterborne illness on United States surface waters.
DeFlorio-Barker, Stephanie; Wing, Coady; Jones, Rachael M; Dorevitch, Samuel
2018-01-09
Activities such as swimming, paddling, motor-boating, and fishing are relatively common on US surface waters. Water recreators have a higher rate of acute gastrointestinal illness, along with other illnesses including respiratory, ear, eye, and skin symptoms, compared to non-water recreators. The quantity and costs of such illnesses are unknown on a national scale. Recreational waterborne illness incidence and severity were estimated using data from prospective cohort studies of water recreation, reports of recreational waterborne disease outbreaks, and national water recreation statistics. Costs associated with medication use, healthcare provider visits, emergency department (ED) visits, hospitalizations, lost productivity, long-term sequelae, and mortality were aggregated. An estimated 4 billion surface water recreation events occur annually, resulting in an estimated 90 million illnesses nationwide and costs of $2.2-$3.7 billion annually (central 90% of values). Illnesses of moderate severity (visit to a health care provider or ED) were responsible for over 65% of the economic burden (central 90% of values: $1.4-$2.4 billion); severe illnesses (result in hospitalization or death) were responsible for approximately 8% of the total economic burden (central 90% of values: $108-$614 million). Recreational waterborne illnesses are associated with a substantial economic burden. These findings may be useful in cost-benefit analysis for water quality improvement and other risk reduction initiatives.
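The aggregation logic is essentially illness counts by severity multiplied by per-case costs. A back-of-envelope Python reconstruction follows; only the 4 billion events and 90 million illnesses come from the abstract, while the severity shares and unit costs are invented for illustration.

```python
# Back-of-envelope reconstruction of the cost aggregation; severity
# shares and per-case costs are assumptions, not the paper's inputs.
events = 4e9
total_cases = 90e6
shares = {"mild": 0.900, "moderate": 0.095, "severe": 0.005}
cost_per_case = {"mild": 10.0, "moderate": 230.0, "severe": 1300.0}  # USD

risk_per_event = total_cases / events
total_cost = sum(total_cases * shares[s] * cost_per_case[s] for s in shares)
print(f"risk {risk_per_event:.4f}/event, burden ${total_cost / 1e9:.1f}B/yr")
```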
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Schmidt, Rita; Webb, Andrew
2016-01-01
Electrical Properties Tomography (EPT) using MRI is a technique that has been developed to provide a new contrast mechanism for in vivo imaging. Currently, the most common method relies on the solution of the homogeneous Helmholtz equation, which has limitations in accurate estimation at tissue interfaces. A new method proposed in this work combines a Maxwell integral-equation representation of the problem with the use of high permittivity materials (HPM) to control the RF field, in order to reconstruct the electrical properties image. The magnetic field is represented by an integral equation considering each point as a contrast source. This equation can be solved as an inverse problem. In this study we use a reference simulation or scout scan of a uniform phantom to provide an initial estimate for the inverse solution, which allows the estimation of the complex permittivity within a single iteration. Incorporating two setups with and without the HPM improves the reconstructed result, especially with respect to the very low electric field in the center of the sample. Electromagnetic simulations of the brain were performed at 3T to generate the B1(+) field maps and reconstruct the electric properties images. The standard deviations of the relative permittivity and conductivity were within 14% and 18%, respectively, for a volume consisting of white matter, gray matter and cerebellum. Copyright © 2015 Elsevier Inc. All rights reserved.
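For contrast, the conventional homogeneous-Helmholtz baseline that this method improves on can be sketched in a few lines of Python; the B1+ map below is synthetic, and the sign of the conductivity term depends on the assumed time-harmonic convention.

```python
# Sketch of the homogeneous-Helmholtz EPT baseline:
#   eps_c = -lap(B1+) / (mu0 * omega^2 * B1+)
# The B1+ map is synthetic; exp(+i*omega*t) convention assumed.
import numpy as np

mu0 = 4e-7 * np.pi
omega = 2 * np.pi * 128e6        # ~3 T proton Larmor frequency, rad/s
dx = 2e-3                        # voxel size, m

x = np.arange(64) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
b1 = np.exp(1j * 2e4 * (X**2 + Y**2))     # toy complex B1+ map

lap = (np.roll(b1, 1, 0) + np.roll(b1, -1, 0) +
       np.roll(b1, 1, 1) + np.roll(b1, -1, 1) - 4 * b1) / dx**2
eps_c = -lap / (mu0 * omega**2 * b1)      # complex permittivity estimate
sigma = -omega * eps_c.imag               # conductivity map, S/m
```

The discrete Laplacian is what makes this baseline fragile at tissue boundaries: wherever the homogeneity assumption fails, the pointwise division produces the interface artifacts the integral-equation method avoids.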
A theoretical framework to predict the most likely ion path in particle imaging.
Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao
2017-03-07
In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The path estimate's precision is compared to the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios, (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.
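A common closed-form stand-in for a most-likely-path estimate is a cubic Hermite spline through the measured entry and exit positions and directions; the Python sketch below uses invented tracker values and is a simplification, not the paper's Bayesian formalism.

```python
# Cubic Hermite spline through measured entry/exit positions and
# directions: a closed-form stand-in for the most likely path, with
# invented tracker values (not the paper's Bayesian formalism).
import numpy as np

def hermite_path(p0, d0, p1, d1, depth, n=100):
    """Lateral position along the beam axis from entry/exit positions
    (p0, p1) and direction tangents (d0, d1) over a medium of given depth."""
    t = np.linspace(0.0, 1.0, n)
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    y = h00 * p0 + h10 * depth * d0 + h01 * p1 + h11 * depth * d1
    return t * depth, y

depth_mm, lateral_mm = hermite_path(p0=0.0, d0=0.02, p1=1.5, d1=-0.01,
                                    depth=200.0)
```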
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
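The intuition for why combining sources tightens the estimate can be shown with a toy precision-weighted fusion of the two single-source estimates quoted above; the actual analysis is a joint spatial capture-recapture model, not this simple rule.

```python
# Toy precision-weighted fusion of the two single-source estimates
# (means and SDs from the abstract); the paper's joint model is far
# richer than this illustrative rule.
photo_mean, photo_sd = 12.02, 3.02   # photographic data alone
fecal_mean, fecal_sd = 6.65, 2.37    # fecal DNA data alone

w_p, w_f = 1 / photo_sd**2, 1 / fecal_sd**2
fused_mean = (w_p * photo_mean + w_f * fecal_mean) / (w_p + w_f)
fused_sd = (w_p + w_f) ** -0.5
print(f"fused: {fused_mean:.2f} +/- {fused_sd:.2f} tigers/100 km^2")
# -> ~8.7 +/- 1.9, close to the combined-model 8.5 +/- 1.95 reported above
```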
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success, and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
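The kind of comparison such a framework automates can be illustrated with a toy Python calculation over invented production strategies and volumes; a real tool would also discount cash flows and model lead-time and capacity.

```python
# Toy ranking of invented assembly strategies by total cost over the
# expected production volume; all figures are illustrative.
strategies = {
    "manual":    {"capital": 50_000,    "unit_cost": 14.0},
    "semi-auto": {"capital": 400_000,   "unit_cost": 6.5},
    "full-auto": {"capital": 1_800_000, "unit_cost": 2.1},
}
volume = 120_000  # expected production volume from the schedule

def total_cost(s, volume):
    """Capital outlay plus operating cost over the production run."""
    return s["capital"] + s["unit_cost"] * volume

best = min(strategies, key=lambda k: total_cost(strategies[k], volume))
print(best, {k: total_cost(v, volume) for k, v in strategies.items()})
```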
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and inaccurate estimation of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with the estimated values updated iteratively; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
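The Wiener-process ingredient can be sketched simply: degradation X(t) = mu*t + sigma*W(t), with failure when X first crosses a threshold D, so the mean lifetime is D/mu. The Python sketch below uses invented values and does not reproduce the paper's per-stress-level Bayesian prior/posterior updating.

```python
# Simplified Wiener-process degradation model; all values invented.
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, D = 0.8, 0.5, 40.0     # drift, diffusion, failure threshold

def first_passage_times(n_paths=2000, dt=0.05, t_max=150.0):
    """Simulate degradation paths and record first threshold crossings."""
    t = np.arange(dt, t_max + dt, dt)
    times = []
    for _ in range(n_paths):
        x = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(t.size))
        idx = np.argmax(x >= D)
        if x[idx] >= D:               # argmax returns 0 if never crossed
            times.append(t[idx])
    return np.array(times)

lifetimes = first_passage_times()
print(f"simulated MTTF {lifetimes.mean():.1f} vs analytic D/mu = {D/mu:.1f}")
```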
NASA Astrophysics Data System (ADS)
Lin, Y.; Bajcsy, P.; Valocchi, A. J.; Kim, C.; Wang, J.
2007-12-01
Natural systems are complex, thus extensive data are needed for their characterization. However, data acquisition is expensive; consequently we develop models using sparse, uncertain information. When all uncertainties in the system are considered, the number of alternative conceptual models is large. Traditionally, the development of a conceptual model has relied on subjective professional judgment. Good judgment is based on experience in coordinating and understanding auxiliary information which is correlated to the model but difficult to quantify in the mathematical model. For example, groundwater recharge and discharge (R&D) processes are known to relate to multiple information sources such as soil type, river and lake location, irrigation patterns and land use. Although hydrologists have been trying to understand and model the interaction between each of these information sources and R&D processes, it is extremely difficult to quantify their correlations using a universal approach due to the complexity of the processes, the spatiotemporal distribution and uncertainty. There is currently no single method capable of estimating R&D rates and patterns for all practical applications. Chamberlin (1890) recommended the use of "multiple working hypotheses" (alternative conceptual models) for rapid advancement in understanding of applied and theoretical problems. Therefore, cross-analyzing R&D rates and patterns from various estimation methods and related field information will likely be superior to using only a single estimation method. We have developed the Pattern Recognition Utility (PRU) to help GIS users recognize spatial patterns from noisy 2D images. This GIS plug-in utility has been applied to help hydrogeologists establish alternative R&D conceptual models in a more efficient way than conventional methods. The PRU uses numerical methods and image processing algorithms to estimate and visualize shallow R&D patterns and rates. It can provide a fast initial estimate prior to planning labor-intensive and time-consuming field R&D measurements. Furthermore, the Spatial Pattern 2 Learn (SP2L) tool was developed to cross-analyze results from the PRU with ancillary field information, such as land coverage, soil type, topographic maps and previous estimates. The learning process of SP2L cross-examines each initially recognized R&D pattern with the ancillary spatial dataset, and then calculates a quantifiable reliability index for each R&D map using the decision tree, a supervised machine learning technique. This Java-based software package is capable of generating alternative R&D maps if the user decides to apply certain conditions recognized by the learning process. The reliability indices from SP2L will improve the traditionally subjective approach to initiating conceptual models by providing objectively quantifiable conceptual bases for further probabilistic and uncertainty analyses. Both the PRU and SP2L have been designed to be user-friendly and universal utilities for pattern recognition and learning to improve model predictions from sparse measurements by computer-assisted integration of spatially dense geospatial image data and machine learning of model dependencies.
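The reliability-index idea can be sketched with scikit-learn on synthetic data; only the use of a decision tree to cross-examine a recognized R&D map against ancillary layers follows the abstract, while all arrays, features, and the noise level below are invented.

```python
# Synthetic sketch of the cross-analysis: a decision tree predicts the
# initially recognized R&D class from ancillary layers, and its
# cross-validated accuracy serves as a crude reliability index.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 5000
land_cover = rng.integers(0, 4, n)      # categorical ancillary layers
soil_type = rng.integers(0, 3, n)
slope = rng.random(n)

# Invented "recognized" pattern: recharge (1) vs discharge (0), loosely
# tied to the ancillary data, with 10% label noise.
rd_class = ((slope < 0.4) & (soil_type != 2)).astype(int)
rd_class ^= (rng.random(n) < 0.1).astype(int)

X = np.column_stack([land_cover, soil_type, slope])
reliability_index = cross_val_score(
    DecisionTreeClassifier(max_depth=4), X, rd_class, cv=5).mean()
print(f"reliability index for this R&D map: {reliability_index:.2f}")
```

An R&D map that the ancillary data can "explain" well earns a high index, while a map weakly supported by the field information is flagged for further scrutiny.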
ERIC Educational Resources Information Center
Yu, Li
2017-01-01
The unemployment problem of college students in China has drawn much attention from academics and society. Using the 2011 College Student Labor Market (CSLM) survey data from Tsinghua University, this paper estimated the effects of college quality on initial employment, including employment status and employment unit ownership for fresh college…