A New Method for Generating Probability Tables in the Unresolved Resonance Region
Holcomb, Andrew M.; Leal, Luiz C.; Rahnema, Farzad; ...
2017-04-18
A new method for constructing probability tables in the unresolved resonance region (URR) has been developed. This methodology is an extensive modification of the single-level Breit-Wigner (SLBW) pseudo-resonance pair sequence method commonly used to generate probability tables in the URR. The new method uses a Monte Carlo process to generate many pseudo-resonance sequences by first sampling the average resonance parameter data in the URR and then converting the sampled resonance parameters to the more robust R-matrix limited (RML) format. Furthermore, for each sampled set of pseudo-resonance sequences, the temperature-dependent cross sections are reconstructed on a small grid around the energy of reference using the Reich-Moore formalism and the Leal-Hwang Doppler broadening methodology. We then use the effective cross sections calculated at the energies of reference to construct probability tables in the URR. The RML cross-section reconstruction algorithm has been rigorously tested for a variety of isotopes, including 16O, 19F, 35Cl, 56Fe, 63Cu, and 65Cu. The new URR method also produced normalized cross-section factor probability tables for 238U that were found to be in agreement with current standards. The modified 238U probability tables were shown to produce results in excellent agreement with several standard benchmarks, including the IEU-MET-FAST-007 (BIG TEN), IEU-MET-FAST-003, and IEU-COMP-FAST-004 benchmarks.
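The final step, turning sampled effective cross sections at a reference energy into a probability table, can be sketched in a few lines. This is not the authors' RML/Reich-Moore reconstruction; the lognormal draws below are a hypothetical stand-in for the Monte Carlo cross-section samples:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for effective cross sections sampled at one reference energy,
# one draw per pseudo-resonance sequence realization (hypothetical values).
xs_samples = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)

def probability_table(samples, n_bands=20):
    """Bin sampled cross sections into near-equiprobable bands and return
    (band probability, band-averaged cross section) pairs."""
    bands = np.array_split(np.sort(samples), n_bands)
    probs = np.array([len(b) / len(samples) for b in bands])
    means = np.array([b.mean() for b in bands])
    return probs, means

probs, band_xs = probability_table(xs_samples)
```

Because the bands partition the samples, the band probabilities sum to one and the probability-weighted band averages reproduce the sample mean, which is the basic consistency requirement for a probability table.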
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2009-07-21
Parameter used in determining probability of hit (Phit) by debris. [Table 31, Table 32, Table 33, Eq. (157), Eq. (158)] CCa Variable "Actual...being in the glass hazard area". [Eq. (60), Eq. (78)] Phit Variable "Probability of hit". An array value indexed by consequence and mass bin...Eq. (156), Eq. (157)] Phit (f) Variable "Probability of hit for fatality". [Eq. (157), Eq. (158)] Phit (maji) Variable "Probability of hit for major
ERIC Educational Resources Information Center
Satake, Eiki; Vashlishan Murray, Amy
2015-01-01
This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculation of conditional probabilities. Students are typically introduced to the topic of conditional probabilities, especially those involving Bayes' rule, with…
Wavelength assignment algorithm considering the state of neighborhood links for OBS networks
NASA Astrophysics Data System (ADS)
Tanaka, Yu; Hirota, Yusuke; Tode, Hideki; Murakami, Koso
2005-10-01
Recently, optical WDM technology has been introduced into backbone networks, and Optical Burst Switching (OBS) systems have become a realistic candidate for the future optical switching scheme. OBS systems do not buffer bursts in intermediate nodes, so avoiding overlapping wavelength reservations between partially interfering paths is an important issue. To solve this problem, a wavelength assignment scheme with priority management tables has been proposed; it reduces the burst blocking probability, but its priority management tables require a huge amount of memory. In this paper, we propose a wavelength assignment algorithm that reduces both the number of priority management tables and the burst blocking probability. To reduce the number of priority management tables, we allocate and manage one table per link. To reduce the burst blocking probability, our method announces priority changes to intermediate nodes. We evaluate its performance in terms of the burst blocking probability and the reduction rate of priority management tables.
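The per-link priority table idea can be sketched minimally. Everything below is hypothetical (the paper's actual priority-update and announcement mechanism is more involved): each link keeps one priority array over its wavelengths, and an arriving burst takes the free wavelength with the highest priority:

```python
N_WAVELENGTHS = 8

# One priority table per link rather than per path; these initial
# priorities are invented for illustration (higher value = preferred).
priority = {"link-A": [3, 7, 1, 5, 0, 6, 2, 4]}

def assign_wavelength(link, is_free):
    """Pick the free wavelength with the highest priority on this link;
    return None if every wavelength is busy (the burst is blocked)."""
    candidates = [w for w in range(N_WAVELENGTHS) if is_free[w]]
    if not candidates:
        return None
    return max(candidates, key=lambda w: priority[link][w])

chosen = assign_wavelength("link-A", [True] * N_WAVELENGTHS)
```

Storing one small array per link, instead of a table per source-destination path, is what keeps the memory footprint low.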
Du, Yuanwei; Guo, Yubin
2015-01-01
The intrinsic mechanism of multimorbidity is difficult to recognize, so prediction and diagnosis are correspondingly difficult. Bayesian networks can help diagnose multimorbidity in health care, but the conditional probability table (CPT) is hard to obtain because clinical statistical data are lacking. Expert knowledge and experience are therefore increasingly used to train Bayesian networks for disease prediction and diagnosis, but the resulting CPTs are often irrational or ineffective because they ignore realistic constraints, especially in multimorbidity. To address these problems, an evidential reasoning (ER) approach is employed to extract and fuse inference data from experts using belief distributions and the recursive ER algorithm, and on this basis a step-by-step method for constructing CPTs in a Bayesian network of multimorbidity is presented. A numerical multimorbidity example demonstrates the method and its feasibility. The Bayesian network can be determined as long as each expert provides an inference assessment according to his or her knowledge or experience. Our method is more effective than existing methods at extracting expert inference data accurately and fusing them to construct CPTs in a Bayesian network of multimorbidity.
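As a much-simplified stand-in for the recursive ER algorithm, the sketch below fuses expert belief distributions into one CPT column by reliability-weighted averaging. The expert beliefs and weights are invented for illustration:

```python
import numpy as np

# Each row: one expert's belief distribution over the child node's three
# states, for a single fixed parent configuration (hypothetical values).
expert_beliefs = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.80, 0.15, 0.05],
])
expert_weights = np.array([0.5, 0.3, 0.2])  # assumed relative reliabilities

def fuse_cpt_column(beliefs, weights):
    """Fuse expert beliefs into one normalized CPT column (a weighted
    average, not the full recursive ER combination rule)."""
    fused = weights @ beliefs
    return fused / fused.sum()

cpt_column = fuse_cpt_column(expert_beliefs, expert_weights)
```

Repeating this for every parent configuration fills out the CPT; the normalization guarantees each column remains a valid probability distribution.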
Probability: A Matter of Life and Death
ERIC Educational Resources Information Center
Hassani, Mehdi; Kippen, Rebecca; Mills, Terence
2016-01-01
Life tables are mathematical tables that document probabilities of dying and life expectancies at different ages in a society. Thus, the life table contains some essential features of the health of a population. Probability is often regarded as a difficult branch of mathematics. Life tables provide an interesting approach to introducing concepts…
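A compact sketch of the mechanics, with invented one-year death probabilities: survivors l_x fall according to q_x, and summing person-years gives a (truncated) life expectancy:

```python
# Hypothetical one-year death probabilities q_x for ages 0-4 of a toy cohort.
qx = [0.010, 0.002, 0.001, 0.001, 0.002]

def life_table(qx, radix=100_000):
    """Build survivors l_x and a truncated life expectancy from q_x."""
    lx = [float(radix)]                     # survivors at exact age x
    for q in qx:
        lx.append(lx[-1] * (1.0 - q))       # survive the interval with prob 1-q
    # Person-years lived in each interval, assuming deaths at mid-interval.
    person_years = [(lx[i] + lx[i + 1]) / 2.0 for i in range(len(qx))]
    e_trunc = sum(person_years) / lx[0]     # expectancy over the table's span
    return lx, e_trunc

lx, e_trunc = life_table(qx)
```

With survival this high, the truncated expectancy is just under the full five-year span, which is the sanity check one expects of a life table.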
[Comments on the use of the "life-table method" in orthopedics].
Hassenpflug, J; Hahne, H J; Hedderich, J
1992-01-01
In the description of long-term results, e.g. of joint replacements, survivorship analysis is used increasingly in orthopaedic surgery. Survivorship analysis describes the frequency of failure more informatively than global percentage statements. The relative probability of failure in each fixed interval is computed from the number of patients under observation and the number of failures. The complementary probabilities of success are multiplied in temporal sequence, giving the probability of survival at a fixed endpoint. A necessary condition for the use of this procedure is an exact definition of the moment and manner of failure. It is described how to construct survivorship tables.
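The interval-by-interval multiplication described above is short to write down. The cohort numbers below are invented; the half-withdrawal correction is the standard actuarial convention:

```python
# Hypothetical yearly follow-up of a joint-replacement cohort:
# (patients under observation at interval start, failures, withdrawals).
intervals = [(100, 2, 5), (93, 3, 8), (82, 1, 10), (71, 2, 12)]

def survivorship(intervals):
    """Actuarial life-table method: interval failure probabilities are
    computed from the number effectively at risk, and the complementary
    success probabilities are multiplied in temporal sequence."""
    curve, s = [], 1.0
    for n, failures, withdrawn in intervals:
        at_risk = n - withdrawn / 2.0     # half-withdrawal correction
        q = failures / at_risk            # interval failure probability
        s *= 1.0 - q                      # cumulative survival probability
        curve.append(s)
    return curve

curve = survivorship(intervals)
```

Each entry of `curve` is the estimated probability that a prosthesis survives to the end of that interval.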
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
glass. Pgha Probability of a person being in the glass hazard area Phit Probability of hit Phit (f) Probability of hit for fatality Phit (maji...Probability of hit for major injury Phit (mini) Probability of hit for minor injury Pi Debris probability densities at the ES PMaj (pair) Individual...combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit (f), major injury
ERIC Educational Resources Information Center
Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.
2016-01-01
Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…
Characteristics of Tables for Disseminating Biobehavioral Results.
Schneider, Barbara St Pierre; Nagelhout, Ed; Feng, Du
2018-01-01
To report the complexity and richness of study variables within biological nursing research, authors often use tables; however, the ease with which consumers understand, synthesize, evaluate, and build upon findings depends partly upon table design. The aim was to assess and compare table characteristics within research and review articles published in Biological Research for Nursing and Nursing Research. A total of 10 elements in tables from 48 biobehavioral or biological research or review articles were analyzed. To test six hypotheses, a two-level hierarchical linear model was used for each of the continuous table elements, and a two-level hierarchical generalized linear model was used for each of the categorical table elements. Additionally, the inclusion of probability values in statistical tables was examined. The mean number of tables per article was three. Tables in research articles were more likely to contain quantitative content, while tables in review articles were more likely to contain both quantitative and qualitative content. Tables in research articles had a greater number of rows, columns, and column-heading levels than tables in review articles. More than one half of statistical tables in research articles had a separate probability column or probability values within the table, whereas approximately one fourth had probability notes. Authors and journal editorial staff may be generating tables that better depict biobehavioral content than those identified in specific style guidelines. However, authors and journal editorial staff may want to consider table design in terms of audience, including alternative visual displays.
NASA Astrophysics Data System (ADS)
Jonauskas, V.; Gaigalas, G.; Kučas, S.
2018-01-01
In the original paper [1], some minor misprints occurred in Table 3 for wavelengths of the W32+ and W34+ ions. Furthermore, from the FAC calculations, emission probabilities were presented instead of absorption probabilities (Table 3). The wavelengths, transition probabilities, and oscillator strengths of magnetic dipole transitions were misprinted for W31+, W32+, W33+, and W34+ in Table 4.
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty and Makabe-Morimura approximations are extremely poor compared with this approximation.
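The flavor of these normal-based approximations can be shown with standard-library code. Below are the plain continuity-corrected normal approximation and a Wilson-Hilferty cube-root form via the chi-square identity P(X ≤ x) = P(χ² with 2(x+1) d.f. > 2λ); Kao's transformation and the paper's power transformation are not reproduced here, and the paper's accuracy ranking concerns its particular variants and parameter ranges:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_cdf_exact(x, lam):
    """Exact P(X <= x) by summing the Poisson pmf (no factorials needed)."""
    term = math.exp(-lam)
    total = term
    for k in range(1, x + 1):
        term *= lam / k
        total += term
    return total

def poisson_cdf_normal(x, lam):
    """Normal approximation with continuity correction."""
    return phi((x + 0.5 - lam) / math.sqrt(lam))

def poisson_cdf_wilson_hilferty(x, lam):
    """Wilson-Hilferty cube-root approximation via the chi-square link."""
    nu = 2.0 * (x + 1)
    c = (2.0 * lam / nu) ** (1.0 / 3.0)
    mu = 1.0 - 2.0 / (9.0 * nu)
    sd = math.sqrt(2.0 / (9.0 * nu))
    return 1.0 - phi((c - mu) / sd)

exact = poisson_cdf_exact(12, 10.0)
```

At λ = 10 and x = 12 the exact CDF is about 0.792, and both approximations land within a percent of it here.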
40 CFR 280.40 - General requirements for all UST systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...
Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation
ERIC Educational Resources Information Center
Ayre, Colin; Scally, Andrew John
2014-01-01
The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
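The exact-binomial calculation behind such critical-value tables is short: under the null hypothesis each of the N panelists rates an item "essential" with probability 0.5, and the critical CVR corresponds to the smallest "essential" count whose upper tail probability falls below α:

```python
from math import comb

def cvr(n_essential, n_panel):
    """Lawshe's content validity ratio CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_panel / 2.0) / (n_panel / 2.0)

def critical_cvr(n_panel, alpha=0.05):
    """Smallest CVR whose 'essential' count is unlikely to arise by
    chance: one-tailed exact binomial test with p = 0.5."""
    for n_e in range(n_panel + 1):
        # P(X >= n_e) under Binomial(n_panel, 0.5)
        tail = sum(comb(n_panel, k) for k in range(n_e, n_panel + 1)) / 2 ** n_panel
        if tail <= alpha:
            return cvr(n_e, n_panel)
    return 1.0
```

For example, a panel of 8 needs at least 7 "essential" ratings (tail probability 9/256 ≈ 0.035), giving a critical CVR of 0.75.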
The Dependence Structure of Conditional Probabilities in a Contingency Table
ERIC Educational Resources Information Center
Joarder, Anwar H.; Al-Sabah, Walid S.
2002-01-01
Conditional probability and statistical independence can be better explained with contingency tables. In this note some special cases of 2 x 2 contingency tables are considered. In turn an interesting insight into statistical dependence as well as independence of events is obtained.
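The point generalizes to any 2 x 2 table of counts: the conditional and marginal probabilities fall straight out of row and column sums, and independence is exactly the statement that conditioning changes nothing. The counts below are invented:

```python
# Hypothetical 2x2 table of joint counts for events A and B.
table = [[30, 20],    # A:     (B, not B)
         [15, 35]]    # not A: (B, not B)

n = sum(map(sum, table))
p_a = sum(table[0]) / n                      # marginal P(A)
p_b = (table[0][0] + table[1][0]) / n        # marginal P(B)
p_b_given_a = table[0][0] / sum(table[0])    # conditional P(B | A)

# A and B are independent exactly when P(B | A) equals P(B).
independent = abs(p_b_given_a - p_b) < 1e-12
```

Here P(B | A) = 0.6 while P(B) = 0.45, so the events are dependent; replacing the counts with any table whose rows are proportional makes the two numbers coincide.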
Applicability of the Moyers' Probability Tables in Adolescents with Different Facial Biotypes
Carrillo, Jorge J. Pavani; Rubial, Maria C.; Albornoz, Cristina; Villalba, Silvina; Damiani, Patricia; de Cravero, Marta Rugani
2017-01-01
Introduction: The Moyers' probability tables are used in mixed dentition analysis to estimate the space required for the alignment of canines and premolars, by correlating the mesiodistal size of the lower incisors with the size of the permanent canines and premolars. Objective: This study evaluated the applicability of the Moyers' probability tables for predicting the mesiodistal space needed for the correct positioning of unerupted permanent canines and premolars in adolescents of the city of Córdoba, Argentina, with different facial biotypes. Materials and Methods: Models and teleradiographs of 478 adolescents of both genders, aged 10 to 15 years, were analyzed. The teleradiographs were measured manually to determine the facial biotype. The models were scanned with a calibrated scanner (HP 3670) and measured using Image Pro Plus 4.5 software. Results: The comparison between the Moyers' probability table and the table created at the National University of Córdoba (UNC) (at 95%, 75%, and 50%) shows that, in both tables, a larger mesiodistal width of the lower incisors corresponds to a larger space needed for the permanent canines and premolars, the space requirement being larger in the UNC table. On the other hand, when contrasting the values of mesiodistal space for permanent canines and premolars associated with each facial biotype, the discrepancies between groups were not statistically significant (P > 0.05). However, we found differences in the required space according to the mesiodistal width range of the lower incisors for each biotype: a) In the lower range, with a mesiodistal width of the lower incisors of less than 22 mm, the space required for permanent canines and premolars was smaller in patients with a dolichofacial biotype than in patients with mesofacial and brachyfacial biotypes, the latter two differing only slightly from each other. b) In the mid range, with a mesiodistal width of the lower incisors from 22 to 25 mm, the required alignment space is similar in the three facial biotypes. c) Finally, in the upper range, with a mesiodistal width of the lower incisors greater than 25 mm, the space required for dolichofacial biotypes tends to be larger than in mesofacial and brachyfacial biotypes. Conclusion: The Moyers' probability tables should be created to meet the needs of the population under study, without consideration of the patients' facial biotypes. PMID:28567145
Applications of Some Artificial Intelligence Methods to Satellite Soundings
NASA Technical Reports Server (NTRS)
Munteanu, M. J.; Jakubowicz, O.
1985-01-01
Hard clustering of temperature profiles and regression temperature retrievals were used, and the method was refined using the probabilities of membership of each pattern vector in each of the clusters derived with discriminant analysis. In hard clustering, the cluster with the maximum membership probability is taken as the correct cluster and the remaining probabilities are discarded. In fuzzy partitioned clustering these probabilities are kept, and the final retrieval is a weighted regression retrieval over several clusters. This method was applied to the clustering of brightness temperatures with the aim of predicting tropopause height. A further refinement is the division of temperature profiles into three major regions for classification purposes. The results are summarized in tables displaying total r.m.s. errors. An approach based on fuzzy logic, which is closely related to artificial intelligence methods, is recommended.
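The hard-versus-fuzzy distinction reduces to one line each. The per-cluster regression coefficients and membership probabilities below are invented for illustration:

```python
import numpy as np

# Hypothetical per-cluster linear retrievals (intercept, slope) and fuzzy
# membership probabilities for one brightness-temperature pattern vector.
cluster_coefs = np.array([[1.0, 0.5],
                          [0.8, 0.9],
                          [1.2, 0.1]])
memberships = np.array([0.6, 0.3, 0.1])
x = 2.0                                   # the predictor value

per_cluster = cluster_coefs[:, 0] + cluster_coefs[:, 1] * x

# Hard clustering keeps only the most probable cluster's retrieval ...
hard = per_cluster[memberships.argmax()]
# ... while fuzzy clustering weights every cluster's retrieval.
fuzzy = memberships @ per_cluster
```

The fuzzy retrieval blends information from clusters the hard rule would discard, which is the refinement the abstract describes.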
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed using the following approaches: a lookup table is established for the Gaussian probability matrix to avoid repetitive probability calculations over all pixels; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the lookup table structure is changed from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation, with the OTSU method deciding the threshold value automatically. Our algorithm has been tested on segmenting flames and faces in a set of real pictures, and the experimental results demonstrate its efficiency in segmentation precision and computational cost.
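The lookup-table trick exploits the fact that an 8-bit image has only 256 possible grey levels, so the Gaussian densities can be evaluated once per level rather than once per pixel. The mixture parameters below are invented, and this sketch omits the paper's blocking detection and OTSU steps:

```python
import numpy as np

# Hypothetical two-component mixture over 8-bit grey levels.
means = np.array([60.0, 180.0])
stds = np.array([15.0, 25.0])
weights = np.array([0.4, 0.6])

# 1-D lookup table: weighted component densities for every possible grey
# level, computed once instead of re-evaluating exp() for every pixel.
levels = np.arange(256, dtype=float)
z = (levels - means[:, None]) / stds[:, None]
lut = weights[:, None] * np.exp(-0.5 * z ** 2) / (stds[:, None] * np.sqrt(2.0 * np.pi))

# Per-pixel component scores become a pure table lookup.
image = np.random.default_rng(0).integers(0, 256, size=(64, 64))
labels = lut[:, image].argmax(axis=0)     # hard assignment per pixel
```

For a mixture with K components the table has only K x 256 entries, so the cost of classifying an image no longer grows with the number of exponential evaluations.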
Streamflow characteristics of streams in southeastern Afghanistan
Vining, Kevin C.
2010-01-01
Statistical summaries of streamflow data for all historical streamgaging stations that have available data in the southeastern Afghanistan provinces of Ghazni, Khost, Logar, Paktya, and Wardak, and a portion of Kabul Province are presented in this report. The summaries for each streamgaging station include a station description; a table of statistics of monthly and annual mean discharges; a table of monthly and annual flow duration; a table of probability of occurrence of annual high discharges; a table of probability of occurrence of annual low discharges; a table of annual peak discharge and corresponding gage height for the period of record; and a table of monthly and annual mean discharges for the period of record.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-01
... a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-01
..., medium, or high probability of retiring early. The determination is based on the year a participant would... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
A Framework for Network Visualisation (Un Cadre Pour la Visualisation des Reseaux)
2010-02-01
"Business, Communities, and Government" (online), http://www.orgnet.com/inflow3.html (Access Date: 13 June 2008). [32] Batagelj, V., Mrvar, A. and...and NATO Workshops 1996-2008 1-7 Table 2-1 Perceptual Modes and Probable Display and Interaction Consequences 2-18 Table 2-2 Informational...RSG) was created in 1996 with the aim of developing methods for presenting to human users the implications of the contents of large, complex and
Guymon, Gary L.; Yen, Chung-Cheng
1990-01-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for the temporal position of the water table are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-30
... participant has a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-02
... has a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
Kepler Reliability and Occurrence Rates
NASA Astrophysics Data System (ADS)
Bryson, Steve
2016-10-01
The Kepler mission has produced tables of exoplanet candidates (the "KOI table"), as well as tables of transit detections (the "TCE table"), hosted at the Exoplanet Archive (http://exoplanetarchive.ipac.caltech.edu). Transit detections in the TCE table that are plausibly due to a transiting object are selected for inclusion in the KOI table. KOI table entries that have not been identified as false positives (FPs) or false alarms (FAs) are classified as planet candidates (PCs; Mullally et al. 2015). A subset of PCs have been confirmed as planetary transits with greater than 99% probability, but most PCs have <99% probability of being true planets. The fraction of PCs that are true transiting planets is the PC reliability rate. The overall PC population is believed to have a reliability rate >90% (Morton & Johnson 2011).
Noble, J.E.; Bush, P.W.; Kasmarek, M.C.; Barbie, D.L.
1996-01-01
In 1989, the U.S. Geological Survey, in cooperation with the Harris-Galveston Coastal Subsidence District, began a field study to determine the depth to the water table and to estimate the rate of recharge in outcrops of the Chicot and Evangeline aquifers near Houston, Texas. The study area comprises about 2,000 square miles of outcrops of the Chicot and Evangeline aquifers in northwest Harris County, Montgomery County, and southern Walker County. Because of the scarcity of measurable water-table wells, depth to the water table below land surface was estimated using a surface geophysical technique, seismic refraction. The water table in the study area generally ranges from about 10 to 30 feet below land surface and typically is deeper in areas of relatively high land-surface altitude than in areas of relatively low land-surface altitude. The water table has demonstrated no long-term trends since ground-water development began, with the probable exception of the water table in the Katy area; there the water table is more than 75 feet deep, probably due to ground-water pumpage from deeper zones. An estimated rate of recharge in the aquifer outcrops was computed using the interface method, in which environmental tritium is a ground-water tracer. The estimated average total recharge rate in the study area is 6 inches per year. This rate is an upper bound on the average recharge rate during the 37 years 1953-90 because it is based on the deepest penetration (about 80 feet) of postnuclear-testing tritium concentrations. The rate, which represents one of several components of a complex regional hydrologic budget, is considered reasonable but is not definitive because of uncertainty regarding the assumptions and parameters used in its computation.
1990-09-01
Measuring C2 Effectiveness with Decision Probability (September 1990). [Table of contents fragments; recoverable entries include 1.0 Introduction and 5.0 Expressing Requirements with Probability.]
2009-01-01
Background: Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods: In this paper, we describe a peeling-based, deterministic, exact algorithm to compute genotype probabilities efficiently for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets, called anterior and posterior cutsets, and reusing these intermediate results to compute the likelihood. Examples: A small example illustrates the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member of a real cattle pedigree. PMID:19958551
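On a toy pedigree the quantities involved can be computed by brute-force enumeration; peeling obtains the same sums efficiently by caching partial results in cutsets. The allele frequency below is hypothetical, and the example is a single trio at a biallelic locus:

```python
from itertools import product

q = 0.1                                            # hypothetical allele frequency
prior = [(1 - q) ** 2, 2 * q * (1 - q), q ** 2]    # founder genotype prior (0,1,2 copies)

def transmit(g):
    """P(parent with g copies transmits 0 or 1 copy of the allele)."""
    return [1.0 - g / 2.0, g / 2.0]

def child_given_parents(gf, gm, gc):
    """P(child genotype gc | father gf, mother gm) by Mendelian transmission."""
    return sum(transmit(gf)[a] * transmit(gm)[b]
               for a, b in product((0, 1), repeat=2) if a + b == gc)

# Marginal genotype probabilities for the child of two untyped founders,
# by enumerating all parental genotype combinations; peeling reuses these
# sums via anterior/posterior cutsets instead of recomputing them.
marginal = [sum(prior[gf] * prior[gm] * child_given_parents(gf, gm, gc)
                for gf, gm in product(range(3), repeat=2))
            for gc in range(3)]
```

With no observed data the child's marginal reduces to Hardy-Weinberg proportions, which makes the toy case easy to verify; in a real pedigree with penetrances and loops, the enumeration blows up and the cutset bookkeeping is what keeps the computation tractable.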
Mikou, M; Ghosne, N; El Baydaoui, R; Zirari, Z; Kuntz, F
2015-05-01
Performance characteristics of megavoltage photon dose measurements with EPR and table sugar were analyzed. An advantage of sugar as a dosimetric material is its tissue equivalency. The minimal detectable dose was found to be 1.5 Gy for both the 6 and 18 MV photons. The dose response curves are linear up to at least 20 Gy. The energy dependence of the dose response in the megavoltage energy range is very weak and probably statistically insignificant. Reproducibility of measurements of various doses in this range performed with the peak-to-peak and double-integral methods is reported. The method can be used in real-time dosimetry in radiation therapy.
Two-Way Tables: Issues at the Heart of Statistics and Probability for Students and Teachers
ERIC Educational Resources Information Center
Watson, Jane; Callingham, Rosemary
2014-01-01
Some problems exist at the intersection of statistics and probability, creating a dilemma in relation to the best approach to assist student understanding. Such is the case with problems presented in two-way tables representing conditional information. The difficulty can be confounded if the context within which the problem is set is one where…
Land, K C; Guralnik, J M; Blazer, D G
1994-05-01
A fundamental limitation of current multistate life table methodology, evident in recent estimates of active life expectancy for the elderly, is the inability to estimate tables from data on small longitudinal panels in the presence of multiple covariates (such as sex, race, and socioeconomic status). This paper presents an approach to such estimation based on an isomorphism between the structure of the stochastic model underlying a conventional specification of the increment-decrement life table and that of Markov panel regression models for simple state spaces. We argue that Markov panel regression procedures can be used to provide smoothed or graduated group-specific estimates of transition probabilities that are more stable across short age intervals than those computed directly from sample data. We then join these estimates with increment-decrement life table methods to compute group-specific total, active, and dependent life expectancy estimates. To illustrate the methods, we describe an empirical application to the estimation of such life expectancies specific to sex, race, and education (years of school completed) for a longitudinal panel of elderly persons. We find that education extends both total life expectancy and active life expectancy. Education thus may serve as a powerful social protective mechanism delaying the onset of health problems at older ages.
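The increment-decrement machinery can be illustrated with a toy discrete-time Markov model. The transition probabilities below are invented and age-constant (the paper's estimates are age- and group-specific); expected years in the active and dependent states then follow from the fundamental matrix:

```python
import numpy as np

# Hypothetical annual transition probabilities between states
# 0 = active, 1 = dependent, 2 = dead (absorbing); rows sum to 1.
P = np.array([[0.90, 0.07, 0.03],
              [0.10, 0.75, 0.15],
              [0.00, 0.00, 1.00]])

Q = P[:2, :2]                              # transitions among living states
N = np.linalg.inv(np.eye(2) - Q)           # fundamental matrix: expected visits
active_years, dependent_years = N[0]       # expectancies starting in 'active'
total_years = active_years + dependent_years
```

For these numbers a person starting active expects roughly 13.9 active years and 3.9 dependent years; splitting total expectancy into active and dependent components in this way is exactly the decomposition behind "active life expectancy".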
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires prior knowledge of only the mean and coefficient of variation of each uncertain variable. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
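The efficiency argument is easy to see in code: with n uncertain variables, the two-point method needs only 2^n model evaluations at the mean-plus/minus-sigma corners, versus many thousands for Monte Carlo. The model function below is an invented stand-in, not the paper's groundwater model:

```python
import numpy as np
from itertools import product

def model(k, s):
    """Hypothetical stand-in response: water-table elevation as a mildly
    nonlinear function of conductivity k and storage coefficient s."""
    return 10.0 + 2.0 * k - 0.5 * k * s

means = np.array([1.0, 0.2])
sigmas = means * 0.1                       # small coefficients of variation

def two_point_estimate(f, means, sigmas):
    """Rosenblueth-style two-point method: evaluate f at every
    mean +/- sigma corner, each with weight 1/2^n."""
    vals = np.array([f(*(means + np.array(signs) * sigmas))
                     for signs in product((-1.0, 1.0), repeat=len(means))])
    return vals.mean(), vals.std()

tp_mean, tp_std = two_point_estimate(model, means, sigmas)

# Monte Carlo check with independent normal inputs (seeded for repeatability).
rng = np.random.default_rng(1)
mc = model(rng.normal(means[0], sigmas[0], 200_000),
           rng.normal(means[1], sigmas[1], 200_000))
```

Here 4 corner evaluations reproduce the mean that 200,000 Monte Carlo samples estimate, which is the regime (mild nonlinearity, small coefficients of variation) where the abstract says the method is valid.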
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when the statistics are applied. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve preferred computational speeds, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% ( σ=3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
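The sparse-storage idea behind the GLCLL and GLCHS can be sketched in a few lines: store only the non-zero co-occurring probabilities in a hash table (a Python dict here, standing in for the paper's integrated hash-table/linked-list structure in C), so the texture statistics iterate over non-zero entries only. The image and displacement vector are illustrative.

```python
from collections import defaultdict
import math

def cooccurrence_probs(image, dx=1, dy=0):
    """Sparse co-occurrence probabilities for one displacement vector.
    A dict (hash table) stores only non-zero co-occurring pairs, so the
    statistics below touch far fewer entries than a dense GLCM would."""
    counts = defaultdict(int)
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
                total += 1
    return {pair: n / total for pair, n in counts.items()}

def contrast(probs):
    # Sums over non-zero entries only -- the point of sparse storage.
    return sum(p * (i - j) ** 2 for (i, j), p in probs.items())

def entropy(probs):
    return -sum(p * math.log(p) for p in probs.values())

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
probs = cooccurrence_probs(img)
print(round(contrast(probs), 4))
```

Statistics such as contrast and entropy visit only the stored non-zero pairs, which is precisely the computational advantage the GLCLL and GLCHS exploit over the sparse dense matrix.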
Exact Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data.
Lin, Yan; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett; Lipshultz, Steven
2017-01-01
Altham (Altham PME. Exact Bayesian analysis of a 2 × 2 contingency table, and Fisher's "exact" significance test. J R Stat Soc B 1969; 31: 261-269) showed that a one-sided p-value from Fisher's exact test of independence in a 2 × 2 contingency table is equal to the posterior probability of negative association in the 2 × 2 contingency table under a Bayesian analysis using an improper prior. We derive an extension of Fisher's exact test p-value in the presence of missing data, assuming the missing data mechanism is ignorable (i.e., missing at random or completely at random). Further, we propose Bayesian p-values for a test of independence in a 2 × 2 contingency table with missing data using alternative priors; we also present results from a simulation study exploring the Type I error rate and power of the proposed exact test p-values. An example, using data on the association between blood pressure and a cardiac enzyme, is presented to illustrate the methods.
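Altham's correspondence concerns the ordinary one-sided Fisher exact p-value, which can be computed directly from the hypergeometric distribution with the table margins fixed. The sketch below handles complete data only (it is not the authors' missing-data extension), and the example table is illustrative.

```python
from math import comb

def fisher_one_sided_p(a, b, c, d):
    """One-sided (left-tail) Fisher exact p-value for the 2 x 2 table
    [[a, b], [c, d]]: P(X <= a) for X hypergeometric with the observed
    margins fixed."""
    n1, col1, N = a + b, a + c, a + b + c + d
    denom = comb(N, col1)
    lo = max(0, col1 - (c + d))  # smallest feasible cell count
    return sum(comb(n1, k) * comb(N - n1, col1 - k)
               for k in range(lo, a + 1)) / denom

# Illustrative table: 1 and 11 "successes" in groups of 10 and 14.
print(round(fisher_one_sided_p(1, 9, 11, 3), 6))
```

Under Altham's improper-prior Bayesian analysis, this tail probability equals the posterior probability of negative association in the table.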
VizieR Online Data Catalog: Ba V, Ba VI, and Ba VII oscillator strengths (Rauch+, 2014)
NASA Astrophysics Data System (ADS)
Rauch, T.; Werner, K.; Quinet, P.; Kruk, J. W.
2014-04-01
table1.dat contains calculated HFR oscillator strengths (loggf) and transition probabilities (gA, in 1/s) in Ba V. CF is the cancellation factor as defined by Cowan (1981). In columns 3 and 6, e is written for even and o for odd. table2.dat contains calculated HFR oscillator strengths (loggf) and transition probabilities (gA, in 1/s) in Ba VI. CF is the cancellation factor as defined by Cowan (1981). In columns 3 and 6, e is written for even and o for odd. table3.dat contains calculated HFR oscillator strengths (loggf) and transition probabilities (gA, in 1/s) in Ba VII. CF is the cancellation factor as defined by Cowan (1981). In columns 3 and 6, e is written for even and o for odd. (3 data files).
Transition Probabilities for Hydrogen-Like Atoms
NASA Astrophysics Data System (ADS)
Jitrik, Oliverio; Bunge, Carlos F.
2004-12-01
E1, M1, E2, M2, E3, and M3 transition probabilities for hydrogen-like atoms are calculated with point-nucleus Dirac eigenfunctions for Z=1-118 and up to large quantum numbers l=25 and n=26, increasing existing data more than a thousandfold. A critical evaluation of the accuracy shows a higher reliability with respect to previous works. Tables for hydrogen containing a subset of the results are given explicitly, listing the states involved in each transition, wavelength, term energies, statistical weights, transition probabilities, oscillator strengths, and line strengths. The complete results, including 1 863 574 distinct transition probabilities, lifetimes, and branching fractions are available at http://www.fisica.unam.mx/research/tables/spectra/1el
The meaning of diagnostic test results: a spreadsheet for swift data analysis.
Maceneaney, P M; Malone, D E
2000-03-01
To design a spreadsheet program to: (a) rapidly analyse diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. Microsoft Excel(TM) was used. Section A: a contingency (2 x 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C-F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. This spreadsheet can be used on desktop and palmtop computers. The MS Excel(TM) version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. Copyright 2000 The Royal College of Radiologists.
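The Bayes'-theorem sections of such a spreadsheet compute post-test probabilities from sensitivity, specificity, and pre-test probability. The same arithmetic can be sketched as follows, using the odds form of Bayes' theorem; the example numbers are hypothetical, not taken from the article.

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1 - spec), (1 - sens) / spec

def post_test_probability(pretest, sens, spec, positive=True):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    lr_pos, lr_neg = likelihood_ratios(sens, spec)
    lr = lr_pos if positive else lr_neg
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# A test with 90% sensitivity and 80% specificity, 10% pre-test probability:
print(round(post_test_probability(0.10, 0.90, 0.80), 3))
```

Evaluating this over a grid of pre-test probabilities reproduces the spreadsheet's table and graph of conditional probabilities for positive and negative results.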
Statistical Tools for Determining Fitness to Fly
1981-09-01
[Extraction-garbled excerpt from the program's card-layout documentation. Recoverable details: the program reads a file of 13 cards; Card 1 contains an 8-character real field, EFAIL, the average number of failures for size of control...; flow charts show the inputs EFAIL, CYEAR, NVAR, NAV, and XINC, and a method that computes survival probability and frequency tables.]
The Probability Distribution for a Biased Spinner
ERIC Educational Resources Information Center
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
Saleh, Dina K.
2010-01-01
Statistical summaries of streamflow data for all long-term streamflow-gaging stations in the Tigris River and Euphrates River Basins in Iraq are presented in this report. The summaries for each streamflow-gaging station include (1) a station description, (2) a graph showing annual mean discharge for the period of record, (3) a table of extremes and statistics for monthly and annual mean discharge, (4) a graph showing monthly maximum, minimum, and mean discharge, (5) a table of monthly and annual mean discharges for the period of record, (6) a graph showing annual flow duration, (7) a table of monthly and annual flow duration, (8) a table of high-flow frequency data (maximum mean discharge for 3-, 7-, 15-, and 30-day periods for selected exceedance probabilities), and (9) a table of low-flow frequency data (minimum mean discharge for 3-, 7-, 15-, 30-, 60-, 90-, and 183-day periods for selected non-exceedance probabilities).
2015-03-12
[Table-of-contents fragments: Table 3, Optometry Clinic Frequency Count; Table 22, Probability Distribution Summary Table; ...the Audiology Clinic, and the Optometry Clinic. Methodology Overview: the overarching research goal is to identify feasible solutions to...]
VizieR Online Data Catalog: Transition probabilities in TeII + TeIII spectra (Zhang+, 2013)
NASA Astrophysics Data System (ADS)
Zhang, W.; Palmeri, P.; Quinet, P.; Biemont, E.
2013-02-01
Computed weighted oscillator strengths (loggf) and transition probabilities (gA) for Te II (Table 8) and Te III (Table 9). Only transitions with wavelengths <1 um, loggf > -1, and CF > 0.05 are quoted. Air wavelengths are given above 200 nm. In Table 8 the levels are taken from Kramida et al. (Kramida, A., Ralchenko, Yu., Reader, J., and NIST ASD Team (2012). NIST Atomic Spectra Database (ver. 5.0), [Online]. Available: http://physics.nist.gov/asd [2012, September 20]. National Institute of Standards and Technology, Gaithersburg, MD.). In Table 9 the levels are those given in Tauheed & Naz (Tauheed, A., Naz, A. 2011, Journal of the Korean Physical Society 59, 2910), with the exception of the 5p6p levels, which were taken from Kramida et al. The wavelengths were computed from the experimental levels of Kramida et al. and Tauheed & Naz. (2 data files).
ERIC Educational Resources Information Center
Satake, Eiki; Amato, Philip P.
2008-01-01
This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…
ERIC Educational Resources Information Center
Beam, John
2012-01-01
Students and mathematicians alike have long struggled to understand the nature of probability. This article explores the use of gambling activities as a basis for defining probabilities. (Contains 1 table and 1 figure.)
ERIC Educational Resources Information Center
Obersteiner, Andreas; Bernhard, Matthias; Reiss, Kristina
2015-01-01
Understanding contingency table analysis is a facet of mathematical competence in the domain of data and probability. Previous studies have shown that even young children are able to solve specific contingency table problems, but apply a variety of strategies that are actually invalid. The purpose of this paper is to describe primary school…
Stocking and yield of Virginia pine stands in Prince Georges County, Maryland
Thomas W., Jr. Church
1955-01-01
Development of yield tables is prerequisite to designing forest-management plans. Yield tables have been prepared for Virginia pine in Maryland, North Carolina, and Pennsylvania. But the differences among yields in these three states are great. These differences are probably due chiefly to site. Therefore it would be desirable to have yield tables based on fairly local...
Competing risks to breast cancer mortality in Catalonia
Vilaprinyo, Ester; Gispert, Rosa; Martínez-Alonso, Montserrat; Carles, Misericòrdia; Pla, Roger; Espinàs, Josep-Alfons; Rué, Montserrat
2008-01-01
Background: Breast cancer mortality has experienced important changes over the last century. Breast cancer occurs in the presence of other competing risks which can influence breast cancer incidence and mortality trends. The aims of the present work are: 1) to assess the impact of breast cancer deaths on mortality from all causes in Catalonia (Spain), by age and birth cohort, and 2) to estimate the risk of death from causes other than breast cancer, one of the inputs needed to model breast cancer mortality reduction due to screening or therapeutic interventions. Methods: The multi-decrement life table methodology was used. First, all-cause mortality probabilities were obtained by age and cohort. Then the mortality probability for breast cancer was subtracted from the all-cause mortality probabilities to obtain cohort life tables for causes other than breast cancer. These life tables, on the one hand, provide an estimate of the risk of dying from competing risks and, on the other hand, permit assessment of the impact of breast cancer deaths on all-cause mortality using the ratio of the probability of death from causes other than breast cancer to the all-cause probability of death. Results: There was an increasing impact of breast cancer on mortality in the first part of the 20th century, with a peak for cohorts born in 1945-54 in the 40-49 age group (for which approximately 24% of mortality was due to breast cancer). Even though for cohorts born after 1955 there was only information for women under 50, it is also important to note that the impact of breast cancer on all-cause mortality decreased for those cohorts. Conclusion: We have quantified the effect of removing breast cancer mortality in different age groups and birth cohorts. Our results are consistent with US findings. We have also obtained an estimate of the risk of dying from competing-causes mortality, which will be used in the assessment of the effect of mammography screening on breast cancer mortality in Catalonia.
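The subtraction step described in the Methods can be sketched directly. Assuming age-group probabilities of death are available, a minimal version of the multi-decrement bookkeeping looks like this; the probabilities below are hypothetical, not the Catalan data.

```python
def other_cause_life_table(q_all, q_cause):
    """Life table for causes other than one cause of interest, using the
    subtraction described in the abstract: q_other = q_all - q_cause.
    Also records the share of mortality NOT due to the cause (the ratio
    used as the impact measure)."""
    rows = []
    survivors = 100000.0  # conventional radix
    for qa, qc in zip(q_all, q_cause):
        q_other = qa - qc
        rows.append({
            "q_other": q_other,
            "share_other": q_other / qa,  # ratio of other-cause to all-cause
            "l_x": survivors,             # survivors entering the age group
        })
        survivors *= 1 - q_other
    return rows

# Hypothetical probabilities for three 10-year age groups (40-49, 50-59, 60-69):
q_all = [0.020, 0.050, 0.120]   # all-cause probability of death
q_bc = [0.005, 0.008, 0.010]    # breast-cancer probability of death
table = other_cause_life_table(q_all, q_bc)
print(round(table[0]["share_other"], 3))
```

With these illustrative numbers, 25% of mortality in the first age group is attributed to the cause of interest, analogous to the abstract's 24% peak figure.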
PMID:19014473
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
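The 2x2 metrics named above are all simple ratios of the four table cells. The class below is an illustrative sketch of that computation, not the PyForecastTools API; names follow common forecast-verification usage, and the counts are made up.

```python
class ContingencyTable2x2:
    """Binary-classification metrics of the kind the package exposes.
    Cells: a = hits, b = false alarms, c = misses, d = correct negatives."""

    def __init__(self, hits, false_alarms, misses, correct_negatives):
        self.a, self.b = hits, false_alarms
        self.c, self.d = misses, correct_negatives

    def probability_of_detection(self):
        return self.a / (self.a + self.c)

    def probability_of_false_detection(self):
        return self.b / (self.b + self.d)

    def false_alarm_ratio(self):
        return self.b / (self.a + self.b)

    def threat_score(self):
        return self.a / (self.a + self.b + self.c)

t = ContingencyTable2x2(hits=50, false_alarms=10, misses=25, correct_negatives=100)
print(t.probability_of_detection(), t.false_alarm_ratio())
```

Note the distinction the abstract preserves: the false alarm *ratio* is conditioned on forecasts of the event, while the probability of false detection is conditioned on non-events.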
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
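The report's model equations have the standard logistic form, which is straightforward to evaluate once fitted coefficients are in hand. A minimal sketch follows; the coefficients and flow values here are hypothetical stand-ins for the values published in the report's tables.

```python
import math

def drought_probability(winter_flow, b0, b1):
    """Logistic model of the general form used for drought-flow thresholds:
    P(summer flow below threshold) = 1 / (1 + exp(-(b0 + b1 * x))),
    where x is a winter streamflow statistic. Coefficients are hypothetical."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * winter_flow)))

# With b1 < 0, higher winter flow implies lower summer drought probability.
print(round(drought_probability(10.0, 2.0, -0.5), 4))
```

The monotone dependence on winter flow is what lets the equations forecast drought-flow probabilities 5 to 8 months ahead.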
20 CFR 725.521 - Commutation of payments; lump sum awards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... probability of the death of the disabled miner and/or other persons entitled to benefits before the expiration... Welfare, and the probability of the remarriage of a surviving spouse shall be determined in accordance with the remarriage tables of the Dutch Royal Insurance Institution. The probability of the happening...
NASA Astrophysics Data System (ADS)
Bouy, H.; Bertin, E.; Sarro, L. M.; Barrado, D.; Moraux, E.; Bouvier, J.; Cuillandre, J.-C.; Berihuete, A.; Olivares, J.; Beletsky, Y.
2015-05-01
Context. The DANCe survey provides photometric and astrometric (position and proper motion) measurements for approximately 2 million unique sources in a region encompassing ~80 deg2 centered on the Pleiades cluster. Aims: We aim at deriving a complete census of the Pleiades and measuring the mass and luminosity functions of the cluster. Methods: Using the probabilistic selection method previously described, we identified high-probability members in the DANCe (i ≥ 14 mag) and Tycho-2 (V ≲ 12 mag) catalogues and studied the properties of the cluster over the corresponding luminosity range. Results: We find a total of 2109 high-probability members, of which 812 are new, making it the most extensive and complete census of the cluster to date. The luminosity and mass functions of the cluster are computed from the most massive members down to ~0.025 M⊙. The size, sensitivity, and quality of the sample result in the most precise luminosity and mass functions observed to date for a cluster. Conclusions: Our census supersedes previous studies of the Pleiades cluster populations, in terms of both sensitivity and accuracy. Based on service observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Table 1 and Appendices are available in electronic form at http://www.aanda.org. DANCe catalogs (Tables 6 and 7) and full Tables 2-5 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/A148
NASA Technical Reports Server (NTRS)
Munoz, E. F.; Silverman, M. P.
1979-01-01
A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most-probable-number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
Orbital electron capture by the nucleus
NASA Technical Reports Server (NTRS)
Bambynek, W.; Behrens, H.; Chen, M. H.; Crasemann, B.; Fitzpatrick, M. L.; Ledingham, K. W. D.; Genz, H.; Mutterer, M.; Intemann, R. L.
1976-01-01
The theory of nuclear electron capture is reviewed in the light of current understanding of weak interactions. Experimental methods and results regarding capture probabilities, capture ratios, and EC/Beta(+) ratios are summarized. Radiative electron capture is discussed, including both theory and experiment. Atomic wave function overlap and electron exchange effects are covered, as are atomic transitions that accompany nuclear electron capture. Tables are provided to assist the reader in determining quantities of interest for specific cases.
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
Branching ratios in hydrogen-like atoms due to electric-dipole transitions are tabulated for the initial principal and angular momentum quantum numbers n′, λ′ and final principal and angular momentum quantum numbers n, λ. In table 1, transition probabilities are given for transitions n′λ′ → n, where sums have been made with respect to λ. In this table, 2 ≤ n′ ≤ 10, 0 ≤ λ′ ≤ n′-1, and 1 ≤ n ≤ n′-1. In addition, averages with respect to λ′, sums with respect to n, and lifetimes are given. In table 2, branching ratios are given for transitions n′λ′ → n, where sums have been made with respect to λ. In these tables, 2 ≤ n′ ≤ 10, 0 ≤ λ′ ≤ n′-1, and 1 ≤ n ≤ n′-1. Averages with respect to λ′ are also given. In table 3, branching ratios are given for transitions n′λ′ → nλ, where 1 ≤ n ≤ 5, 0 ≤ λ ≤ n-1, n < n′ ≤ 15, and 0 ≤ λ′ ≤ n_s, where n_s is the smaller of the two numbers n′-1 and 6. Averages with respect to λ′ are given.
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k values associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
Cellular Automata Generalized To An Inferential System
NASA Astrophysics Data System (ADS)
Blower, David J.
2007-11-01
Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
Decompressing recompression chamber attendants during Australian submarine rescue operations.
Reid, Michael P; Fock, Andrew; Doolette, David J
2017-09-01
Inside chamber attendants rescuing survivors from a pressurised, distressed submarine may themselves accumulate a decompression obligation which may exceed the limits of the Defence and Civil Institute of Environmental Medicine tables presently used by the Royal Australian Navy. This study assessed the probability of decompression sickness (PDCS) for medical attendants supervising survivors undergoing oxygen-accelerated saturation decompression according to the National Oceanic and Atmospheric Administration (NOAA) 17.11 table. The estimated PDCS, the units of pulmonary oxygen toxicity dose (UPTD), and the volume of oxygen required were calculated for attendants breathing air during the NOAA table, compared with the introduction of various periods of oxygen breathing. The PDCS in medical attendants breathing air whilst supervising survivors receiving NOAA decompression is up to 4.5%. For the longest predicted profile (830 minutes at 253 kPa), oxygen breathing for 30, 60, and 90 minutes at 132 kPa partial pressure of oxygen reduced the air-breathing-associated PDCS to less than 3.1%, 2.1%, and 1.4%, respectively. The probability of at least one incident of DCS among attendants, with consequent strain on resources, is high if attendants breathe air throughout their exposure. The introduction of 90 minutes of oxygen breathing greatly reduces the probability of this interruption to rescue operations.
Generating constrained randomized sequences: item frequency matters.
French, Robert M; Perruchet, Pierre
2009-11-01
All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
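The no-immediate-repeat constraint and the resulting transition table can be sketched as follows. This is a generic illustration of the two ingredients (constrained generation, empirical transitional probabilities), not the authors' analytic method or their Excel tool.

```python
import random
from collections import defaultdict

def no_repeat_sequence(items, length, seed=0):
    """Random sequence over `items` with no immediate repetitions:
    each candidate draw is rejected if it equals the previous item."""
    rng = random.Random(seed)
    seq = [rng.choice(items)]
    while len(seq) < length:
        nxt = rng.choice(items)
        if nxt != seq[-1]:
            seq.append(nxt)
    return seq

def transition_table(seq):
    """Empirical transitional probabilities P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(row.values()) for nxt, n in row.items()}
            for cur, row in counts.items()}

seq = no_repeat_sequence(["A", "B", "C"], 1000)
tbl = transition_table(seq)
# Diagonal entries are absent: immediate repeats never occur.
print(all(cur not in row for cur, row in tbl.items()))
```

The empty diagonal illustrates the article's central point: banning immediate repeats necessarily distorts the transitional probabilities away from those of a fully random sequence, so transition frequencies need to be inspected (or controlled) explicitly.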
An Exercise to Introduce Power
ERIC Educational Resources Information Center
Seier, Edith; Liu, Yali
2013-01-01
In introductory statistics courses, the concept of power is usually presented in the context of testing hypotheses about the population mean. We instead propose an exercise that uses a binomial probability table to introduce the idea of power in the context of testing a population proportion. (Contains 2 tables, and 2 figures.)
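The exercise's idea, power computed from binomial probabilities for a test of a proportion, can be sketched directly. The sample size, null, and alternative proportions below are hypothetical classroom values, not taken from the article.

```python
from math import comb

def binom_tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p) -- the quantity a binomial
    probability table lets students look up."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

n, p0, p1 = 20, 0.5, 0.7  # hypothetical: test H0: p = 0.5 against p = 0.7
# Smallest cutoff whose significance level under H0 is at most 0.05.
c = next(c for c in range(n + 1) if binom_tail(n, p0, c) <= 0.05)
alpha = binom_tail(n, p0, c)        # actual (attained) significance level
power = binom_tail(n, p1, c)        # power against the alternative p = p1
print(c, round(alpha, 4), round(power, 4))
```

Because the binomial is discrete, the attained level falls below the nominal 0.05, a feature that a table-based exercise makes visible in a way normal-approximation formulas do not.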
Rank and independence in contingency table
NASA Astrophysics Data System (ADS)
Tsumoto, Shusaku
2004-04-01
A contingency table summarizes the conditional frequencies of two attributes and shows how these two attributes are dependent on each other. Thus, this table is a fundamental tool for pattern discovery with conditional probabilities, such as rule discovery. In this paper, a contingency table is interpreted from the viewpoint of statistical independence and granular computing. The first important observation is that a contingency table compares two attributes with respect to the number of equivalence classes. For example, an n × n table compares two attributes with the same granularity, while an m × n (m ≥ n) table compares two attributes with different granularities. The second important observation is that matrix algebra is a key point of analysis of this table. In particular, the rank, as a measure of the degree of independence, plays a very important role in evaluating the degree of statistical independence. Relations between rank and the degree of dependence are also investigated.
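The rank criterion is easy to illustrate: viewing a contingency table as a matrix, exact statistical independence of the two attributes corresponds to rank 1 (every row is proportional to the column margins), while higher rank indicates dependence. A small stdlib sketch, with illustrative tables:

```python
def matrix_rank(m, tol=1e-9):
    """Rank of a small matrix by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]           # work on a copy
    rank, rows, cols = 0, len(m), len(m[0])
    for col in range(cols):
        pivot = max(range(rank, rows), key=lambda r: abs(m[r][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < tol:
            continue                    # no usable pivot in this column
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, rows):
            f = m[r][col] / m[rank][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Independent attributes: rows proportional to one another -> rank 1.
independent = [[10.0, 20.0, 30.0], [20.0, 40.0, 60.0]]
# Dependent attributes: rows not proportional -> rank > 1.
dependent = [[10.0, 20.0, 30.0], [30.0, 10.0, 20.0]]
print(matrix_rank(independent), matrix_rank(dependent))
```

For sampled tables the rank is rarely exactly 1, which is why the paper studies *degrees* of dependence rather than a binary test.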
Evidential Networks for Fault Tree Analysis with Imprecise Knowledge
NASA Astrophysics Data System (ADS)
Yang, Jianping; Huang, Hong-Zhong; Liu, Yu; Li, Yan-Feng
2012-06-01
Fault tree analysis (FTA), as one of the powerful tools in reliability engineering, has been widely used to enhance system quality attributes. In most fault tree analyses, precise values are adopted to represent the probabilities of occurrence of those events. Due to the lack of sufficient data or the imprecision of existing data at the early stage of product design, it is often difficult to accurately estimate the failure rates of individual events or the probabilities of occurrence of the events. Therefore, such imprecision and uncertainty need to be taken into account in reliability analysis. In this paper, evidential networks (EN) are employed to quantify and propagate the aforementioned uncertainty and imprecision in fault tree analysis. The detailed processes for converting common logic gates in a fault tree (FT) to EN are described. The figures of the logic gates and the converted equivalent EN, together with the associated truth tables and the conditional belief mass tables, are also presented in this work. A new epistemic importance measure is proposed to describe the effect of the degree of ignorance of an event. The fault tree of an aircraft engine damaged by oil filter plugs is presented to demonstrate the proposed method.
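A highly simplified sketch of the underlying idea of propagating imprecise event probabilities through AND/OR gates, using probability intervals. The paper itself uses evidential networks with conditional belief mass tables; the interval arithmetic below (which assumes independent basic events with hypothetical bounds) only illustrates the flavor of imprecise propagation, not the authors' EN conversion.

```python
def and_gate(p1, p2):
    """Interval propagation through an AND gate for independent events;
    each argument is an interval (lower, upper) on P(event occurs)."""
    return (p1[0] * p2[0], p1[1] * p2[1])

def or_gate(p1, p2):
    """Interval propagation through an OR gate for independent events."""
    return (1 - (1 - p1[0]) * (1 - p2[0]),
            1 - (1 - p1[1]) * (1 - p2[1]))

# Imprecise basic-event probabilities (hypothetical values).
pump = (0.01, 0.03)
valve = (0.02, 0.05)
print(and_gate(pump, valve), or_gate(pump, valve))
```

The width of the output interval is one crude measure of how ignorance about basic events propagates to the top event, the kind of effect the paper's epistemic importance measure is designed to quantify more rigorously.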
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models usually operate as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of using the features in existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated directly into the current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.
Pigeons, Facebook and the Birthday Problem
ERIC Educational Resources Information Center
Russell, Matthew
2013-01-01
The unexpectedness of the birthday problem has long been used by teachers of statistics in discussing basic probability calculation. An activity is described that engages students in understanding probability and sampling using the popular Facebook social networking site. (Contains 2 figures and 1 table.)
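The surprise in the birthday problem rests on a short exact calculation. A minimal sketch of it (illustrative only; the article itself describes a Facebook-based classroom activity, not code):

```python
def birthday_collision_prob(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming birthdays are independent and uniform over `days`."""
    if n > days:
        return 1.0  # pigeonhole principle: a match is certain
    p_no_match = 1.0
    for k in range(n):
        p_no_match *= (days - k) / days
    return 1.0 - p_no_match
```

With 23 people the probability already exceeds one half, which is the counterintuitive result the activity exploits.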
Oral contraceptive discontinuation and its aftermath in 19 developing countries.
Ali, Mohamed M; Cleland, John
2010-01-01
The purpose of the article was to document oral contraceptive (OC) discontinuation and switching in a large number of low- and middle-income countries, and to assess the effects of women's education and reason for use (spacing vs. limitation). An attempt was made to explain intercountry variations. Calendar data from 19 Demographic and Health Surveys conducted between 1999 and 2005 were used. Data were analyzed by single- and multiple-decrement life tables and by Cox proportional hazard model. The probability of stopping OC use within 12 months for reasons that implied dissatisfaction with the method ranged from 15% in Indonesia to over 40% in Bolivia and Peru with a median value of 28%. On average, 35% switched to a modern method within 3 months and 16% to a less effective 'traditional' method. Both education and reason for use were strongly related to the probability of switching to a modern method. Discontinuation was lower and switching higher in countries judged to have strong family planning programs. Both discontinuation of use and inadequate switching to alternative methods are major but neglected problems in the family planning services of many developing countries.
Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables
NASA Technical Reports Server (NTRS)
Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin
2003-01-01
The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use the information about distributions of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating profile look-up tables for three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For convective and shallow stratiform regions, the look-up table refers to the precipitation top height (PTH). For the anvil region, on the other hand, the look-up table refers to the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) data and the GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA-COARE table and the GATE table, especially for the convective heating. First, there is a larger number of deep convective profiles in the TOGA-COARE table than in the GATE table, mainly due to differences in SST. Second, shallow convective heating is stronger in the TOGA-COARE table than in the GATE table. This might be attributable to the difference in the strength of the low-level inversions. Third, altitudes of convective heating maxima are higher in the TOGA-COARE table than in the GATE table. Levels of convective heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems.
Differences in levels of convective heating maxima probably reflect differences in melting layer heights. We are now extending our study to simulations of other field experiments (e.g. SCSMEX and ARM) in order to examine the universality of the look-up table. The impact of look-up tables on the retrieved latent heating profiles will also be assessed.
[Employment "survival" among nursing workers at a public hospital].
Anselmi, M L; Duarte, G G; Angerami, E L
2001-07-01
This study aimed at estimating the employment "survival" time of nursing workers after their admission to a public hospital as a turnover index. The Life Table method was used in order to calculate the employment survival probability by X years for each one of the categories of workers. The results showed an accentuated turnover of the work force in the studied period. The categories nursing auxiliary and nurse presented low stability in employment while the category nursing technician was more stable.
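The actuarial life-table calculation behind this kind of employment "survival" estimate can be sketched as follows (an illustration of the general method, not the authors' code; censoring within intervals is ignored here):

```python
def life_table_survival(n_at_risk, n_events):
    """Cumulative 'survival' probability at the end of each interval,
    by the actuarial life-table method.
    n_at_risk[i]: workers still employed entering interval i;
    n_events[i]: workers leaving during interval i."""
    surv = []
    s = 1.0
    for risk, events in zip(n_at_risk, n_events):
        s *= 1.0 - events / risk  # conditional probability of staying
        surv.append(s)
    return surv
```

For example, if 100 workers enter year 1 and 20 leave, then 80 enter year 2 and 8 leave, the estimated probabilities of remaining employed are 0.80 after one year and 0.72 after two.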
The Burden of Social Proof: Shared Thresholds and Social Influence
ERIC Educational Resources Information Center
MacCoun, Robert J.
2012-01-01
[Correction Notice: An erratum for this article was reported in Vol 119(2) of Psychological Review (see record 2012-06153-001). In the article, incorrect versions of figures 3 and 6 were included. Also, Table 8 should have included the following information in the table footnote "P(A|V) = probability of acquittal given unanimous verdict." All…
Treatment planning for prostate focal laser ablation in the face of needle placement uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cepek, Jeremy, E-mail: jcepek@robarts.ca; Fenster, Aaron; Lindner, Uri
2014-01-15
Purpose: To study the effect of needle placement uncertainty on the expected probability of achieving complete focal target destruction in focal laser ablation (FLA) of prostate cancer. Methods: Using a simplified model of prostate cancer focal target, and focal laser ablation region shapes, Monte Carlo simulations of needle placement error were performed to estimate the probability of completely ablating a region of target tissue. Results: Graphs of the probability of complete focal target ablation are presented over clinically relevant ranges of focal target sizes and shapes, ablation region sizes, and levels of needle placement uncertainty. In addition, a table is provided for estimating the maximum target size that is treatable. The results predict that targets whose length is at least 5 mm smaller than the diameter of each ablation region can be confidently ablated using, at most, four laser fibers if the standard deviation in each component of needle placement error is less than 3 mm. However, targets larger than this (i.e., near to or exceeding the diameter of each ablation region) require more careful planning. This process is facilitated by using the table provided. Conclusions: The probability of completely ablating a focal target using FLA is sensitive to the level of needle placement uncertainty, especially as the target length approaches and becomes greater than the diameter of ablated tissue that each individual laser fiber can achieve. The results of this work can be used to help determine individual patient eligibility for prostate FLA, to guide the planning of prostate FLA, and to quantify the clinical benefit of using advanced systems for accurate needle delivery for this treatment modality.
VizieR Online Data Catalog: KOI transit probabilities of multi-planet syst. (Brakensiek+, 2016)
NASA Astrophysics Data System (ADS)
Brakensiek, J.; Ragozzine, D.
2016-06-01
Using CORBITS, we computed the transit probabilities of all the KOIs with at least three candidate or confirmed transiting planets and report the results in Table 2 for a variety of inclination distributions. See section 4.6. (1 data file).
Possibilities of forecasting hypercholesterinemia in pilots
NASA Technical Reports Server (NTRS)
Vivilov, P.
1980-01-01
The dependence of the frequency of hypercholesterinemia on the age, average annual flying time, functional category, qualification class, and flying specialty of 300 pilots was investigated. The risk probability coefficient of hypercholesterinemia was computed. An evaluation table was developed which gives an 84% probability of forecasting the risk of hypercholesterinemia.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
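The two component distributions named in the abstract can be sketched as probability mass functions (the report's particular compounding of them into a single model, and the Cape Kennedy tables, are not reproduced here):

```python
import math

def trunc_poisson_pmf(k, lam):
    """Zero-truncated Poisson: P(K = k | K >= 1), for k = 1, 2, ..."""
    if k < 1:
        return 0.0
    return (lam ** k) * math.exp(-lam) / (math.factorial(k) * (1.0 - math.exp(-lam)))

def neg_binomial_pmf(k, r, p):
    """Negative binomial: probability of k failures before the r-th
    success, with success probability p on each trial."""
    return math.comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)
```

The truncation conditions on "1 or more thunderstorms" occurring over the larger area, which is why the zero count is excluded.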
NASA Astrophysics Data System (ADS)
Hughes, P. D. M.; Mauquoy, D.; van Bellen, S.; Roland, T. P.; Loader, N.; Street-Perrott, F. A.; Daley, T.
2017-12-01
The deep ombrotrophic peat bogs of Chile are located throughout the latitudes dominated by the southern westerly wind belt. The domed surfaces of these peatlands make them sensitive to variability in summer atmospheric moisture balance and the near-continuous accumulation of deep peat strata throughout the Holocene to the present day means that these sites provide undisturbed archives of palaeoclimatic change. We have reconstructed late-Holocene bog water table depths - which can be related to changes in the regional balance of precipitation to evaporation (P-E) - from a suite of peat bogs located in three areas of Tierra del Feugo, Chile, under the main path of the SWWB. Water-table depths were reconstructed from sub-fossil testate amoebae assemblages using a conventional transfer function to infer past water-table depths, based on taxonomic classification of tests but also an innovative trait-based transfer function to infer the same parameter. Water table reconstructions derived from the two methods were consistent within sites. They show that mire water tables have been relatively stable in the last 2000 years across Tierra del Feugo. Higher water table levels, most probably indicating increased effective precipitation, were found between c. 1400 and 900 cal. BP., whereas a consistent drying trend was reconstructed across the region in the most recent peat strata. This shift may represent a pronounced regional decrease in precipitation and/or a change to warmer conditions linked to strengthening of the SWWB. However, other factors such as recent thinning of the ozone layer over Tierra del Fuego could have contributed to recent shifts in some testate amoebae species.
Probability Distributions of Minkowski Distances between Discrete Random Variables.
ERIC Educational Resources Information Center
Schroger, Erich; And Others
1993-01-01
Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. Shown is how to compute the probability function, the expectation, and the variance for Minkowski distances, including the special cases of city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)
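For discrete components, the distribution of a Minkowski distance can be obtained by direct enumeration. A brute-force sketch under the assumption of i.i.d. components (the article's closed-form recursions and significance tables are not reproduced):

```python
from itertools import product

def minkowski_distance(x, y, p):
    """Minkowski distance of order p between vectors x and y."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def distance_distribution(values, probs, n_dims, p):
    """Exact distribution of the Minkowski distance between two
    independent random vectors whose components are i.i.d. on
    `values` with probabilities `probs`. Feasible only for small
    supports and dimensions."""
    dist = {}
    support = list(product(values, repeat=n_dims))
    for x in support:
        px = 1.0
        for v in x:
            px *= probs[values.index(v)]
        for y in support:
            py = 1.0
            for v in y:
                py *= probs[values.index(v)]
            d = round(minkowski_distance(x, y, p), 10)
            dist[d] = dist.get(d, 0.0) + px * py
    return dist
```

With two fair 0/1 components and p = 1 (city-block), the distance counts differing coordinates, giving probabilities 1/4, 1/2, 1/4 for distances 0, 1, 2.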
Agents Overcoming Resource Independent Scaling Threats (AORIST)
2004-10-01
Table 8: Tilted Consumer Preferences Experiment (m=8, N=61, G=2, C=60, mean over 13 experiments) … probabilities. Non-uniform consumer preferences create a new potential for sub-optimal system performance and thus require an additional adaptive … distribution of the capacities across the supplier population must match the non-uniform consumer preferences. The second plot in Table 8 …
Probability in reasoning: a developmental test on conditionals.
Barrouillet, Pierre; Gauffroy, Caroline
2015-04-01
Probabilistic theories have been claimed to constitute a new paradigm for the psychology of reasoning. A key assumption of these theories is captured by what they call the Equation, the hypothesis that the meaning of the conditional is probabilistic in nature and that the probability of If p then q is the conditional probability, in such a way that P(if p then q)=P(q|p). Using the probabilistic truth-table task in which participants are required to evaluate the probability of If p then q sentences, the present study explored the pervasiveness of the Equation through ages (from early adolescence to adulthood), types of conditionals (basic, causal, and inducements) and contents. The results reveal that the Equation is a late developmental achievement only endorsed by a narrow majority of educated adults for certain types of conditionals depending on the content they involve. Age-related changes in evaluating the probability of all the conditionals studied closely mirror the development of truth-value judgements observed in previous studies with traditional truth-table tasks. We argue that our modified mental model theory can account for this development, and hence for the findings related with the probability task, which do not consequently support the probabilistic approach of human reasoning over alternative theories. Copyright © 2014 Elsevier B.V. All rights reserved.
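The Equation itself is a one-line computation over a joint truth-table distribution. A sketch (illustrative, not the authors' experimental materials):

```python
def conditional_probability_from_table(joint):
    """joint: dict mapping (p, q) truth-value pairs to probabilities.
    Returns P(q | p), the value the Equation assigns to P(if p then q)."""
    p_true = joint[(True, True)] + joint[(True, False)]
    return joint[(True, True)] / p_true
```

Note that the false-antecedent cells (False, True) and (False, False) are ignored entirely, which is exactly what distinguishes the probabilistic reading from the material conditional.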
New International Skeleton Tables for the Thermodynamic Properties of Ordinary Water Substance
NASA Astrophysics Data System (ADS)
Sato, H.; Uematsu, M.; Watanabe, K.; Saul, A.; Wagner, W.
1988-10-01
The current knowledge of the thermodynamic properties of ordinary water substance is summarized in a condensed set of skeleton steam tables, in which the most probable values of specific volume and enthalpy, with their reliabilities, are provided for temperatures from 273 to 1073 K and pressures from 101.325 kPa to 1 GPa, and at the saturation state from the triple point to the critical point. These tables have been accepted as the IAPS Skeleton Tables 1985 for the Thermodynamic Properties of Ordinary Water Substance (IST-85) by the International Association for the Properties of Steam (IAPS). The former International Skeleton Steam Tables, October 1963 (IST-63), have been withdrawn by IAPS. About 17 000 experimental thermodynamic data were assessed and classified previously by Working Group 1 of IAPS. About 10 000 experimental data were collected and evaluated in detail, and about 7000 specific-volume data among them were critically analyzed with respect to their errors using the statistical method originally developed at Keio University by the first three authors. As a result, specific-volume and enthalpy values with associated reliabilities were determined at 1455 grid points on 24 isotherms and 61 isobars in the single-fluid-phase state and at 54 temperatures along the saturation curve. The background, analytical procedure, and reliability of IST-85, as well as the assessment of the existing experimental data and equations of state, are also discussed in this paper.
Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems
NASA Astrophysics Data System (ADS)
Gilliam, David M.
2011-06-01
Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
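For the common zero-failure protocol, the minimum number of trials follows directly from the binomial distribution: all n trials must detect, and the plan must satisfy p0^n <= 1 - CL. A sketch of this standard calculation (the article's general tables, which also cover plans with allowed failures, are not reproduced):

```python
import math

def min_tests_zero_failures(pd_required, confidence):
    """Smallest n such that n detections in n trials demonstrates
    PD >= pd_required at the given confidence level CL.
    Solves pd_required**n <= 1 - CL for integer n."""
    return math.ceil(math.log(1.0 - confidence) / math.log(pd_required))
```

For example, demonstrating PD >= 0.90 at 95% confidence requires 29 consecutive successful detections.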
Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.
2016-01-23
Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like the k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested), and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
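The on-the-fly idea rests on evaluating fitted polynomials in 1/T at the local material temperature during the random walk. A sketch of the evaluation step only (the coefficient fitting and the actual S(α,β,T) sampling are beyond this fragment; the function name and coefficient ordering are illustrative assumptions):

```python
def eval_inverse_t_poly(coeffs, temperature):
    """Evaluate c0 + c1*(1/T) + c2*(1/T)**2 + ... by Horner's rule,
    the functional form used for on-the-fly temperature dependence.
    coeffs are ordered from the constant term upward."""
    x = 1.0 / temperature
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result
```

Storing a handful of coefficients per sampled quantity is what reduces the data footprint to megabytes, compared with tabulating full probability tables at many discrete temperatures.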
VizieR Online Data Catalog: Close encounters to the Sun in Gaia DR1 (Bailer-Jones, 2018)
NASA Astrophysics Data System (ADS)
Bailer-Jones, C. A. L.
2017-08-01
The table gives the perihelion (closest approach) parameters of stars in the Gaia-DR1 TGAS catalogue which are found by numerical integration through a Galactic potential to approach within 10pc of the Sun. These parameters are the time (relative to the Gaia measurement epoch), heliocentric distance, and heliocentric speed of the star at perihelion. Uncertainties in these have been calculated by a Monte Carlo sampling of the data to give the posterior probability density function (PDF) over the parameters. For each parameter three summary values of this PDF are reported: the median, the 5% lower bound, the 95% upper bound. The latter two give a 90% confidence interval. The table also reports the probability that each star approaches the Sun within 0.5, 1.0, and 2.0pc, as well as the measured parallax, proper motion, and radial velocity (plus uncertainties) of the stars. Table 3 in the article lists the first 20 lines of this data table (stars with median perihelion distances below 2pc). Some stars are duplicated in this table, i.e. there are rows with the same ID, but different data. Stars with problematic data have not been removed, so some encounters are not reliable. Most IDs are Tycho, but in a few cases they are Hipparcos. (1 data file).
Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures
ERIC Educational Resources Information Center
Prodromou, Theodosia
2016-01-01
In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…
Midthune, Douglas; Dodd, Kevin W.; Freedman, Laurence S.; Krebs-Smith, Susan M.; Subar, Amy F.; Guenther, Patricia M.; Carroll, Raymond J.; Kipnis, Victor
2007-01-01
Objective: We propose a new statistical method that uses information from two 24-hour recalls (24HRs) to estimate usual intake of episodically consumed foods. Statistical analyses performed: The method, developed at the National Cancer Institute (NCI), accommodates the large number of non-consumption days that arise with foods by separating the probability of consumption from the consumption-day amount, using a two-part model. Covariates, such as sex, age, race, or information from a food frequency questionnaire (FFQ), may supplement the information from two or more 24HRs using correlated mixed model regression. The model allows for correlation between the probability of consuming a food on a single day and the consumption-day amount. Percentiles of the distribution of usual intake are computed from the estimated model parameters. Results: The Eating at America's Table Study (EATS) data are used to illustrate the method to estimate the distribution of usual intake of whole grains and dark green vegetables for men and women, and the distribution of usual intakes of whole grains by educational level among men. A simulation study indicates that the NCI method leads to substantial improvement over existing methods for estimating the distribution of usual intake of foods. Applications/Conclusions: The NCI method provides distinct advantages over previously proposed methods by accounting for the correlation between probability of consumption and amount consumed and by incorporating covariate information. Researchers interested in estimating the distribution of usual intakes of foods for a population or subpopulation are advised to work with a statistician and incorporate the NCI method in analyses. PMID:17000190
Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.
Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J
2016-03-01
A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
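The best-known member of the independence-of-causal-influences family is the noisy-OR gate, which replaces an exponentially large CPT with one parameter per parent. A sketch of that baseline (the paper's PICI class generalizes beyond it):

```python
from itertools import product

def noisy_or_cpt(link_probs, leak=0.0):
    """Build the conditional probability table P(effect=True | parents)
    under the noisy-OR model. link_probs[i] is the probability that
    parent i, acting alone, produces the effect; `leak` is the
    probability of the effect with no active parents."""
    n = len(link_probs)
    cpt = {}
    for states in product([False, True], repeat=n):
        p_not = 1.0 - leak
        for active, p in zip(states, link_probs):
            if active:
                p_not *= 1.0 - p  # each active cause fails independently
        cpt[states] = 1.0 - p_not
    return cpt
```

The full table still has 2^n rows, but it is specified by only n + 1 numbers, which is the parameter-count saving the abstract describes.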
Anytime synthetic projection: Maximizing the probability of goal satisfaction
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.
1990-01-01
A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
Inequalities between Kappa and Kappa-Like Statistics for "k x k" Tables
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2010-01-01
The paper presents inequalities between four descriptive statistics that can be expressed in the form [P-E(P)]/[1-E(P)], where P is the observed proportion of agreement of a "kappa x kappa" table with identical categories, and E(P) is a function of the marginal probabilities. Scott's "pi" is an upper bound of Goodman and Kruskal's "lambda" and a…
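The shared form [P - E(P)]/[1 - E(P)] makes these statistics easy to compare numerically. A sketch of two of the four (Cohen's kappa and Scott's pi; table entries are joint proportions, not counts):

```python
def chance_corrected(po, pe):
    """Generic chance-corrected agreement: [P - E(P)] / [1 - E(P)]."""
    return (po - pe) / (1.0 - pe)

def cohen_kappa(table):
    """table: k x k joint proportions (rows: rater 1, columns: rater 2).
    Expected agreement uses the product of the two raters' marginals."""
    k = len(table)
    po = sum(table[i][i] for i in range(k))
    row = [sum(r) for r in table]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row[i] * col[i] for i in range(k))
    return chance_corrected(po, pe)

def scott_pi(table):
    """Scott's pi: expected agreement uses the averaged marginals."""
    k = len(table)
    po = sum(table[i][i] for i in range(k))
    row = [sum(r) for r in table]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    pe = sum(((row[i] + col[i]) / 2.0) ** 2 for i in range(k))
    return chance_corrected(po, pe)
```

The two statistics differ only in E(P), so inequalities between them reduce to inequalities between the expected-agreement terms, which is the device the paper exploits.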
40 CFR 455.50 - Identification of test procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methods cited and described in Table IG at 40 CFR 136.3(a). Pesticide manufacturers may not use the analytical method cited in Table IB, Table IC, or Table ID of 40 CFR 136.3(a) to make these determinations (except where the method cited in those tables is identical to the method specified in Table IG at 40 CFR...
Clarke, M G; Kennedy, K P; MacDonagh, R P
2009-01-01
To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers. This incorporated statistical spreadsheet and database access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE; 5-, 10-, and 15-year survival probability; in addition to generic UK population LE. Nineteen medical conditions, which impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice, were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program design represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
Proposal and Implementation of a Robust Sensing Method for DVB-T Signal
NASA Astrophysics Data System (ADS)
Song, Chunyi; Rahman, Mohammad Azizur; Harada, Hiroshi
This paper proposes a sensing method for TV signals of the DVB-T standard to realize effective TV White Space (TVWS) communication. In the TVWS technology trial organized by the Infocomm Development Authority (iDA) of Singapore, the sensing-level and sensing-time requirement is to detect a DVB-T signal at a level of -120 dBm over an 8 MHz channel with a sensing time below 1 second. To fulfill this strict requirement, we propose a smart sensing method that combines feature detection and energy detection (CFED) and is further characterized by dynamic threshold selection (DTS) from a threshold table to improve sensing robustness against noise uncertainty. The DTS-based CFED (DTS-CFED) is evaluated by computer simulations and is also implemented in a hardware sensing prototype. The results show that the DTS-CFED achieves a detection probability above 0.9 for a target false alarm probability of 0.1 for DVB-T signals at -120 dBm over an 8 MHz channel with a sensing time of 0.1 second.
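The two stages named in the abstract (table-driven threshold selection followed by a detection test) can be caricatured as follows; the actual CFED combines feature and energy detection, and the function names, table format, and bare energy detector here are illustrative assumptions, not the paper's implementation:

```python
def select_threshold(noise_power_db, threshold_table):
    """Dynamic threshold selection (DTS): pick the detection threshold
    matched to the estimated noise level from a precomputed table.
    threshold_table: list of (noise_power_db, threshold) pairs sorted
    by ascending noise level."""
    best = threshold_table[0][1]
    for level, thresh in threshold_table:
        if noise_power_db >= level:
            best = thresh
    return best

def energy_detect(samples, threshold):
    """Energy detection stage: declare a signal present if the average
    sample power exceeds the selected threshold."""
    power = sum(s * s for s in samples) / len(samples)
    return power > threshold
```

Looking the threshold up from a table rather than fixing it is what buys robustness against noise uncertainty: the decision level tracks the estimated noise floor.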
Pretreatment tables predicting pathologic stage of locally advanced prostate cancer.
Joniau, Steven; Spahn, Martin; Briganti, Alberto; Gandaglia, Giorgio; Tombal, Bertrand; Tosco, Lorenzo; Marchioro, Giansilvio; Hsu, Chao-Yu; Walz, Jochen; Kneitz, Burkhard; Bader, Pia; Frohneberg, Detlef; Tizzani, Alessandro; Graefen, Markus; van Cangh, Paul; Karnes, R Jeffrey; Montorsi, Francesco; van Poppel, Hein; Gontero, Paolo
2015-02-01
Pretreatment tables for the prediction of pathologic stage have been published and validated for localized prostate cancer (PCa). No such tables are available for locally advanced (cT3a) PCa. To construct tables predicting pathologic outcome after radical prostatectomy (RP) for patients with cT3a PCa with the aim to help guide treatment decisions in clinical practice. This was a multicenter retrospective cohort study including 759 consecutive patients with cT3a PCa treated with RP between 1987 and 2010. Retropubic RP and pelvic lymphadenectomy. Patients were divided into pretreatment prostate-specific antigen (PSA) and biopsy Gleason score (GS) subgroups. These parameters were used to construct tables predicting pathologic outcome and the presence of positive lymph nodes (LNs) after RP for cT3a PCa using ordinal logistic regression. In the model predicting pathologic outcome, the main effects of biopsy GS and pretreatment PSA were significant. A higher GS and/or higher PSA level was associated with a more unfavorable pathologic outcome. The validation procedure, using a repeated split-sample method, showed good predictive ability. Regression analysis also showed an increasing probability of positive LNs with increasing PSA levels and/or higher GS. Limitations of the study are the retrospective design and the long study period. These novel tables predict pathologic stage after RP for patients with cT3a PCa based on pretreatment PSA level and biopsy GS. They can be used to guide decision making in men with locally advanced PCa. Our study might provide physicians with a useful tool to predict pathologic stage in locally advanced prostate cancer that might help select patients who may need multimodal treatment. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Predicting the past: a simple reverse stand table projection method
Quang V. Cao; Shanna M. McCarty
2006-01-01
A stand table gives number of trees in each diameter class. Future stand tables can be predicted from current stand tables using a stand table projection method. In the simplest form of this method, a future stand table can be expressed as the product of a matrix of transitional proportions (based on diameter growth rates) and a vector of the current stand table. There...
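The forward form of the simple projection the authors invert can be sketched as a loop over diameter classes (the upgrowth proportions and boundary handling are illustrative; mortality and ingrowth are ignored):

```python
def project_stand_table(stand, upgrowth):
    """Project a stand table one growth period forward.
    stand[i]: number of trees in diameter class i;
    upgrowth[i]: proportion of class i that grows into class i+1
    during the period (the remainder stays in class i)."""
    n = len(stand)
    future = [0.0] * n
    for i, trees in enumerate(stand):
        moved = trees * upgrowth[i]
        future[i] += trees - moved
        if i + 1 < n:
            future[i + 1] += moved
        else:
            future[i] += moved  # top class retains its trees
    return future
```

Because each class only stays or moves up one class, the transition matrix is lower bidiagonal, and "predicting the past" amounts to inverting this sparse system.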
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
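A common parameterization of the quasi-periodic covariance, an exponentiated-sine-squared term carrying the rotation period multiplied by a squared-exponential decay for spot evolution, can be sketched as follows (the paper's exact hyperparameterization may differ):

```python
import math

def quasi_periodic_kernel(t1, t2, amp, period, length_p, length_e):
    """Quasi-periodic covariance between flux at times t1 and t2:
    a periodic term (period, periodic length scale length_p) damped by
    a squared-exponential envelope (evolution time scale length_e)."""
    dt = t1 - t2
    periodic = math.exp(-2.0 * math.sin(math.pi * dt / period) ** 2 / length_p ** 2)
    decay = math.exp(-0.5 * dt ** 2 / length_e ** 2)
    return amp ** 2 * periodic * decay
```

The envelope is what lets the model track the slow phase drift of spot-modulated light curves that a strict sinusoid cannot capture.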
ProbCD: enrichment analysis accounting for categorization uncertainty.
Vêncio, Ricardo Z N; Shmulevich, Ilya
2007-10-12
As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high-throughput datasets, current enrichment methods largely ignore this probabilistic information since they are mainly based on variants of the Fisher Exact Test. We developed ProbCD, an open-source R-based software package for probabilistic categorical data analysis that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli Scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/. We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning the enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of the high-throughput experimental techniques and (ii) probabilistic gene annotation.
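The expectation-based contingency table is easy to illustrate: under independent Bernoulli draws, each cell's expected count is a sum of products of the per-item probabilities. This is a hedged sketch of the idea, not the ProbCD source, and the argument names are invented:

```python
import numpy as np

def expected_contingency(p_category, p_selected):
    """Expected 2x2 contingency table under independent Bernoulli draws.

    p_category[i]: probability that item i carries the annotation.
    p_selected[i]: probability that item i is in the list of interest.
    Entries are expectations, so they need not be integers.
    """
    p_c = np.asarray(p_category, dtype=float)
    p_s = np.asarray(p_selected, dtype=float)
    n11 = np.sum(p_c * p_s)                # annotated and selected
    n10 = np.sum(p_c * (1.0 - p_s))        # annotated, not selected
    n01 = np.sum((1.0 - p_c) * p_s)        # not annotated, selected
    n00 = np.sum((1.0 - p_c) * (1.0 - p_s))
    return np.array([[n11, n10], [n01, n00]])
```

With hard 0/1 probabilities this reduces to the classic counted table, which is why the probabilistic version generalizes rather than replaces the standard analysis.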
Radial particle distributions in PARMILA simulation beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boicourt, G.P.
1984-03-01
The estimation of beam spill in particle accelerators is becoming of greater importance as higher current designs are being funded. To the present, no numerical method for predicting beam spill has been available. In this paper, we present an approach to the loss-estimation problem that uses probability distributions fitted to particle-simulation beams. The properties of the PARMILA code's radial particle distribution are discussed, and a broad class of probability distributions is examined to check its ability to fit it. The possibility that the PARMILA distribution is a mixture is discussed, and a fitting distribution consisting of a mixture of two generalized gamma distributions is found. An efficient algorithm to accomplish the fit is presented. Examples of the relative prediction of beam spill are given. 26 references, 18 figures, 1 table.
JMAT 2.0 Operating Room Requirements Estimation Study
2011-05-25
Health Research Center, 140 Sylvester Rd., San Diego, CA 92106-3521. Report No. 11-10J, supported by the Office of the Assistant... (a) an expected-value methodology for estimating OR requirements in a theater hospital; (b) algorithms for estimating a special case OR table requirement...assuming the probabilities of entering the OR are either 1 or 0; and (c) an Excel worksheet that calculates the special case OR table estimates
[Survival functions and life tables at the origins of actuarial mathematics].
Spelta, D
1997-01-01
"In the determination of death probabilities of an insured subject one can use either statistical data or a mathematical function. In this paper a survey of the relationship between mortality tables and survival functions from the origins until the first half of the nineteenth century is presented. The author has tried to find the methodological grounds which have induced the actuaries to prefer either of these tools." (EXCERPT)
Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages
NASA Technical Reports Server (NTRS)
Summers, R. L.
1969-01-01
A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.
Modeling the probability distribution of peak discharge for infiltrating hillslopes
NASA Astrophysics Data System (ADS)
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a set of parameters, the derived probability distribution of peak discharge seems to be well fitted by the gamma distribution. Finally, the GABS model was applied to a small watershed to test the possibility of arranging in advance the rational runoff coefficient tables used in the rational method, and peak discharges obtained by the GABS model were compared with those measured in an experimental flume for a loamy-sand soil.
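The Green-Ampt component of such a coupling can be sketched as a small fixed-point solver for the implicit cumulative-infiltration equation F - psi*dtheta*ln(1 + F/(psi*dtheta)) = Ks*t. This is a minimal sketch with illustrative, not site-calibrated, parameter values:

```python
import math

def green_ampt_F(t, Ks, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) [cm] from the implicit Green-Ampt equation,
    solved by fixed-point iteration F <- Ks*t + s*ln(1 + F/s), s = psi*dtheta."""
    if t <= 0:
        return 0.0
    s = psi * dtheta
    F = Ks * t                       # initial guess: gravity-only infiltration
    for _ in range(200):
        F_new = Ks * t + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

def green_ampt_rate(F, Ks, psi, dtheta):
    """Potential infiltration rate f = Ks * (1 + psi*dtheta / F)."""
    return Ks * (1.0 + psi * dtheta / F)
```

The iteration is a contraction (its derivative s/(s+F) is below 1), so it converges for any positive starting guess; the resulting rate curve is the input the kinematic-wave step would consume.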
Reasons and correlates of contraceptive discontinuation in Kuwait.
Shah, N M; Shah, M A; Chowdhury, R I; Menon, I
2007-09-01
(1) To examine the probability of discontinuation of various methods within 1, 2, and 3 years of use and the reasons for discontinuation; (2) to analyse the socio-demographic correlates of discontinuation. Data from a survey of Kuwaiti women of reproductive age conducted in 1999 were used. Information on duration of use of modern and traditional methods, and reasons for discontinuation during the 72 months before the survey, was analysed. Probabilities of discontinuation were estimated through multiple-decrement life table analysis. After 1 year, 30% of modern and 40% of traditional method users had discontinued; after 3 years, discontinuation increased to 66% and 70%, respectively. After 36 months, only 40% of IUD users had discontinued, compared with 74% of oral contraceptive users. The desire to become pregnant was the leading reason for discontinuation of most modern methods, while method failure was an equally important reason for traditional methods. Discontinuation was significantly more frequent among higher-parity, non-working and Bedouin women, and among those who said Islam disapproves of contraception. Contraception is used largely for spacing. More than two-thirds of the women studied had discontinued most methods after three years, except the IUD, which was used by only about 10% of them. Traditional methods are often discontinued due to method failure and may result in an unintended pregnancy. Better counselling is warranted for traditional methods. Health care for managing side effects of modern methods also needs improvement.
ERIC Educational Resources Information Center
Nelson, Jonathan D.
2007-01-01
Reports an error in "Finding Useful Questions: On Bayesian Diagnosticity, Probability, Impact, and Information Gain" by Jonathan D. Nelson (Psychological Review, 2005[Oct], Vol 112[4], 979-999). In Table 13, the data should indicate that 7% of females had short hair and 93% of females had long hair. The calculations and discussion in the article…
Xiao, Wen-Jun; Ye, Ding-Wei; Yao, Xu-Dong; Zhang, Shi-Lin; Dai, Bo
2013-01-01
To compare Partin tables (PTs) 1997, 2001, and 2007 for their clinical applicability in a Chinese cohort based upon a decision curve analysis (DCA). Clinical and pathologic data of 264 consecutive Chinese patients with clinically localized prostate cancer were used. These patients underwent open radical prostatectomy between 2005 and 2011. DCA quantified the net benefit of the different PT versions relative to specific threshold probabilities of established capsular penetration (ECP), seminal vesicle involvement (SVI), and lymph node involvement (LNI). Overall, ECP, SVI, and LNI were recorded in 23.1, 10.2, and 6.1% of cases, respectively. When the threshold probability was below the prevalence for LNI and ECP predictions, the DCA favored the 2007 version over the 1997 version for SVI. DCA indicates that for low threshold probabilities, decision models are useful for discriminating the performance differences of the three PT versions, although net benefit differences were not apparent. For high threshold probabilities, there may not be an important benefit from the use of PTs, and the current analysis cannot translate into meaningful net gain differences. Copyright © 2013 S. Karger AG, Basel.
ERIC Educational Resources Information Center
Tuttle, Christina Clark; Teh, Bing-ru; Nichols-Barrer, Ira; Gill, Brian P.; Gleason, Philip
2010-01-01
In this set of four supplemental tables, the authors compare the baseline test scores of the treatment and matched control group samples observed in each year after KIPP entry (outcome years 1 to 4). As discussed in Chapter III, the authors used an iterative propensity score estimation procedure to calculate each student's probability of entering…
[Construction of abridged life table for health evaluation of local resident using Excel program].
Chen, Qingsha; Wang, Feng; Li, Xiaozhen; Yang, Jian; Yu, Shouyi; Hu, Jun
2012-05-01
To provide an easy computational tool for evaluating the health condition of local residents. An abridged life table was programmed by applying mathematical functions and formulas in the Excel program and tested with real study data to evaluate the computed results. The Excel program was capable of computing the life-table probability of death in each age group (nqx), the number of survivors (lx), the number of deaths (ndx), the person-years lived (nLx), the total person-years lived above age x (Tx), and the life expectancy (ex). The calculated results were consistent with those from SAS. The abridged life table constructed using Microsoft Excel can conveniently and accurately calculate the relevant indices for evaluating the health condition of the residents.
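The same columns can be computed outside Excel. Below is a simplified Python sketch; the nLx approximations here (mid-interval averaging for closed age groups, and an assumed fixed mean number of years lived in the open-ended group) are the sketch's own assumptions, not the paper's exact formulas:

```python
def abridged_life_table(ages, nqx, radix=100000.0, open_interval_years=5.0):
    """Abridged life-table columns from age-group death probabilities nqx.

    ages: starting age of each group; the last group is open-ended (its nqx = 1).
    nLx uses the mid-interval approximation n*(lx + l(x+n))/2 for closed groups,
    and an assumed mean of `open_interval_years` lived in the open-ended group.
    """
    k = len(ages)
    lx, ndx = [radix], []
    for q in nqx:                              # survivors and deaths per group
        d = lx[-1] * q
        ndx.append(d)
        lx.append(lx[-1] - d)
    nLx = []
    for i in range(k):                         # person-years lived per group
        if i + 1 < k:
            n = ages[i + 1] - ages[i]
            nLx.append(n * (lx[i] + lx[i + 1]) / 2.0)
        else:
            nLx.append(lx[i] * open_interval_years)   # crude open-interval assumption
    Tx = [sum(nLx[i:]) for i in range(k)]      # person-years lived above age x
    ex = [Tx[i] / lx[i] for i in range(k)]     # life expectancy at age x
    return {"lx": lx[:-1], "ndx": ndx, "nLx": nLx, "Tx": Tx, "ex": ex}
```

With the last nqx set to 1, the deaths column sums to the radix and ex in the open interval equals the assumed mean years lived, which is a quick internal consistency check.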
An Efficient Algorithm for the Detection of Infrequent Rapid Bursts in Time Series Data
NASA Astrophysics Data System (ADS)
Giles, A. B.
1997-01-01
Searching through data for infrequent rapid bursts is a common requirement in many areas of scientific research. In this paper, we present a powerful and flexible analysis method that, in a single pass through the data, searches for statistically significant bursts on a set of specified short timescales. The input data are binned, if necessary, and then quantified in terms of probabilities rather than rates or ratios. Using probability as the measure makes the method relatively count-rate independent. The method has been made computationally efficient by the use of lookup tables and cyclic buffers, and it is therefore particularly well suited to real-time applications. The technique has been developed specifically for use in an X-ray astronomy application to search for millisecond bursts from black hole candidates such as Cyg X-1. We briefly review the few observations of these types of features reported in the literature, as well as the variety of ways in which their statistical reliability was challenged. The developed technique, termed the burst expectation search (BES) method, is illustrated using some data simulations and archived data obtained during ground testing of the proportional counter array (PCA) experiment detectors on the Rossi X-Ray Timing Explorer (RXTE). A potential application for a real-time BES method on board RXTE is also examined.
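The core of such a search, converting windowed counts to tail probabilities via a precomputed lookup table, can be sketched as follows. This is an illustrative reimplementation under a Poisson-background assumption, not the actual BES code; the running window sum stands in for the cyclic buffer:

```python
import math

def poisson_tail(n, mu):
    """P(X >= n) for X ~ Poisson(mu), via direct summation of the complement."""
    if n <= 0:
        return 1.0
    term, cdf = math.exp(-mu), math.exp(-mu)
    for k in range(1, n):
        term *= mu / k
        cdf += term
    return max(0.0, 1.0 - cdf)

def burst_search(counts, mu, window, p_threshold):
    """Flag window start indices whose summed counts are improbably high.

    A lookup table of tail probabilities, indexed by the summed counts,
    replaces repeated probability evaluations inside the scan loop.
    """
    max_sum = sum(sorted(counts, reverse=True)[:window])
    table = [poisson_tail(n, mu * window) for n in range(max_sum + 1)]
    flags = []
    s = sum(counts[:window])
    if table[s] < p_threshold:
        flags.append(0)
    for i in range(1, len(counts) - window + 1):
        s += counts[i + window - 1] - counts[i - 1]   # slide the window
        if table[s] < p_threshold:
            flags.append(i)
    return flags
```

Because the table is indexed by an integer count sum, the per-bin cost of the scan is a single comparison, which is what makes a real-time, single-pass search practical.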
Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.
2008-01-01
Maps showing the probability of surface manifestations of liquefaction in the northern Santa Clara Valley were prepared with liquefaction probability curves. The area includes the communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale. The probability curves were based on complementary cumulative frequency distributions of the liquefaction potential index (LPI) for surficial geologic units in the study area. LPI values were computed with extensive cone penetration test soundings. Maps were developed for three earthquake scenarios, an M7.8 on the San Andreas Fault comparable to the 1906 event, an M6.7 on the Hayward Fault comparable to the 1868 event, and an M6.9 on the Calaveras Fault. Ground motions were estimated with the Boore and Atkinson (2008) attenuation relation. Liquefaction is predicted for all three events in young Holocene levee deposits along the major creeks. Liquefaction probabilities are highest for the M7.8 earthquake, ranging from 0.33 to 0.37 if a 1.5-m deep water table is assumed, and 0.10 to 0.14 if a 5-m deep water table is assumed. Liquefaction probabilities of the other surficial geologic units are less than 0.05. Probabilities for the scenario earthquakes are generally consistent with observations during historical earthquakes.
Risk assessment of groundwater level variability using variable Kriging methods
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina; Kampanis, Nikolaos A.
2015-04-01
Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in the Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third calculates, by means of Indicator Kriging, the probability of the groundwater level falling below a predefined minimum value that could cause significant problems in groundwater resources availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal and, in combination with the Kriging methodologies, provides the most accurate cross-validation estimates. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the associated risk that certain locations exhibit with respect to a predefined minimum value set for the sustainability of the basin's groundwater resources.
Acknowledgement: The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone' (2013-2015).
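A minimal sketch of the Kriging machinery behind such maps, assuming a Matérn 3/2 covariance and hypothetical coordinates (this is not the authors' Matlab code). Applying the same solver to the indicator transform I(z < z_min) yields the below-threshold probability used in Indicator Kriging:

```python
import numpy as np

def matern32_cov(h, sill=1.0, scale=1000.0):
    """Matérn covariance with smoothness 3/2; `scale` is a length parameter."""
    a = np.sqrt(3.0) * h / scale
    return sill * (1.0 + a) * np.exp(-a)

def ordinary_kriging(xy, z, xy0, sill=1.0, scale=1000.0):
    """Ordinary Kriging prediction at xy0, with a Lagrange multiplier enforcing
    the unbiasedness (weights-sum-to-one) constraint."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = matern32_cov(d, sill, scale)
    k0 = matern32_cov(np.linalg.norm(xy - xy0, axis=1), sill, scale)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = K
    A[n, n] = 0.0
    b = np.append(k0, 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)
```

Ordinary Kriging is an exact interpolator: predicting at a measured location returns the measured value, which serves as a quick sanity check before cross-validation.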
Water table dynamics in undisturbed, drained and restored blanket peat
NASA Astrophysics Data System (ADS)
Holden, J.; Wallage, Z. E.; Lane, S. N.; McDonald, A. T.
2011-05-01
Peatland water table depth is an important control on runoff production, plant growth and carbon cycling. Many peatlands have been drained but are now subject to activities that might lead to their restoration, including the damming of artificial drains. This paper investigates water table dynamics on intact, drained and restored peatland slopes in a blanket peat in northern England using transects of automated water table recorders. Long-term (18 month), seasonal and short-term (storm event) records are explored. The restored site had drains blocked 6 years prior to monitoring commencing. The spatially-weighted mean water table depths over an 18 month period were -5.8 cm, -8.9 cm and -11.5 cm at the intact, restored and drained sites respectively. Most components of water table behaviour at the restored site, including depth exceedance probability curves, seasonality of water table variability, and water table responses to individual rainfall events, were intermediate between those of the drained and intact sites. Responses also depended on location with respect to the drains. The results show that restoration of drained blanket peat is difficult and that water table dynamics may not function in the same way as those in undisturbed blanket peat even many years after management intervention. Further measurements of hydrological processes and water table responses to peatland restoration are required to inform land managers of the hydrological success of such projects.
Kedelski, M; Golata, E
1986-01-01
Official Polish data for the period 1982-1984 are used to construct multiple decrement tables of changes in marital status for the population of a hypothetical cohort over the course of its life history. The data are analyzed separately by sex with respect to the probabilities of change in marital status, the characteristics of the life cycle, and the expectation of life by marital status category. (SUMMARY IN ENG AND RUS)
[METHOD FOR DETERMINING EROSIVE LESIONS OF THE GASTRIC MUCOUSA IN CHILDREN WITH JUVENILE ARTHRITIS].
Listopadova, A P; Novikova, V P; Melnikova, I U; Petrovskiy, A N; Slizovskiy, N V
2015-01-01
To identify clinical criteria for the non-invasive diagnosis of erosive gastritis in children with juvenile arthritis, 92 children aged 9 to 16 years (mean age 13.9 ± 2.3 years) with a verified diagnosis of juvenile arthritis were studied, of whom 10 had erosive gastritis (group 1) and 82 had no erosions (group 2). By comparing the groups on 23 characteristics using contingency-table analysis followed by discriminant analysis, a new non-invasive method was developed for detecting erosive lesions of the mucous membrane of the stomach in children with juvenile arthritis. The method combines scores for history, complaints, and laboratory results: the levels of G-17, pepsinogen I and pepsinogen II, the ratio of pepsinogen I to pepsinogen II, the presence of autoantibodies to the H+, K+/ATPase of the parietal cells of the stomach, and the occult blood test "Colon View Hb and Hb/Hp". A diagnostic table was developed, comprising 11 features, each with its own score. A total score of 27 or higher indicates, with a high degree of probability, erosive lesions of the gastric mucosa in children with juvenile arthritis.
NASA Astrophysics Data System (ADS)
Zhang, Y. M.; Evans, J. R. G.; Yang, S. F.
2010-11-01
The authors have discovered a systematic, intelligent and potentially automatic method to detect errors in handbooks and stop their transmission using unrecognised relationships between materials properties. The scientific community relies on the veracity of scientific data in handbooks and databases, some of which have a long pedigree covering several decades. Although various outlier-detection procedures are employed to detect and, where appropriate, remove contaminated data, errors, which had not been discovered by established methods, were easily detected by our artificial neural network in tables of properties of the elements. We started using neural networks to discover unrecognised relationships between materials properties and quickly found that they were very good at finding inconsistencies in groups of data. They reveal variations from 10 to 900% in tables of property data for the elements and point out those that are most probably correct. Compared with the statistical method adopted by Ashby and co-workers [Proc. R. Soc. Lond. Ser. A 454 (1998) p. 1301, 1323], this method locates more inconsistencies and could be embedded in database software for automatic self-checking. We anticipate that our suggestion will be a starting point to deal with this basic problem that affects researchers in every field. The authors believe it may eventually moderate the current expectation that data field error rates will persist at between 1 and 5%.
40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
Table 3 of Subpart AAAAAAA of Part 63—Test Methods (40 CFR, Protection of Environment).
For * * *: 1. Selecting the sampling locations and the number of traverse points.
You must use * * *: EPA test method 1 or 1A in...
Management of Hip Fractures in Lateral Position without a Fracture Table
Pahlavanhosseini, Hamid; Valizadeh, Sima; Banadaky, Seyyed Hossein Saeed; Karbasi, Mohammad H Akhavan; Abrisham, Seyed Mohammad J; Fallahzadeh, Hossein
2014-01-01
Background: Hip fracture management in the supine position on a fracture table with biplane fluoroscopic views has some difficulties, which prolong surgery and increase the X-ray dose. The purpose of this study was to report the results and complications of hip fracture management in the lateral position on a conventional operating table with only an anteroposterior fluoroscopic view. Methods: 40 hip fractures (31 trochanteric and 9 femoral neck fractures) were operated on in the lateral position between Feb 2006 and Oct 2012. Age, gender, fracture classification, operation time, intra-operative blood loss, reduction quality, and complications were extracted from the patients' medical records. The mean follow-up time was 30.78 ± 22.73 months (range 4-83). Results: The mean operation time was 76.50 ± 16.88 min (range 50-120 min). The mean intra-operative blood loss was 628.75 ± 275.00 ml (range 250-1300 ml). Anatomic or acceptable reduction was observed in 95% of cases. The most important complications were malunion (one case in the trochanteric group), and avascular necrosis of the femoral head and nonunion (one case each in the femoral neck group). Conclusions: Reduction and fixation of hip fractures in the lateral position, with fluoroscopy in only the anteroposterior view, may be feasible and probably safe for small rural hospitals. PMID:25386577
40 CFR Table C-3 to Subpart C of... - Test Specifications for Pb in TSP and Pb in PM10 Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM10 Methods (40 CFR, Protection of Environment)...
An Analysis of Class II Supplies Requisitions in the Korean Army’s Organizational Supply
2009-03-26
five methods for qualitative research: case study, ethnography, phenomenological study, grounded theory, and...Approaches. Table 9, Five Qualitative Research Methods; Table 10, Six...Content analysis. Table 9 provides a brief overview of the five methods.
Bayes’ theorem, the ROC diagram and reference values: Definition and use in clinical diagnosis
Kallner, Anders
2017-01-01
Medicine is diagnosis, treatment and care. To diagnose is to consider the probability of the cause of discomfort experienced by the patient. The physician may face many options, and all decisions are liable to uncertainty to some extent. The rational action is to perform selected tests and thereby increase the pre-test probability to reach a superior post-test probability of a particular option. To draw the right conclusions from a test, certain background information about the performance of the test is necessary. We set up a partially artificial dataset with measured results obtained from the laboratory information system and simulated diagnoses attached. The dataset is used to explore the use of contingency tables with a unique graphic design and software to establish and compare ROC graphs. The loss of information in the ROC curve is compensated for by a cumulative data analysis (CDA) plot linked to a display of the efficiency and predictive values. A standard for the contingency table is suggested and the use of dynamic reference intervals discussed. PMID:29209139
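The pre-test-to-post-test update described here is compactly expressed with likelihood ratios derived from sensitivity and specificity. A small sketch (the function name and the 10% prevalence example are illustrative, not from the paper):

```python
def post_test_probability(pre_test, sensitivity, specificity, positive=True):
    """Update a pre-test probability with a test result via Bayes' theorem.

    A positive result applies LR+ = sens / (1 - spec);
    a negative result applies LR- = (1 - sens) / spec.
    Probabilities are converted to odds, multiplied by the likelihood
    ratio, and converted back.
    """
    odds = pre_test / (1.0 - pre_test)
    if positive:
        lr = sensitivity / (1.0 - specificity)
    else:
        lr = (1.0 - sensitivity) / specificity
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)
```

For example, a pre-test probability of 0.10 with sensitivity = specificity = 0.90 gives LR+ = 9 and a post-test probability of 0.50 after a positive result, while a negative result drops it to about 0.012.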
An automated approach to the design of decision tree classifiers
NASA Technical Reports Server (NTRS)
Argentiero, P.; Chin, P.; Beaudet, P.
1980-01-01
The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of .76, compared to the theoretically optimum .79 probability of correct classification associated with a full-dimensional Bayes classifier. Recommendations for future research are included.
United States life tables eliminating certain causes of death, 1999-2001.
Arias, Elizabeth; Heron, Melonie; Tejada-Vera, Betzaida
2013-05-31
This report presents abridged cause-elimination life tables and multiple-decrement life table functions for 33 selected causes of death, by race (white and black) and sex, for the total United States. It is the fourth in a set of reports that present life table data for the United States and each state for the period 1999-2001. The life table functions presented in this report represent the mortality experience of a hypothetical cohort assuming that a particular cause of death is eliminated. The report includes a description of the methodology used to estimate the life table functions shown in four sets of tables. Each set contains seven tables, one each for the total population, total males, total females, white males, white females, black males, and black females. From birth, a person has a 31% chance of dying of Diseases of heart (heart disease) and a 22% chance of dying of Malignant neoplasms (cancer). In contrast, the probabilities of dying from Accidents (unintentional injuries), Diabetes mellitus (diabetes), and Septicemia--3 of the 10 leading causes of death in 1999-2001--are much smaller. Likewise, elimination of heart disease would increase life expectancy at birth by almost 4 years, and elimination of cancer by more than 3 years. Other leading causes of death have a much smaller impact.
1986-08-01
Mean square errors for selected variables; variable range and mean value for MCC and non-MCC cases (Table 8); alpha (a) levels at which the two mean values are determined to be significantly different (Table 9). For each variable, Table 9 lists the a level at which the two mean values are determined to be significantly different (e.g., vorticity advection, none; 700 mb vertical velocity forecast, .20). These a levels express the probability of erroneously concluding that the
Realistic Fireteam Movement in Urban Environments
2010-10-01
...is largely consumed by the data transfer of the color and stencil buffers from the GPU to the CPU. Since this operation would only need to be...cost is given in Table 4.

Table 4: Threat Probability Model update cost (Intel Q6600)
Waypoints | Mean | Std Dev
1112 | 1.25 ms | 0.09 ms
3785 | 4.07 ms | 0.20 ms
A New Compression Method for FITS Tables
NASA Technical Reports Server (NTRS)
Pence, William; Seaman, Rob; White, Richard L.
2010-01-01
As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
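The gain from compressing a binary table column by column, rather than as one row-interleaved stream, can be sketched generically. This illustration uses zlib on NumPy arrays and is not the FITS convention's actual on-disk format:

```python
import zlib
import numpy as np

def compress_by_column(columns):
    """Compress each column of a binary table separately (the tiled-table idea:
    bytes within one column are homogeneous and compress well together)."""
    return [zlib.compress(np.ascontiguousarray(c).tobytes()) for c in columns]

def compress_row_major(columns):
    """Compress the row-interleaved table as one stream, analogous to
    gzipping the whole file."""
    rec = np.rec.fromarrays(columns)
    return zlib.compress(rec.tobytes())
```

Per-column streams also support the convention's third advantage noted above: each column (or table) can be decompressed independently with `zlib.decompress`, without touching the rest of the data.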
Mohammed, Riazuddin; Johnson, Karl; Bache, Ed
2010-07-01
Multiple radiographic images may be necessary during the standard procedure of in-situ pinning of slipped capital femoral epiphysis (SCFE) hips. This procedure can be performed with the patient positioned on a fracture table or a radiolucent table. Our study aims to identify any differences in the amount and duration of radiation exposure for in-situ pinning of SCFE performed using a traction table or a radiolucent table. Sixteen hips in thirteen patients pinned on a radiolucent table were compared, in terms of cumulative radiation exposure, with 35 hips pinned on a fracture table in 33 patients during the same time period. Cumulative radiation dose was measured as dose area product in Gray-centimeter2 and the duration of exposure was measured in minutes. Appropriate statistical tests were used to test the significance of any differences. The mean cumulative radiation dose for SCFE pinned on the radiolucent table was statistically lower than for those pinned on the fracture table (P<0.05). The mean duration of radiation exposure on either table was not significantly different. Lateral projections may increase the radiation dose compared with anteroposterior projections because of the higher exposure parameters needed for side imaging. Our results showing decreased exposure doses on the radiolucent table are probably due to the ease of obtaining a frog-leg lateral position and thereby the ease of lateral imaging. In-situ pinning of SCFE hips on a radiolucent table has the additional advantage that the radiation dose during the procedure is significantly less than when the procedure is performed on a fracture table.
NASA Astrophysics Data System (ADS)
Şimşek, Ö.; Karagöz, D.; Ertugrul, M.
2003-10-01
The K to L shell vacancy transfer probabilities for nine elements in the atomic number range 46≤Z≤55 were determined by measuring the L X-ray yields from targets excited by 5.96 and 59.5 keV photons and using the theoretical K and L shell photoionization cross-sections. The L X-rays from the different targets were detected with an Ultra-LEGe detector with a very thin polymer window. The present experimental results were compared with the semiempirical values tabulated by Rao et al. [Atomic vacancy distributions produced by inner-shell ionization, Phys. Rev. A 5 (1972) 997-1002] and with values calculated theoretically using radiative and radiationless transitions. The radiative transitions of these elements were obtained from the relativistic Hartree-Slater model proposed by Scofield [Relativistic Hartree-Slater values for K and L shell X-ray emission rates, At. Data Nucl. Data Tables 14 (1974) 121-137]. The radiationless transitions were obtained from the Dirac-Hartree-Slater model proposed by Chen et al. [Relativistic radiationless transition probabilities for atomic K- and L-shells, At. Data Nucl. Data Tables 24 (1979) 13-37]. To the best of our knowledge, these vacancy transfer probabilities are reported for the first time.
VizieR Online Data Catalog: SFiNCs: X-ray, IR and membership catalogs (Getman+, 2017)
NASA Astrophysics Data System (ADS)
Getman, K. V.; Broos, P. S.; Kuhn, M. A.; Feigelson, E. D.; Richert, A. J. W.; Ota, Y.; Bate, M. R.; Garmire, G. P.
2017-06-01
Sixty-five X-ray observations for the 22 Star Formation in Nearby Clouds (SFiNCs) star-forming regions (SFRs) (see tables 1 and 2), made with the imaging array on the Advanced CCD Imaging Spectrometer (ACIS), were pulled from the Chandra archive (spanning 2000 Jan to 2015 Apr; see table 2). Our final Chandra-ACIS catalog for the 22 SFiNCs SFRs comprises 15364 X-ray sources (Tables 3 and 4 and section 3.2). To obtain MIR photometry for X-ray objects and to identify and measure MIR photometry for additional non-Chandra disky stars that were missed in previous studies of the SFiNCs regions (typically faint YSOs), we have reduced the archived Spitzer-IRAC data by homogeneously applying the MYStIX-based Spitzer-IRAC data reduction methods of Kuhn+ (2013, J/ApJS/209/29) to the 423 Astronomical Observation Request (AOR) data sets for the 22 SFiNCs SFRs (Table 5). As in MYStIX, the SFiNCs IRAC source catalog retains all point sources with a photometric signal-to-noise ratio >5 in both the [3.6] and [4.5] channels. This catalog covers the 22 SFiNCs SFRs and their vicinities on the sky and comprises 1638654 IRAC sources with available photometric measurements for 100%, 100%, 29%, and 23% of these sources in the 3.6, 4.5, 5.8, and 8.0um bands, respectively (see table 6 and section 3.4). Source position cross correlations between the SFiNCs Chandra X-ray source catalog and an IR catalog, either the "cut-out" IRAC or 2MASS, were made using the steps described in section 3.5. Tables 7 and 8 provide the list of 8492 SFiNCs probable cluster members (SPCMs) and their main IR and X-ray properties (see section 4). (9 data files).
The oilspill risk analysis model of the U. S. Geological Survey
Smith, R.A.; Slack, J.R.; Wyant, Timothy; Lanfear, K.J.
1982-01-01
The U.S. Geological Survey has developed an oilspill risk analysis model to aid in estimating the environmental hazards of developing oil resources in Outer Continental Shelf (OCS) lease areas. The large, computerized model analyzes the probability of spill occurrence, as well as the likely paths or trajectories of spills in relation to the locations of recreational and biological resources which may be vulnerable. The analytical methodology can easily incorporate estimates of weathering rates, slick dispersion, and possible mitigating effects of cleanup. The probability of spill occurrence is estimated from information on the anticipated level of oil production and method and route of transport. Spill movement is modeled in Monte Carlo fashion with a sample of 500 spills per season, each transported by monthly surface current vectors and wind velocities sampled from 3-hour wind transition matrices. Transition matrices are based on historic wind records grouped in 41 wind velocity classes, and are constructed seasonally for up to six wind stations. Locations and monthly vulnerabilities of up to 31 categories of environmental resources are digitized within an 800,000 square kilometer study area. Model output includes tables of conditional impact probabilities (that is, the probability of hitting a target, given that a spill has occurred), as well as probability distributions for oilspills occurring and contacting environmental resources within preselected vulnerability time horizons. (USGS)
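The Monte Carlo spill-movement step can be sketched in miniature. Everything numeric below is invented for illustration (the model itself uses 41 wind classes, monthly surface currents, and digitized resource locations):

```python
import random

random.seed(1)

# Three invented wind classes (the real model uses 41) with a 3-hour
# transition matrix: P[i][j] = probability of moving from class i to j.
P = [
    [0.7, 0.2, 0.1],   # calm
    [0.3, 0.5, 0.2],   # moderate
    [0.1, 0.4, 0.5],   # strong
]
drift = [(0.0, 0.0), (1.0, 0.5), (2.5, 1.0)]  # invented km per 3-h step

def trajectory(steps, state=0):
    """Advect one spill by sampling the wind chain for `steps` intervals."""
    x = y = 0.0
    for _ in range(steps):
        state = random.choices(range(3), weights=P[state])[0]
        dx, dy = drift[state]
        x, y = x + dx, y + dy
    return x, y

# A season's sample of 500 spills, as in the USGS model.
endpoints = [trajectory(steps=40) for _ in range(500)]

# Conditional impact probability for a crude stand-in "target":
# the fraction of spills drifting past x = 30 km, given a spill occurred.
hits = sum(1 for x, y in endpoints if x > 30.0)
print(f"conditional impact probability ~ {hits / 500:.2f}")
```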
The oilspill risk analysis model of the U. S. Geological Survey
Smith, R.A.; Slack, J.R.; Wyant, T.; Lanfear, K.J.
1980-01-01
The U.S. Geological Survey has developed an oilspill risk analysis model to aid in estimating the environmental hazards of developing oil resources in Outer Continental Shelf (OCS) lease areas. The large, computerized model analyzes the probability of spill occurrence, as well as the likely paths or trajectories of spills in relation to the locations of recreational and biological resources which may be vulnerable. The analytical methodology can easily incorporate estimates of weathering rates, slick dispersion, and possible mitigating effects of cleanup. The probability of spill occurrence is estimated from information on the anticipated level of oil production and method and route of transport. Spill movement is modeled in Monte Carlo fashion with a sample of 500 spills per season, each transported by monthly surface current vectors and wind velocities sampled from 3-hour wind transition matrices. Transition matrices are based on historic wind records grouped in 41 wind velocity classes, and are constructed seasonally for up to six wind stations. Locations and monthly vulnerabilities of up to 31 categories of environmental resources are digitized within an 800,000 square kilometer study area. Model output includes tables of conditional impact probabilities (that is, the probability of hitting a target, given that a spill has occurred), as well as probability distributions for oilspills occurring and contacting environmental resources within preselected vulnerability time horizons. (USGS)
Influence of level of education on disability free life expectancy by sex: the ILSA study.
Minicuci, N; Noale, M
2005-12-01
To assess the effect of education on Disability Free Life Expectancy among older Italians, using a hierarchical model as an indicator of disability, with estimates based on the multistate life table method and the IMaCh software. Data were obtained from the Italian Longitudinal Study on Aging, which considered a random sample of 5632 individuals. Total life expectancy ranged from 16.5 years for men aged 65 to 6 years for men aged 80; the corresponding figures for women were 19.6 and 8.4 years. For both sexes, increasing age was associated with a lower probability of recovery from a mild state of disability, with a greater probability of worsening for all individuals presenting an independent state at baseline, and with a greater probability of dying, except for women in a mild state of disability. A medium/high educational level was associated with a greater probability of recovery only in men with a mild state of disability at baseline, and with a lower probability of worsening in both sexes, except for men with a mild state of disability at baseline. The positive effects of high education are well established in most research work; since education is a modifiable factor, strategies focused on increasing the level of education, and hence strengthening access to information and use of health services, would produce significant benefits.
Optimizing exoplanet transit searches around low-mass stars with inclination constraints
NASA Astrophysics Data System (ADS)
Herrero, E.; Ribas, I.; Jordi, C.; Guinan, E. F.; Engle, S. G.
2012-01-01
Aims: We investigate a method to increase the efficiency of a targeted exoplanet search with the transit technique by preselecting a subset of candidates from large catalogs of stars. Assuming spin-orbit alignment, this can be achieved by considering stars that have a higher probability of being oriented nearly equator-on (inclination close to 90°). Methods: We used activity-rotation velocity relations for low-mass stars with a convective envelope to study the dependence of the position in the activity-vsini diagram on the inclination of the stellar axis. We composed a catalog of simulated G-, K-, and M-type main-sequence stars using isochrones, an isotropic inclination distribution, and empirical relations to obtain their rotation periods and activity indexes. The activity-vsini diagram was then completed and statistics were applied to trace the areas containing the highest ratio of stars with inclinations above 80°. Similar statistics were applied to stars from real catalogs with log(R'HK) and vsini data to find their probability of being oriented equator-on. Results: We present our method to generate the simulated star catalog and the subsequent statistics to find the highly inclined stars in real catalogs using the activity-vsini diagram. Several catalogs from the literature are analyzed and a subsample of stars with the highest probability of being equator-on is presented. Conclusions: Assuming spin-orbit alignment, the efficiency of an exoplanet transit search in the resulting subsample of probably highly inclined stars is estimated to be two to three times higher than that of a general search without preselection. Table 4 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/537/A147
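The isotropic-inclination assumption behind the simulated catalog implies that cos(i) is uniformly distributed, so the fraction of stars seen nearly equator-on (i > 80°) is cos(80°) ≈ 0.17. A quick Monte Carlo sketch (our own illustration, not the authors' code):

```python
import math
import random

random.seed(0)

# For isotropically oriented spin axes, cos(i) is uniform on [0, 1],
# so P(i > 80 deg) = cos(80 deg) ~ 0.174.
N = 200_000
inclinations = [math.degrees(math.acos(random.random())) for _ in range(N)]
frac = sum(1 for i in inclinations if i > 80.0) / N

analytic = math.cos(math.radians(80.0))
print(f"sampled fraction with i > 80 deg: {frac:.3f} (analytic {analytic:.3f})")
```

This is why preselection pays off: without it, only about one star in six is close enough to equator-on for a transit to be likely under spin-orbit alignment.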
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Roth, Dan; Wilkins, David C.
2001-01-01
Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solve computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, thereby enabling us to analytically derive expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which again complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants.
Utilizing our Markov chain framework, we show that expected hitting time is a rational function - i.e. a ratio of two polynomials - of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. On application networks, our approach is highly competitive in Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.
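The rational-function claim can be illustrated with the simplest possible chain, which is our toy example rather than the article's Markov model: a search that reaches the goal state with probability p per step and otherwise stays put has expected hitting time h satisfying h = 1 + (1 - p)h, i.e. h = 1/p, a ratio of two polynomials in p.

```python
from fractions import Fraction

def expected_hitting_time(p):
    """Solve h = 1 + (1 - p) * h exactly: h = 1 / p."""
    return Fraction(1) / p

for p in (Fraction(1, 2), Fraction(1, 10)):
    print(f"p = {p}: expected hitting time = {expected_hitting_time(p)}")
```

In the article's richer chains the same structure appears: solving the linear hitting-time equations yields a ratio of polynomials in the operator-application probability.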
Instantaneous and controllable integer ambiguity resolution: review and an alternative approach
NASA Astrophysics Data System (ADS)
Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong
2015-11-01
In the high-precision application of Global Navigation Satellite Systems (GNSS), integer ambiguity resolution is the key step in realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation. It is mainly realized by acceptance testing. Due to the correlation between ambiguities, it is impossible to control the failure rate according to an analytical formula; hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, due to the characteristics of Monte Carlo sampling and look-up tables, a large amount of time is consumed if sufficient GNSS scenarios are included in the creation of the look-up table. This restricts the fixed failure rate approach to being a post-processing approach when a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may only be valid for a specific scenario or application. Besides this, the method of creating the look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes, for the first time, an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently, based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas, and the stronger the GNSS model, the lower the time consumption; (c) it provides a new viewpoint for improving research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments are implemented.
These results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with ratio test IA based on a look-up table, difference test IA and IA least squares based on the iCON approach have, in most cases, higher success rates and better controllability of failure rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spriggs, G D
In a previous paper, the composite exposure rate conversion factor (ECF) for nuclear fallout was calculated using a simple theoretical photon-transport model. The theoretical model was used to fill in the gaps in the FGR-12 table generated by ORNL. The FGR-12 table contains the individual conversion factors for approximately 1000 radionuclides. However, in order to calculate the exposure rate during the first 30 minutes following a nuclear detonation, the conversion factors for approximately 2000 radionuclides are needed. From a human-effects standpoint, it is also necessary to have the dose rate conversion factors (DCFs) for all 2000 radionuclides. The DCFs are used to predict the whole-body dose rates that would occur if a human were standing in a radiation field of known exposure rate. As calculated by ORNL, the whole-body dose rate (rem/hr) is approximately 70% of the exposure rate (R/hr) at one meter above the surface. Hence, the individual DCFs could be estimated by multiplying the individual ECFs by 0.7. Although this is a handy rule-of-thumb, a more consistent (and perhaps more accurate) method of estimating the individual DCFs for the missing radionuclides in the FGR-12 table is to use the linear relationship between DCF and total gamma energy released per decay. This relationship is shown in Figure 1. The DCFs for individual organs in the body can also be estimated from the estimated whole-body DCF. Using the DCFs given in FGR-12, the ratios of the organ-specific DCFs to the whole-body DCF were plotted as a function of the whole-body DCF. From these plots, the asymptotic ratios were obtained (see Table 1). Using these asymptotic ratios, the organ-specific DCFs can be estimated using the estimated whole-body DCF for each of the missing radionuclides in the FGR-12 table.
Although this procedure for estimating the organ-specific DCFs may over-estimate the value for some low gamma-energy emitters, having a finite value for the organ-specific DCFs in the table is probably better than having no value at all. A summary of the complete ECF and DCF values is given in Table 2.
Thalgott, Mark; Düwel, Charlotte; Rauscher, Isabel; Heck, Matthias M; Haller, Bernhard; Gafita, Andrei; Gschwend, Jürgen E; Schwaiger, Markus; Maurer, Tobias; Eiber, Matthias
2018-05-24
Our aim was to assess the diagnostic potential of one-stop shop prostate-specific membrane antigen-ligand positron emission tomography/magnetic resonance imaging (68Ga-PSMA-11 PET/MRI) compared to preoperative staging nomograms in patients with high-risk prostate cancer (PC). Methods: A total of 102 patients underwent 68Ga-PSMA-11 PET/MRI before intended radical prostatectomy (RP) with lymph node dissection. Preoperative variables determined the probabilities of lymph node metastases (LNM), extracapsular extension (ECE) and seminal vesicle involvement (SVI) using the Memorial Sloan-Kettering Cancer Center (MSKCC) nomogram and Partin tables. Receiver operating characteristic (ROC) analyses were performed to determine the best discriminatory cutoffs. On a cohort basis, positivity rates of imaging and nomograms were compared to pathological prevalence. On a patient basis, sensitivity, specificity and the areas under the curves (AUCs) were calculated. Finally, the full concordance of each method with the postoperative T- and N-stage was determined. Results: 73 patients were finally analysed. On a cohort basis, the MSKCC nomogram positivity rate (39.7%) was the most concordant with pathological prevalence for LNM (34.3%) compared to Partin tables (14.1%) and imaging (20.6%). Prevalence of ECE (72.6%) was best predicted by the MSKCC nomogram and imaging (83.6% each), compared to Partin tables (38.4%). For prevalence of SVI (45.2%), imaging (47.9%) performed superiorly to the MSKCC nomogram (37.6%) and Partin tables (19.3%). On a patient basis, AUCs for LNM, ECE and SVI did not differ significantly between tests (p>0.05). Imaging revealed a high specificity (100%) for LNM and a sensitivity (60%) comparable to the MSKCC nomogram (68%) and Partin tables (60%). For ECE, imaging revealed the highest sensitivity (94.3%) compared to the MSKCC nomogram (66%) and Partin tables (71.1%). For SVI, sensitivity and specificity of imaging and the MSKCC nomogram were comparable (81.5% and 80% vs. 87.9% and 75%).
The rate of concordance with the final pTN-stage was 60.3% for imaging, 52.1% for the MSKCC nomogram, and 39.7% for Partin tables. Conclusion: In our analysis, preoperative one-stop shop 68Ga-PSMA-11 PET/MRI performs at least equally well for T- and N-stage prediction compared to nomograms in high-risk PC patients. Despite an improved prediction of the full final stage and the yield of additional anatomical information, the use of 68Ga-PSMA-11 PET/MRI warrants further prospective evaluation. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Structuring as an Aid to Performance in Base-Rate Problems.
1988-06-01
Design. All subjects were given two base-rate problems, here called the Lightbulb problem (adapted from Lyon & Slovic, 1976) and the Dyslexia problem; both...are shown in Table 1. Approximately half the subjects received the Lightbulb problem first; the others received the Dyslexia problem first. The two...probability that this bulb is really defective? [the child really has dyslexia]? You can probably give a good estimate if you think hard and carefully
Moisture Absorption in Certain Tropical American Woods
1949-08-01
surface area was in unobstructed contact with the salt water. Similar wire mesh racks were weighted and placed on top of the specimens to keep them...Oak (Quercus alba)" Br. Guiana Honduras United States (control) II II Total absorption by 2 x 2 x 6-inch uncoated specimens. Probably sapwood...only. /2 ~~ Probably sapwood Table 3 (Continued) Species Source Increase over 40 percent Fiddlewood (Vitex Gaumeri) Roble Blanco (Tabebuia
NASA Astrophysics Data System (ADS)
Kholil, Muhammad; Nurul Alfa, Bonitasari; Hariadi, Madjumsyah
2018-04-01
Network planning is one of the management techniques used to plan and control the implementation of a project by showing the relationships between activities. The objective of this research is to construct a network plan for a house construction project at CV. XYZ and to determine the role of network planning in increasing time efficiency, so that the optimal project completion period can be obtained. This research uses a descriptive method, with data collected by direct observation at the company, interviews, and a literature study. The result of this research is an optimal time plan for the project work. Based on the results, it can be concluded that the use of both methods in scheduling the house construction project has a very significant effect on the completion time of the project. The CPM (Critical Path Method) schedule completes the project in 131 days, while the PERT (Program Evaluation and Review Technique) method takes 136 days. The PERT calculation gives Z = -0.66, or 0.2546 from the standard normal distribution table, corresponding to a completion probability of 74.54%. This means that the probability that the house construction project can be completed on time is quite high. Without either method, project completion takes 173 days; thus, using the CPM method, the company can save up to 42 days and gain time efficiency through network planning.
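The normal-table step in the PERT result can be reproduced directly; note that 0.2546 is the lower-tail value Phi(-0.66), while the quoted 74.54% is its complement Phi(+0.66). A minimal sketch (the `norm_cdf` helper is ours, not part of any method in the paper):

```python
import math

def norm_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = -0.66
lower = norm_cdf(z)          # the 0.2546 read from the normal table
prob_on_time = 1.0 - lower   # the quoted 74.54% on-time probability

print(f"Phi({z:.2f}) = {lower:.4f}; on-time probability = {prob_on_time:.2%}")
```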
Williams, Michael S; Cao, Yong; Ebel, Eric D
2013-07-15
Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Kastner, S. O.
1976-01-01
Forbidden transition probabilities are given for ground term transitions of ions in the isoelectronic sequences with outer configurations 2s2 2p (B I), 2p5 (F I), 3s2 3p (Al I), and 3p5 (Cl I). Tables give, for each ion, the ground term interval, the associated wavelength, the quadrupole radial integral, the electric quadrupole transition probability, and the magnetic dipole transition probability. Coronal lines due to some of these ions have been observed, while others are yet to be observed. The tables for the Al I and Cl I sequences include elements up to germanium.
Comparison of Value System among a Group of Military Prisoners with Controls in Tehran.
Mirzamani, Seyed Mahmood
2011-01-01
Religious values were investigated in a group of Iranian Revolutionary Guards in Tehran. The sample consisted of official duty troops and conscripts who were in prison due to a crime. One hundred thirty-seven individuals cooperated with us in the project (37 official personnel and 100 conscripts). The instruments used included a demographic questionnaire containing personal data and the Allport, Vernon and Lindzey Study of Values Test. The statistical analysis used descriptive methods such as frequencies, means and tables, together with the t-test. The results showed that religious value was lower in the criminal group than in the control group (p<.001). The lower religious value scores in the criminal group suggest the possibility that lower religious values increase the probability of committing crimes.
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
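Of the five steps, the delta operator of step (4) is simple enough to sketch exactly; the DC values below are invented and this is an illustration, not the authors' implementation:

```python
# Invented DC components, one per DCT block; neighbouring blocks have
# similar brightness, so values change slowly.
dc = [1024, 1019, 1021, 1030, 1028, 1025]

# Forward delta: keep the first value, then store successive differences.
deltas = [dc[0]] + [b - a for a, b in zip(dc, dc[1:])]

# Inverse (at decompression): a running sum restores the original list.
restored, total = [], 0
for d in deltas:
    total += d
    restored.append(total)

print(deltas)    # small residuals near zero are cheap to entropy-code
print(restored)  # exact reconstruction
```

The small, near-zero residuals are what makes the arithmetic encoding of step (5) effective on the DC stream.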
Ali, Mohamed M; Park, Min Hae; Ngo, Thoai D
2014-07-01
To examine the levels and determinants of switching to any reversible modern contraceptive method following intrauterine device (IUD) discontinuation due to method-related reasons among women in developing countries. We analysed 5-year contraceptive calendar data from 14 Demographic and Health Surveys, conducted in 1993-2008 (n=218,092 women; 17,151 women contributed a total of 18,485 IUD episodes). Life-table methods were used to determine overall and cause-specific probabilities of IUD discontinuation at 12 months of use. For IUD episodes discontinued due to method-related reasons, the probability of switching to another reversible modern method within 3 months was estimated, overall and by place of residence, education level, motivation for use, age category and wealth tertiles. Country-specific rate ratios (RR) were estimated using generalized linear models, and pooled RRs using meta-analyses. The median duration of uninterrupted IUD use was 37 months. At 12 months, median probability of discontinuation was 13.2% and median probability of discontinuation due to method-related reasons was 8.9%. Within 3 months of discontinuation due to method-related reasons, half of the women had switched to another reversible modern method, 12% switched to traditional methods, 12% became pregnant, and 25% remained at risk for pregnancy. More educated women were more likely to switch to another reversible modern method than women with primary education or less (pooled RR 1.47; 95% CI 1.10-1.96), as were women in the highest wealth tertile (pooled RR 1.38; 95% CI 1.04-1.83) and women who were limiting births (pooled RR 1.35; 95% CI 1.08-1.68). Delays to switching and switching to less reliable methods following IUD discontinuation remain a problem, exposing women to the risk of unwanted pregnancy. Family planning programmes should aim to improve quality of services through strengthening of counselling and follow-up services to support women's continuation of effective methods. 
The risk of unintended pregnancy following IUD discontinuation remains high in developing countries. The quality of family planning services may be an important factor in switching to alternative modern contraceptive methods. Service providers should focus on counselling services and follow-up of women to support the continued use of effective methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
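The life-table quantities quoted above, such as the 13.2% median 12-month discontinuation probability, follow from monthly continuation probabilities in the standard way. A minimal sketch with an invented constant monthly hazard, chosen only so that the 12-month figure lands near the quoted median:

```python
# Invented constant monthly discontinuation hazard; real life-table
# calculations use month-specific hazards estimated from the calendar data.
h = 0.0117

# Probability of having discontinued the IUD by month 12:
# one minus the product of the 12 monthly continuation probabilities.
p12 = 1.0 - (1.0 - h) ** 12
print(f"12-month discontinuation probability: {p12:.1%}")
```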
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table and the Japan mortality table. For actuarial applications, tables are constructed in different settings, such as single decrement, double decrement and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimators are easier to manipulate than maximum likelihood estimators (MLE), but they do not use the complete mortality data; maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation, and some extensions to double decrement are introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
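The contrast between the two estimators can be made concrete. A minimal sketch, assuming a constant force of mortality and a hypothetical cohort (the numbers below are invented):

```python
import math

def moment_estimate_q(n, deaths, withdrawals):
    # Actuarial (moment) estimator: withdrawals assumed uniform over the
    # year, so each contributes half a year of exposure on average.
    return deaths / (n - withdrawals / 2.0)

def mle_constant_force_q(deaths, total_exposure_years):
    # Under constant force mu, the MLE is mu_hat = deaths / exposure,
    # and the mortality rate is q = 1 - exp(-mu_hat).
    mu_hat = deaths / total_exposure_years
    return 1.0 - math.exp(-mu_hat)

# Hypothetical cohort: 1000 lives observed for one year,
# 12 deaths and 40 withdrawals during the year.
q_moment = moment_estimate_q(1000, 12, 40)

# The MLE uses each life's observed exposure time; here we approximate:
# survivors a full year, withdrawals and deaths half a year on average.
exposure = (1000 - 12 - 40) * 1.0 + 40 * 0.5 + 12 * 0.5
q_mle = mle_constant_force_q(12, exposure)
```

For small mortality rates the two estimates nearly coincide, which is why the simpler moment estimator is often preferred despite discarding part of the data.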
Nonconvergence to Saddle Boundary Points under Perturbed Reinforcement Learning
2012-12-07
of the ODE (12). Note that for some games not all stationary points of the ODE (12) are Nash equilibria. For example, if you consider the Typewriter... Table 1 (The Typewriter Game), payoffs: (A, A) = 4, 4; (A, B) = 2, 2; (B, A) = 2, 2; (B, B) = 3, 3. On the other hand, any stationary point in the interior of the probability simplex will... Typewriter Game of Table 1. We observe that it is possible for the process to converge to a pure strategy profile which is not a Nash equilibrium when Ri(α
NASA Astrophysics Data System (ADS)
Heintz, W. D.
1981-04-01
Micrometer observations in 1979-1980 permitted the computation of substantially revised or new orbital elements for 15 visual pairs. They include the bright stars 52 Ari and 78 UMa (in the UMa cluster), four faint dK pairs, and the probable triple ADS 16185. Ephemerides for the equator of date are listed in a table along with the orbital elements of the binaries. The measured positions and their residuals are listed in a second table. The binaries considered include ADS 896, 2336, 6315, 7054, 7629, 8092, 8555, 8739, 13987, 16185, Rst 1658, 3906, 3972, 4529, and Jsp 691.
Validation Workshop of the DRDC Concept Map Knowledge Model: Issues in Intelligence Analysis
2010-06-29
group noted problems with grammar, and a more standard approach to the grammar of the linking term (e.g. use only active tense) would certainly have... Knowledge Model is distinct from a Concept Map. A Concept Map is a single map, probably presented in one view, while a Knowledge Model is a set of... The workshop followed the agenda presented in Table 2-3 (Workshop Agenda): 13:00-13:15 Registration; 13:15-13:45
Terrain Analysis and Settlement Pattern Survey: Upper Bayou Zourie, Fort Polk, Louisiana.
1981-10-01
Louisiana, Vernon Parish. 20. ABSTRACT: As part of a cultural resources survey of... on the hilltops and along the tops of ridges between the incised drainages. The bulk of the soil is a colluvial sand and clay. To the north is a... general exposure of red clayey sands which were probably a product... Table 10 (16VN441 Site DFI Summary Table): FP-31 Material Primary Secondary Tertiary Non
NASA Astrophysics Data System (ADS)
Patrignani, C.; Particle Data Group
2016-10-01
The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,062 new measurements from 721 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 117 reviews are many that are new or heavily revised, including those on Pentaquarks and Inflation. The complete Review is published online in a journal and on the website of the Particle Data Group (http://pdg.lbl.gov). The printed PDG Book contains the Summary Tables and all review articles but no longer includes the detailed tables from the Particle Listings. A Booklet with the Summary Tables and abbreviated versions of some of the review articles is also available. 
Contents: Abstract, Contributors, Highlights and Table of Contents; Introduction. Particle Physics Summary Tables: Gauge and Higgs bosons; Leptons; Quarks; Mesons; Baryons; Searches (Supersymmetry, Compositeness, etc.); Tests of conservation laws. Reviews, Tables, and Plots: Constants, Units, Atomic and Nuclear Properties; Standard Model and Related Topics; Astrophysics and Cosmology; Experimental Methods and Colliders; Mathematical Tools or Statistics, Monte Carlo, Group Theory; Kinematics, Cross-Section Formulae, and Plots. Particle Listings: Illustrative key and abbreviations; Gauge and Higgs bosons; Leptons; Quarks; Mesons (light unflavored and strange; charmed and bottom; other); Baryons; Miscellaneous searches. Index. (Each part is available as a separate Acrobat PDF download.)
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-04-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure mode is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results give the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the absorbing-set state probabilities are plotted from the differential equations and verified. Through forward inference, the reliability of the control unit is determined under the different modes, and weak nodes in the control unit are identified.
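The Markov-process ingredient of the method above can be sketched briefly: a three-state element whose failed state is absorbing, with an invented one-step transition matrix (the paper's actual rates and conditional probability tables are not reproduced here):

```python
# Minimal sketch of a discrete-time Markov model for a multi-state
# element without repair: states 0 = good, 1 = degraded, 2 = failed
# (absorbing). Transition probabilities are invented for illustration.

def step(p, P):
    # one time step: p' = p @ P, written out to avoid dependencies
    n = len(p)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [  # rows: from-state; columns: to-state (good, degraded, failed)
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],  # failed state is absorbing
]
p = [1.0, 0.0, 0.0]  # element starts in the good state
for _ in range(10):
    p = step(p, P)
reliability = p[0] + p[1]  # probability the element has not failed
```

A DBN generalizes this by making the transition matrix a conditional probability table that can also depend on repair and maintenance actions.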
Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE)
2011-12-01
state, and use their best judgment on the probability that the nominal state will change as shown in Table 3. Each cell... Figure 6. The row is the program change driver and the column is the effect. For example, if the cell is designated (Advocacy, Funding), then the... cell will contain the conditional probability that an Advocacy change will cause a Funding change. The diagonal will be blank. We then populate the
[Ioduria and iodine concentration in table salt in Peruvian elementary schoolchildren].
Tarqui-Mamani, Carolina; Alvarez-Dongo, Doris; Fernández-Tinco, Inés
2016-01-01
To determine the ioduria and iodine concentration in table salt in Peruvian elementary schoolchildren. A cross-sectional study was performed. A total of 8,023 elementary schoolchildren, who voluntarily participated, were included. Multistage stratified probability sampling was performed, and the sample was obtained by systematic selection. Ioduria was determined via spectrophotometry (Sandell-Kolthoff method), and the amount of iodine in salt was evaluated volumetrically. The data were processed by means of analysis for complex samples with a weighting factor. Medians, percentiles, and confidence intervals were calculated, and the Mann-Whitney U and Kruskal-Wallis H tests were used, where appropriate. Nationwide, the median ioduria in schoolchildren was 258.53 µg/L, being higher in boys (265.90 µg/L) than in girls (250.77 µg/L). The median ioduria in urban areas was higher (289.89 µg/L) than that in rural areas (199.67 µg/L), while it was 315.48 µg/L in private schools and 241.56 µg/L in public schools (p<0.001). The median iodine concentration in table salt was 28.69 mg/kg. Of the total salt samples, 23.1% contained less than 15 mg/kg of iodine. The median ioduria in elementary schoolchildren exceeded normal levels, according to the criteria of the World Health Organization, with differences between urban and rural areas and public and private schools.
Gronewold, Andrew D; Sobsey, Mark D; McMahan, Lanakila
2017-06-01
For the past several years, the compartment bag test (CBT) has been employed in water quality monitoring and public health protection around the world. To date, however, the statistical basis for the design and recommended procedures for enumerating fecal indicator bacteria (FIB) concentrations from CBT results have not been formally documented. Here, we provide that documentation following protocols for communicating the evolution of similar water quality testing procedures. We begin with an overview of the statistical theory behind the CBT, followed by a description of how that theory was applied to determine an optimal CBT design. We then provide recommendations for interpreting CBT results, including procedures for estimating quantiles of the FIB concentration probability distribution, and the confidence of compliance with recognized water quality guidelines. We synthesize these values in custom user-oriented 'look-up' tables similar to those developed for other FIB water quality testing methods. Modified versions of our tables are currently distributed commercially as part of the CBT testing kit. Published by Elsevier B.V.
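The statistical core of such presence/absence tests is a most-probable-number (MPN) estimate: the FIB concentration that maximizes the likelihood of the observed pattern of positive and negative compartments. A minimal sketch, using illustrative compartment volumes (not the actual CBT design):

```python
import math

def mpn_estimate(volumes_ml, positives, lo=1e-6, hi=100.0):
    """Most probable number (organisms per mL) from presence/absence
    results. volumes_ml: compartment volumes; positives: parallel list
    of bools. Maximizes  prod_pos (1 - exp(-c*v)) * prod_neg exp(-c*v)
    by ternary search on the (unimodal) log-likelihood over [lo, hi]."""
    def loglik(c):
        ll = 0.0
        for v, pos in zip(volumes_ml, positives):
            ll += math.log(1.0 - math.exp(-c * v)) if pos else -c * v
        return ll
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if loglik(m1) < loglik(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

# Illustrative compartment volumes (mL); not the published CBT layout.
vols = [1.0, 3.0, 10.0, 30.0, 56.0]
mpn = mpn_estimate(vols, [False, False, True, True, True])
```

The 'look-up' tables mentioned above would tabulate such estimates (and confidence bounds from the likelihood) for every possible pattern of positive compartments.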
Applications of Formal Methods to Specification and Safety of Avionics Software
NASA Technical Reports Server (NTRS)
Hoover, D. N.; Guaspari, David; Humenn, Polar
1996-01-01
This report treats several topics in applications of formal methods to avionics software development. Most of these topics concern decision tables, an orderly, easy-to-understand format for formally specifying complex choices among alternative courses of action. The topics relating to decision tables include: generalizations of decision tables that are more concise and support the use of decision tables in a refinement-based formal software development process; a formalism for systems of decision tables with behaviors; an exposition of Parnas tables for users of decision tables; and test coverage criteria and decision tables. We outline features of a revised version of ORA's decision table tool, Tablewise, which will support many of the new ideas described in this report. We also survey formal safety analysis of specifications and software.
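A decision table pairs each combination of condition outcomes with an action, and tools like the one described can check completeness and disjointness mechanically. A toy sketch (the condition names and actions here are hypothetical, not drawn from the report):

```python
# Toy decision table: every combination of condition values maps to
# exactly one action. Names below are invented for illustration.
from itertools import product

CONDITIONS = [("altitude_low", (True, False)), ("gear_down", (True, False))]
RULES = {
    (True, True): "continue_approach",
    (True, False): "warn_pilot",
    (False, True): "retract_gear_advisory",
    (False, False): "no_action",
}

def check_complete_and_disjoint(rules, conditions):
    # complete: every combination covered; disjoint: no extra rules
    domain = set(product(*(vals for _, vals in conditions)))
    return set(rules) == domain

def decide(rules, conditions, **observed):
    key = tuple(observed[name] for name, _ in conditions)
    return rules[key]

assert check_complete_and_disjoint(RULES, CONDITIONS)
action = decide(RULES, CONDITIONS, altitude_low=True, gear_down=False)
```

Completeness and disjointness checks of this kind are exactly the properties a formal decision-table tool verifies, and they also suggest a natural test coverage criterion: one test per rule.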
NASA Technical Reports Server (NTRS)
Scalzo, F.
1983-01-01
Sensor redundancy management (SRM) requires a system which will detect failures and reconfigure avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed to print out tables of values for the cumulative probability of being in the domain of failure; system reliability; and false alarm probability, given a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
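The false-alarm calculation can be illustrated with a simple model: if a healthy sensor's reading is Gaussian noise around its nominal value (an assumption for this sketch, not a claim about the report's actual density), the probability that noise alone pushes the reading into the failure domain is a tail probability of the normal distribution:

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def false_alarm_probability(threshold, mean, sigma):
    """Probability that a healthy sensor's reading lands in the
    two-sided 'domain of failure' purely by noise, assuming the
    reading is Gaussian with the given mean and standard deviation."""
    z = (threshold - mean) / sigma
    return 2.0 * (1.0 - normal_cdf(z))

# Illustrative numbers: failure threshold 3 sigma from nominal.
pfa = false_alarm_probability(threshold=3.0, mean=0.0, sigma=1.0)
```

Tabulating this quantity over thresholds and noise levels yields exactly the kind of false-alarm table the abstract describes.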
40 CFR Table C-2 to Subpart C of... - Sequence of Test Measurements
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Sequence of Test Measurements C Table C-2 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Comparability Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-2 Table C-2 to Subpart C...
40 CFR Table C-2 to Subpart C of... - Sequence of Test Measurements
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Sequence of Test Measurements C Table C-2 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Comparability Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-2 Table C-2 to Subpart C...
40 CFR Table C-2 to Subpart C of... - Sequence of Test Measurements
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Sequence of Test Measurements C Table C-2 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Comparability Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-2 Table C-2 to Subpart C...
40 CFR Table C-2 to Subpart C of... - Sequence of Test Measurements
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Sequence of Test Measurements C Table C-2 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Comparability Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-2 Table C-2 to Subpart C...
40 CFR Table C-2 to Subpart C of... - Sequence of Test Measurements
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Sequence of Test Measurements C Table C-2 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Comparability Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-2 Table C-2 to Subpart C...
Bejaei, M; Wiseman, K; Cheng, K M
2015-01-01
Consumers' interest in specialty eggs appears to be growing in Europe and North America. The objective of this research was to develop logistic regression models that utilise purchaser attributes and demographics to predict the probability of a consumer purchasing a specific type of table egg including regular (white and brown), non-caged (free-run, free-range and organic) or nutrient-enhanced eggs. These purchase prediction models, together with the purchasers' attributes, can be used to assess market opportunities of different egg types specifically in British Columbia (BC). An online survey was used to gather data for the models. A total of 702 completed questionnaires were submitted by BC residents. Selected independent variables were included in the logistic regression models developed for each egg type to predict the probability of a consumer purchasing that type of table egg. The variables used in the model accounted for 54% and 49% of variances in the purchase of regular and non-caged eggs, respectively. Research results indicate that consumers of different egg types exhibit a set of unique and statistically significant characteristics and/or demographics. For example, consumers of regular eggs were less educated, older, price sensitive, major chain store buyers, and store flyer users, and had lower awareness about different types of eggs and less concern regarding animal welfare issues. However, most of the non-caged egg consumers were less concerned about price, had higher awareness about different types of table eggs, purchased their eggs from local/organic grocery stores, farm gates or farmers markets, and they were more concerned about care and feeding of hens compared to consumers of other egg types.
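The prediction step of such a model is a single logistic transform of a weighted sum of purchaser attributes. A minimal sketch with invented coefficients and feature names (not the study's fitted model):

```python
import math

def predict_probability(coefs, intercept, features):
    """Logistic-regression purchase probability:
    p = 1 / (1 + exp(-(b0 + b . x))). Coefficients are hypothetical."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical non-caged-egg model; features (all invented) are
# [price_sensitive (0/1), awareness score 0-1, buys_at_farmers_market (0/1)]
coefs = [-1.2, 2.0, 1.5]
p = predict_probability(coefs, intercept=-1.0, features=[0, 0.8, 1])
```

For the profile above (not price sensitive, high awareness, farmers-market shopper) the model assigns a high purchase probability, mirroring the qualitative findings the abstract reports.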
Diagnostic articulation tables
NASA Astrophysics Data System (ADS)
Mikhailov, V. G.
2002-09-01
In recent years, considerable progress has been made in the development of instrumental methods for general speech quality and intelligibility evaluation on the basis of modeling the auditory perception of speech and measuring the signal-to-noise ratio. Despite certain advantages (fast measurement procedures with a low labor consumption), these methods are not universal and, in essence, secondary, because they rely on the calibration based on subjective-statistical measurements. At the same time, some specific problems of speech quality evaluation, such as the diagnostics of the factors responsible for the deviation of the speech quality from standard (e.g., accent features of a speaker or individual voice distortions), can be solved by psycholinguistic methods. This paper considers different kinds of diagnostic articulation tables: tables of minimal pairs of monosyllabic words (DRT) based on the Jacobson differential features, tables consisting of multisyllabic quartets of Russian words (the choice method), and tables of incomplete monosyllables of the _VC/CV_ type (the supplementary note method). Comparative estimates of the tables are presented along with the recommendations concerning their application.
Ogan, M T
1989-12-01
The possible relationship between high numbers of fecal coliforms (FCs), fecal streptococci (FS), standard plate counts (SPCs) and well characteristics viz: well depth, water column, temperature, pH and non-filterable residue in 25 rural community wells in the Port Harcourt region, Nigeria, was studied. Zonal differences in residue level, well depth and fecal indicator bacteria were observed; these parameters were lowest in an area of high population density (slum) reclaimed from and adjacent to mangrove forests. Although some wells were covered and/or walled to protect them from surface runoff contamination, FCs and FS were recovered from all, except three, in numbers (log10 per 100 mL) ranging respectively from 0.40-3.79 and 0.70-3.44. The FC:FS ratio was less than 1.0 in 8 and greater than 1.0 in 14 samples. Well depth correlated with FCs (p = 0.01; r = 0.5684), FS (p = 0.001; r = 0.6423), pH (p = 0.0001; r = 0.5981); FCs and FS correlated significantly (p = 0.01; r = 0.4948). SPCs did not correlate significantly with FCs, FS and the well and water characteristics. Simultaneous analysis of samples by the Membrane-filtration (MF) and Most Probable Number (MPN) methods recovered mean FC counts in the decreasing sequence: Standard-MPN > Anaerobic-MF > Aerobic-MF > Direct-MPN. The underground water table is most probably contaminated via large numbers of soakaway pits and similar conveniences. Downward movement of contaminant from the shallow conveniences into deeper water tables may explain the well depth: indicator bacteria correlation.
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...
Delayed treatment of decompression sickness with short, no-air-break tables: review of 140 cases.
Cianci, Paul; Slade, John B
2006-10-01
Most cases of decompression sickness (DCS) in the U.S. are treated with hyperbaric oxygen using U.S. Navy Treatment Tables 5 and 6, although detailed analysis shows that those tables were based on limited data. We reviewed the development of these protocols and offer an alternative treatment table more suitable for monoplace chambers that has proven effective in the treatment of DCS in patients presenting to our facility. We reviewed the outcomes for 140 cases of DCS in civilian divers treated with the shorter tables at our facility from January 1983 through December 2002. Onset of symptoms averaged 9.3 h after surfacing. At presentation, 44% of the patients demonstrated mental aberration. The average delay from onset of symptoms to treatment was 93.5 h; median delay was 48 h. Complete recovery in the total group of 140 patients was 87%. When 30 patients with low probability of DCS were excluded, the recovery rate was 98%. All patients with cerebral symptoms recovered. Patients with the highest severity scores showed a high rate of complete recovery (97.5%). Short oxygen treatment tables as originally described by Hart are effective in the treatment of DCS, even with long delays to definitive recompression that often occur among civilian divers presenting to a major Divers Alert Network referral center.
Responses of riparian cottonwoods to alluvial water table declines
Scott, M.L.; Shafroth, P.B.; Auble, G.T.
1999-01-01
Human demands for surface and shallow alluvial groundwater have contributed to the loss, fragmentation, and simplification of riparian ecosystems. Populus species typically dominate riparian ecosystems throughout arid and semiarid regions of North America, and efforts to minimize loss of riparian Populus require an integrated understanding of the role of surface and groundwater dynamics in the establishment of new, and maintenance of existing, stands. In a controlled, whole-stand field experiment, we quantified responses of Populus morphology, growth, and mortality to water stress resulting from sustained water table decline following in-channel sand mining along an ephemeral sandbed stream in eastern Colorado, USA. We measured live crown volume, radial stem growth, annual branch increment, and mortality of 689 live Populus deltoides subsp. monilifera stems over four years in conjunction with localized water table declines. Measurements began one year prior to mining and included trees in both affected and unaffected areas. Populus demonstrated a threshold response to water table declines in medium alluvial sands; sustained declines ≥1 m produced leaf desiccation and branch dieback within three weeks and significant declines in live crown volume, stem growth, and 88% mortality over a three-year period. Declines in live crown volume proved to be a significant leading indicator of mortality in the following year. A logistic regression of tree survival probability against the prior year's live crown volume was significant (-2 log likelihood = 270, χ2 with 1 df = 232, P < 0.0001) and trees with absolute declines in live crown volume of ≥30 during one year had survival probabilities <0.5 in the following year. In contrast, more gradual water table declines of ~0.5 m had no measurable effect on mortality, stem growth, or live crown volume and produced significant declines only in annual branch growth increments.
Developing quantitative information on the timing and extent of morphological responses and mortality of Populus to the rate, depth, and duration of water table declines can assist in the design of management prescriptions to minimize impacts of alluvial groundwater depletion on existing riparian Populus forests.
NASA Astrophysics Data System (ADS)
Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.
2016-12-01
Continuous over-exploitation of groundwater causes dramatic drawdown, and leads to regional land subsidence in the Huairou Emergency Water Resources region, which is located in the upper-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of the strata's lithofacies in the alluvial fan, ground deformation shows no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of the lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. Combining the thickness of the clay layer extracted from the simulation with the deformation field acquired by PS-InSAR, the influence of the strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories, and their probability distribution in the observed space was mined using transition probability geostatistics; clay was the predominant compressible material. Geologically plausible realizations of lithofacies distribution were produced, accounting for the complex heterogeneity of the alluvial plain. At a probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of the strata's lithofacies. This level, nearly equaling the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and incompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Some similarities of pattern were indicated between the spatial distribution of the deformation field and that of the clay layer.
In areas with roughly similar water table decline, subsidence occurs more at locations whose subsurface has a higher probability of containing compressible material than at locations with a lower probability. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
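The Markov-chain ingredient of transition probability geostatistics starts from transition frequencies observed in borehole logs. A minimal sketch with an invented borehole sequence (the study's actual categories and data are not reproduced):

```python
# Sketch: estimate a vertical one-step Markov transition matrix for
# lithofacies categories from a made-up borehole log; the row for a
# facies gives its juxtaposition tendencies with the other facies.
from collections import Counter

FACIES = ["gravel", "sand", "silt", "clay"]

def transition_matrix(log):
    counts = Counter(zip(log, log[1:]))  # adjacent-pair frequencies
    matrix = {}
    for a in FACIES:
        row_total = sum(counts[(a, b)] for b in FACIES)
        matrix[a] = {b: (counts[(a, b)] / row_total if row_total else 0.0)
                     for b in FACIES}
    return matrix

# Invented borehole log, listed top to bottom.
borehole = ["clay", "clay", "silt", "sand", "sand", "clay", "clay",
            "silt", "sand", "gravel", "sand", "clay"]
P = transition_matrix(borehole)
```

In a full transition probability geostatistics workflow, matrices like this (one per direction, with lag-dependent transition rates) condition the simulation of equally probable lithofacies realizations between boreholes.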
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test Specifications for PM 10, PM 2.5 and PM 10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test Specifications for PM 10, PM 2.5 and PM 10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM...
CANCER CONTROL AND POPULATION SCIENCES FAST STATS
Fast Stats links to tables, charts, and graphs of cancer statistics for all major cancer sites by age, sex, race, and geographic area. The statistics include incidence, mortality, prevalence, and the probability of developing or dying from cancer. A large set of statistics is ava...
Ilbäck, N-G; Alzin, M; Jahrl, S; Enghardt-Barbieri, H; Busk, L
2003-02-01
Few sweetener intake studies have been performed on the general population, and only one study has been specifically designed to investigate diabetics and children. This report describes a Swedish study on the estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate and saccharin by children (0-15 years) and adult male and female diabetics (types I and II) of various ages (16-90 years). Altogether, 1120 participants were asked to complete a questionnaire about their sweetener intake. The response rate (71%, range 59-78%) was comparable across age and gender groups. The most consumed 'light' foodstuffs were diet soda, cider, fruit syrup, table powder, table tablets, table drops, ice cream, chewing gum, throat lozenges, sweets, yoghurt and vitamin C. The major sources of sweetener intake were beverages and table powder. About 70% of the participants, equally distributed across all age groups, read the manufacturer's specifications of the food products' content. The estimated intakes showed that neither men nor women exceeded the ADI for acesulfame-K; however, using worst-case calculations, high intakes were found in young children (169% of ADI). In general, the aspartame intake was low. Children had the highest estimated (worst case) intake of cyclamate (317% of ADI). Children's estimated intake of saccharin only slightly exceeded the ADI at the 5% level for fruit syrup. Children had an unexpectedly high intake of tabletop sweeteners, which, in Sweden, are normally based on cyclamate. The study was performed during two winter months, when the intake of sweeteners can be assumed to be lower than during warm summer months. Thus, the present study probably underestimates the average intake on a yearly basis.
However, our worst-case calculations based on maximum permitted levels were performed on each individual sweetener, although exposure is probably relatively evenly distributed among all sweeteners, except for cyclamate-containing table sweeteners.
Tables of stark level transition probabilities and branching ratios in hydrogen-like atoms
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
The transition probabilities, which are given in terms of n′, k′ and n, k, are tabulated; no additional summing or averaging is necessary. The electric quantum number k plays the role of the angular momentum quantum number l in the presence of an electric field. The branching ratios between Stark levels are also tabulated. Necessary formulas for the transition probabilities and branching ratios are given. Symmetries are discussed and selection rules are given. Some disagreements for some branching ratios are found between the present calculation and the measurements of Mark and Wierl. The transition probability multiplied by the statistical weight of the initial state is called the static intensity J_S, while the branching ratios are called the dynamic intensity J_D.
An Effective Method of Introducing the Periodic Table as a Crossword Puzzle at the High School Level
ERIC Educational Resources Information Center
Joag, Sushama D.
2014-01-01
A simple method to introduce the modern periodic table of elements at the high school level as a game of solving a crossword puzzle is presented here. A survey to test the effectiveness of this new method relative to the conventional method, involving use of a wall-mounted chart of the periodic table, was conducted on a convenience sample. This…
Zhong, Qi-Cheng; Wang, Jiang-Tao; Zhou, Jian-Hong; Ou, Qiang; Wang, Kai-Yun
2014-02-01
During the growing season of 2011, the leaf photosynthesis, morphological and growth traits of Phragmites australis and Imperata cylindrica were investigated along a water table gradient (low, medium and high) in the reclaimed tidal wetland at the Dongtan of Chongming Island in the Yangtze Estuary of China. A series of soil factors, i.e., soil temperature, moisture, salinity and inorganic nitrogen content, were also measured. During the peak growing season, the leaf photosynthetic capacity of P. australis in the wetland with a high water table was significantly lower than in the wetlands with low and medium water tables, whereas no difference was observed in the leaf photosynthetic capacity of I. cylindrica at the three water tables. Over the entire growing season, at the shoot level, the morphological and growth traits of P. australis reached their optimum in the wetland with a medium water table, but most of the morphological and growth traits of I. cylindrica showed no significant differences at the three water tables. At the population level, the shoot density, leaf area index and aboveground biomass per unit area of P. australis were highest in the wetland with a high water table, but all three traits of I. cylindrica were highest in the wetland with a low water table. In the early growing season, the rhizome biomass of P. australis in the 0-20 cm soil layer showed no difference at the three water tables, while the rhizome biomass of I. cylindrica in the 0-20 cm soil layer in the wetland with a high water table was significantly lower than in the wetlands with low and medium water tables. As a native hygrophyte present before the reclamation, the variation in the performance of P. australis at the three water tables was probably attributable to differences in the soil factors as well as to the intensity of competition from I. cylindrica. Appropriate manipulation of the water table in the reclaimed tidal wetland may restrict the growth and propagation of the mesophyte I. cylindrica and facilitate the restoration of the P. australis-dominated marsh plant community.
Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun
2014-05-01
The surgeon's spine angle during surgery has been studied ergonomically, and the kinematics of the surgeon's spine have been related to musculoskeletal fatigue and pain. Spine angles vary depending on operating table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no studies have previously included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including the microscope. We enrolled 18 experienced spine surgeons for this study, each of whom performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods as well as against a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles were also found to differ from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method.
When using a microscope, lumbar lordosis, thoracic kyphosis, and cervical lordosis showed no differences according to table heights above the umbilicus. This study suggests that the use of a microscope and a table height above the umbilicus are optimal for reducing surgeon musculoskeletal fatigue.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for Performance Tests... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Inorganic HAP Emissions From Catalytic Reforming Units 25 Table 25 to Subpart UUU of Part 63 Protection of... Sulfur Recovery Units Pt. 63, Subpt. UUU, Table 25 Table 25 to Subpart UUU of Part 63—Requirements for... Procedure) in appendix A to subpart UUU; or EPA Method 5050 combined either with EPA Method 9056, or with...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' law and addresses a continuous range of dependence. Methods: Under the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to the fullest amount; maximum negative dependence is the smallest amount by which two events can overlap. When the minimum probability of two events overlapping is less than the independent value, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. When the second event is smaller than the first, the maximum dependence is less than 1, as defined by Bayes' law. Accordingly, alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events.
Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
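The joint-probability bounds described above are the classical Fréchet bounds, and the look-up-table idea can be sketched directly from them. This is a minimal illustration of the probability relations named in the abstract, not the paper's actual equations.

```python
# Sketch of the overlap bounds for two events A and B with marginal
# probabilities p_a and p_b (Fréchet bounds):
#   maximum overlap:           P(A and B) = min(p_a, p_b)
#   independence:              P(A and B) = p_a * p_b
#   maximum negative overlap:  P(A and B) = max(0, p_a + p_b - 1)

def dependence_bounds(p_a, p_b):
    """Return (maximum, independent, minimum) joint probabilities."""
    joint_max = min(p_a, p_b)                # maximum positive dependence
    joint_indep = p_a * p_b                  # independence
    joint_min = max(0.0, p_a + p_b - 1.0)    # maximum negative dependence
    return joint_max, joint_indep, joint_min

def max_conditional(p_a, p_b):
    """Largest possible P(B | A); below 1 whenever p_b < p_a, per Bayes' law."""
    return min(p_a, p_b) / p_a

# Rare human failure events: the minimum overlap is typically 0, and the
# maximum conditional dependence is below 1 when the second event is smaller.
print(dependence_bounds(0.01, 0.005))
print(max_conditional(0.01, 0.005))
```

The second function shows the point made in the abstract: for p_a = 0.01 and p_b = 0.005, even complete overlap only gives P(B | A) = 0.5, so "complete dependence" in the THERP sense cannot be reached.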
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-01-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set, plotted by differential equations and verified. Using forward inference, the reliability of the control unit is determined under different modes, and weak nodes in the control unit are identified. PMID:29765629
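The core mechanism named above, propagating the state probabilities of a multi-state element through transition probabilities, one DBN time slice at a time, can be sketched in a few lines. This is a generic illustration with invented transition probabilities, not the paper's model; the transition matrix here plays the role of a conditional probability table between successive time slices.

```python
# Minimal sketch (invented numbers): a three-state degrading element
# (good / degraded / failed) without repair, so "failed" is an absorbing
# state, as in the absorbing-set construction mentioned in the abstract.

# Hypothetical single-step transition probabilities; each row sums to 1.
P = [
    [0.90, 0.08, 0.02],  # from good
    [0.00, 0.85, 0.15],  # from degraded
    [0.00, 0.00, 1.00],  # failed is absorbing (no repair)
]

def step(state_probs, P):
    """One DBN time slice: multiply the state-probability vector by P."""
    n = len(P)
    return [sum(state_probs[i] * P[i][j] for i in range(n)) for j in range(n)]

probs = [1.0, 0.0, 0.0]   # the element starts in the good state
for _ in range(10):       # propagate 10 time slices forward
    probs = step(probs, P)

reliability = probs[0] + probs[1]   # probability of not being failed
print(round(reliability, 4))
```

Adding repair would simply put non-zero entries in the failed row (perfect repair returning to "good", imperfect repair to "degraded"), which removes the absorbing set and changes the long-run behavior.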
Subjective expectations in the context of HIV/AIDS in Malawi
Delavande, Adeline; Kohler, Hans-Peter
2009-01-01
In this paper we present a newly developed interactive elicitation methodology for collecting probabilistic expectations in a developing country context with low levels of literacy and numeracy, and we evaluate the feasibility and success of this method for a wide range of outcomes in rural Malawi. We find that respondents’ answers about their subjective expectations take into account basic properties of probabilities, and vary meaningfully with observable characteristics and past experience. From a substantive point of view, the elicited expectations indicate that individuals are generally aware of differential risks. For example, individuals with lower incomes and less land rightly feel at greater risk of financial distress than people with higher socioeconomic status (SES), and people who are divorced or widowed rightly feel at greater risk of being infected with HIV than currently married individuals. Meanwhile many expectations—including the probability of being currently infected with HIV—are well-calibrated compared to actual probabilities, but mortality expectations are substantially overestimated compared to life table estimates. This overestimation may lead individuals to underestimate the benefits of adopting HIV risk-reduction strategies. The skewed distribution of expectations about condom use also suggests that a small group of innovators are the forerunners in the adoption of condoms within marriage for HIV prevention. PMID:19946378
NASA Astrophysics Data System (ADS)
Bijnagte, J. L.; Luger, D.
2012-12-01
In the northern part of the Netherlands, the exploitation of natural gas reservoirs causes subsidence over large areas. As a consequence, the water levels in canals and polders have to be adjusted over time in order to keep the groundwater levels at a constant depth relative to the surface level. In the middle of the subsidence area it is relatively easy to follow the settlements by a uniform lowering of the water level. This would, however, result in a relative lowering of the groundwater table at the edges of the subsidence area. Given the presence of soft compressible soils, this would result in induced settlements, which for buildings in these areas increases the chance of damage. A major design challenge therefore lies in the optimisation of the use of compartments: the more compartments, the higher the cost, so the aim is to make compartments in the water management system that are as large as possible without causing inadmissible damage to buildings. In order to assess the expected damage from different uses of compartments, three tools are needed. The first is a generally accepted method of damage determination; the second is a method to determine the contribution to damage of a new influence, e.g. a groundwater table change. Third, and perhaps most importantly, a method is needed to evaluate effects not for single buildings but for larger areas. The first need is covered by established damage criteria like those of Burland & Wroth or Boscardin & Cording. Up until now the second and the third have been problematic. This paper presents a method that makes it possible to assign a contribution to the probability of damage to each of various recognised mechanisms, such as soil and foundation inhomogeneity, uneven loading, and groundwater level changes. Shallow subsidence due to peat oxidation and deep subsidence due to reservoir depletion can be combined. To address the third issue, evaluation of effects over larger areas, the method uses a probabilistic approach.
Apart from a description of the method itself, validation of the approach is described by applying the theory to an area in the north of the Netherlands, near a canal, where a water level change was considered. This area consists of soft soil overlying sandy deposits. It was found that the damage percentages as given by the theory are of the right order of magnitude when compared with the actual damage observed in this area. For the study of a large area affected by subsidence, input parameters were established based on a field inspection of the state of the buildings present in that area. Groundwater changes resulting from water level adaptations were calculated based on measurements of the present situation for ten cross sections. Results of the analyses show that lowering the water table by 0.1-0.15 m influences only a relatively small zone next to the canal. Use of the new damage assessment method shows that even within this influenced zone, which is mostly less than 10 m wide, effects on buildings are rather limited. In almost all cases the increase in the chance of building damage is negligible.
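The probabilistic, area-wide idea described above can be sketched as a small Monte Carlo exercise: sum sampled settlement contributions from several mechanisms per building, compare the resulting angular distortion against a damage limit in the spirit of the Burland & Wroth criteria, and count exceedances over many buildings. All distributions and figures here are invented for illustration; this is not the authors' model.

```python
# Hedged Monte Carlo sketch of an area-wide damage probability estimate.
# Every distribution and parameter below is a hypothetical placeholder.
import random

random.seed(42)

DAMAGE_LIMIT = 1 / 500   # hypothetical angular-distortion limit (rad)
SPAN_M = 10.0            # hypothetical building span (m)

def sample_differential_settlement():
    """Sum of per-mechanism contributions, each sampled independently (m)."""
    soil_inhomogeneity = abs(random.gauss(0.0, 0.004))
    uneven_loading     = abs(random.gauss(0.0, 0.003))
    groundwater_change = abs(random.gauss(0.002, 0.001))  # from table lowering
    return soil_inhomogeneity + uneven_loading + groundwater_change

n_buildings = 100_000
damaged = sum(
    1 for _ in range(n_buildings)
    if sample_differential_settlement() / SPAN_M > DAMAGE_LIMIT
)
print(f"estimated damage probability: {damaged / n_buildings:.3f}")
```

Because the mechanism contributions enter as a sum, the marginal effect of one new influence (such as a groundwater table change) can be read off by rerunning the simulation with and without that term, which mirrors the "contribution to the probability of damage" framing in the abstract.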
1980-11-01
59 programmable calculator. Method 1 will most likely be used if there is a toxic corridor length table for the chemical; Method 2 if there is no table...experience of the forecaster in making this forecast, availability of a toxic corridor length table for the released chemical, and availability of a TI
Olasz, Balázs; Szabó, István
2017-01-01
Bimolecular nucleophilic substitution (SN2) and proton transfer are fundamental processes in chemistry and F– + CH3I is an important prototype of these reactions. Here we develop the first full-dimensional ab initio analytical potential energy surface (PES) for the F– + CH3I system using a permutationally invariant fit of high-level composite energies obtained with the combination of the explicitly-correlated CCSD(T)-F12b method, the aug-cc-pVTZ basis, core electron correlation effects, and a relativistic effective core potential for iodine. The PES accurately describes the SN2 channel producing I– + CH3F via Walden-inversion, front-side attack, and double-inversion pathways as well as the proton-transfer channel leading to HF + CH2I–. The relative energies of the stationary points on the PES agree well with the new explicitly-correlated all-electron CCSD(T)-F12b/QZ-quality benchmark values. Quasiclassical trajectory computations on the PES show that the proton transfer becomes significant at high collision energies and double-inversion as well as front-side attack trajectories can occur. The computed broad angular distributions and hot internal energy distributions indicate the dominance of indirect mechanisms at lower collision energies, which is confirmed by analyzing the integration time and leaving group velocity distributions. Comparison with available crossed-beam experiments shows usually good agreement. PMID:28507692
1980-03-01
recommended guidelines, the Spillway Design Flood (SDF) ranges between the 1 /2-PMF (Probable Maximum Flood) and PMF. Since the dam is near the lower end of...overtopping. A breach analysis indicates that failure under 1 /2-PMF conditions would probably not lead to increased property damage or loss of life at...ii OVERVIEW PHOTOGRAPH ......... .................. V TABLE OF CONTENTS ......... ................... vi SECTION 1 - GENERAL INFORMATION
Objective Analysis of Oceanic Data for Coast Guard Trajectory Models Phase II
1997-12-01
as outliers depends on the desired probability of false alarm, Pfa, which is the probability of marking a valid point as an outlier. Table 2-2...constructed to minimize the mean-squared prediction error of the grid point estimate under the constraint that the estimate is unbiased. The...prediction error, e = Z1(s) − Σi α1i Z1(si) − Σi α2i Z2(si), Eq. (2.44), subject to the unbiasedness constraints Σi α1i = 1, Eq. (2.45), and Σi α2i = 0, Eq. (2.46). Denoting
Probability, Problem Solving, and "The Price is Right."
ERIC Educational Resources Information Center
Wood, Eric
1992-01-01
This article discusses the analysis of a decision-making process faced by contestants on the television game show "The Price is Right". The included analyses of the original and related problems concern pattern searching, inductive reasoning, quadratic functions, and graphing. Computer simulation programs in BASIC and tables of…
Numerical Estimation of Information Theoretic Measures for Large Data Sets
2013-01-30
probability including a new indifference rule,” J. Inst. of Actuaries Students’ Soc. 73, 285–334 (1947). 7. M. Hutter and M. Zaffalon, “Distribution...Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover Publications, New York (1972). 13. K.B. Oldham et al., An Atlas
Weight, D G
1998-09-01
This article reviews the persisting difficulty and the importance of the diagnosis of minor head trauma. The diagnosis has been complicated by pervasive disagreement regarding diagnostic criteria. This is primarily a result of the fact that evidence for actual injury is hard to obtain in minor cases because most symptoms tend to be subjective and have high base rates in the normal, uninjured population. At the same time, the diagnostic decision has important implications for patients in terms of treatment, expectancy for future function and lifestyle, and compensation for injuries. Decision theory leads us to an awareness of diagnostic errors. In addition to a correct determination, the clinician can err either by not diagnosing an injury that has in fact occurred or by making a positive diagnosis where there is no injury. The optimal strategy is to set the cutoff at the midpoint of these two error probabilities. The clinician may be willing to make one error rather than the other depending on the cost and bias involved. The second error is more likely to be made when the clinician stands as a strong advocate for the patient and is willing to provide any help necessary to encourage treatment, give patients a rationale for understanding their symptoms, and help them obtain compensation for injuries. This can also lead to significant overdiagnosis of injury. The first error is more likely to be made when the clinician recognizes the potential for increasing costs to the health-care industry, the court system, and increasing personal injury claims. He or she may also recognize the vulnerability to the risk of symptom invalidity, the perpetuation of patient symptoms through suggestion, and the need for a biologic explanation for life stressors and preexisting emotional and personality constraints.
It can be argued that the most objective diagnostic opinion, uninfluenced by the above biases, should ultimately be in the best interest of the patient, the clinician, legal consultants, and society. Based on the findings in this chapter, at least four symptom constellations can be identified. These have differing probabilities for residual symptoms of minor head trauma and include the following: 1. These patients' symptoms clearly meet the criteria from Table 2. This includes several findings from 1 to 10 of Table 1, together with abnormal neuropsychologic testing on the AIR, General Neuropsychological Deficit Scale, or other indicators of diminished cortical integrity. This group of patients shows a very strong probability of having experienced a brain injury and for showing residual symptoms of minor head trauma. 2. These patients have experienced concussional symptoms (e.g., headache, mild confusion, and balance and visual symptoms) that were documented at the time of injury but sustained no or brief (< 15 seconds) LOC or PTA and, therefore, do not qualify for the diagnosis in Table 2. They may still have several symptoms from Table 1, including objective findings from neuroscanning and variable neuropsychologic testing, especially in measures of attention and delayed recall. This group also shows a high probability for residual, unresolved concussional, and related symptoms. 3. These patients may have shown evidence of concussional symptoms at the time of injury, with no or brief LOC, PTA, or other symptoms from Table 1 (1-10). They continue to show persistent symptoms after 6 months to 1 year. With this group, there is a strong probability that emotional, motivational and premorbid personality factors are either causing or supporting these residual symptoms. 4. In these patients, clearly identifiable postconcussive symptoms at the time of injury are not easy to identify, and perhaps headache is the only reported symptom. 
There was no LOC or PTA, and virtually none of symptoms 1 to 10 in Table 1 is observed. These patients show strong evidence of symptom invalidity on the MMPI-2 or other measures, and marked somatoform, depression, and anxiety features.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Permitted Tolerance for Conducting Radiative Tests E Table E-2 to Subpart E of Part 53 Protection of... Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 Pt. 53, Subpt. E, Table E-2 Table E-2 to Subpart E of Part 53—Spectral Energy Distribution and Permitted Tolerance for...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Campaign Site and Seasonal Requirements for Class II and III FEMs for PM 10-2.5 and PM 2.5 C Table C-5 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-5 Table C-5 to Subpart C of Part...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Campaign Site and Seasonal Requirements for Class II and III FEMs for PM 10-2.5 and PM 2.5 C Table C-5 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-5 Table C-5 to Subpart C of Part...
VizieR Online Data Catalog: NiI transition probability measurements (Wood+, 2014)
NASA Astrophysics Data System (ADS)
Wood, M. P.; Lawler, J. E.; Sneden, C.; Cowan, J. J.
2014-04-01
As in much of our previous branching fraction work, this NiI branching fraction study makes use of archived FTS data from both the 1.0 m Fourier Transform Spectrometer (FTS) previously at the National Solar Observatory (NSO) on Kitt Peak and the Chelsea Instruments FT500 UV FTS at Lund University in Sweden. Table 1 lists the 37 FTS spectra used in our NiI branching fraction study. All NSO spectra, raw interferograms, and header files are available in the NSO electronic archives. The 80 CCD frames of echelle spectrograph spectra from commercial Ni HCD lamps are listed in Table 2. (6 data files).
Tables of square-law signal detection statistics for Hann spectra with 50 percent overlap
NASA Technical Reports Server (NTRS)
Deans, Stanley R.; Cullers, D. Kent
1991-01-01
The Search for Extraterrestrial Intelligence, currently being planned by NASA, will require that an enormous amount of data be analyzed in real time by special purpose hardware. It is expected that overlapped Hann data windows will play an important role in this analysis. In order to understand the statistical implications of this approach, it has been necessary to compute detection statistics for overlapped Hann spectra. Tables of signal detection statistics are given for false alarm rates from 10^-14 to 10^-1 and signal detection probabilities from 0.50 to 0.99; the number of computed spectra ranges from 4 to 2000.
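The relationship between a false alarm rate and a detection threshold can be illustrated with a deliberately simplified case: a single-bin square-law detector in Gaussian noise, where the noise-only output power is exponentially distributed. This is not the overlapped-Hann statistics tabulated in the report, only a sketch of the underlying threshold-setting idea.

```python
# Simplified sketch, not the report's overlapped-Hann statistics: for a
# single-bin square-law detector, the noise-only power is exponentially
# distributed, so the threshold T for a desired false alarm rate Pfa is
#   Pfa = exp(-T)   =>   T = -ln(Pfa)
# with T measured in units of the mean noise power.
import math

def threshold_for_pfa(pfa):
    """Detection threshold giving false alarm probability pfa."""
    return -math.log(pfa)

def pfa_for_threshold(t):
    """False alarm probability at threshold t."""
    return math.exp(-t)

# The report's range of false alarm rates spans 13 orders of magnitude,
# which corresponds to a modest range of thresholds:
for pfa in (1e-14, 1e-7, 1e-1):
    print(f"Pfa = {pfa:.0e}  ->  threshold = {threshold_for_pfa(pfa):.2f}")
```

The logarithmic relation explains why tables over such a wide span of false alarm rates remain compact: each factor-of-ten reduction in Pfa only raises the threshold by ln(10) ≈ 2.3 noise-power units.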
Simulation of wetlands forest vegetation dynamics
Phipps, R.L.
1979-01-01
A computer program, SWAMP, was designed to simulate the effects of flood frequency and depth to water table on southern wetlands forest vegetation dynamics. By incorporating these hydrologic characteristics into the model, forest vegetation and vegetation dynamics can be simulated. The model, based on data from the White River National Wildlife Refuge near De Witt, Arkansas, "grows" individual trees on a 20 x 20-m plot taking into account effects on the tree growth of flooding, depth to water table, shade tolerance, overtopping and crowding, and probability of death and reproduction. A potential application of the model is illustrated with simulations of tree fruit production following flood-control implementation and lumbering.
Standardized Pearson type 3 density function area tables
NASA Technical Reports Server (NTRS)
Cohen, A. C.; Helm, F. R.; Sugg, M.
1971-01-01
Tables constituting an extension of similar tables published in 1936 are presented in report form. Single- and triple-parameter gamma functions are discussed. The tables should interest persons concerned with the development and use of numerical analysis and evaluation methods.
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1990-01-01
The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1 − α)·100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramér-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
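The Weibull (power-law) process named above has a simple closed form that a short sketch can show. The parameter values below are illustrative, not the engine-data estimates from the report.

```python
# Sketch of the Weibull (power-law) process: an inhomogeneous Poisson
# process with intensity u(t) = (beta/eta) * (t/eta)**(beta - 1), so the
# expected number of failures by time t is N(t) = (t/eta)**beta.
# The parameter values here are illustrative placeholders.

def expected_failures(t, beta, eta):
    """Expected cumulative failure count by time t."""
    return (t / eta) ** beta

def intensity(t, beta, eta):
    """Instantaneous failure intensity u(t)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def instantaneous_mtbf(t, beta, eta):
    """Reciprocal of the intensity; grows with t when beta < 1,
    which is the signature of reliability growth."""
    return 1.0 / intensity(t, beta, eta)

beta, eta = 0.5, 100.0   # beta < 1: failures become rarer as testing proceeds
print(expected_failures(400.0, beta, eta))
print(instantaneous_mtbf(400.0, beta, eta))
```

With beta < 1 the intensity decreases over the test program, so the instantaneous MTBF, the quantity the report's confidence-interval tables bound, keeps improving as accumulated test time grows.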
Hammerslough, C R
1992-01-01
An integrated approach to estimate the total number of pregnancies that begin in a population during one calendar year and the probability of spontaneous abortion is described. This includes an indirect estimate of the number of pregnancies that result in spontaneous abortions. The method simultaneously takes into account the proportion of induced abortions that are censored by spontaneous abortions and vice versa in order to estimate the true annual number of spontaneous and induced abortions for a population. It also estimates the proportion of pregnancies that women intended to allow to continue to a live birth. The proposed indirect approach derives adjustment factors to make indirect estimates by combining vital statistics information on gestational age at induced abortion (from the 12 States that report to the National Center for Health Statistics) with a life table of spontaneous abortion probabilities. The adjustment factors are applied to data on induced abortions from the Alan Guttmacher Institute Abortion Provider Survey and data on births from U.S. vital statistics. For the United States in 1980 the probability of a spontaneous abortion is 19 percent, given the presence of induced abortion. Once the effects of spontaneous abortion are discounted, women in 1980 intended to allow 73 percent of their pregnancies to proceed to a live birth. One medical benefit to a population practicing induced abortion is that induced abortions avert some spontaneous abortions, leading to a lower mean gestational duration at the time of spontaneous abortion. PMID:1594736
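The censoring adjustment described above can be sketched as a weighted sum: combine the gestational-age distribution of induced abortions with life-table probabilities of later spontaneous loss. All numbers below are invented placeholders for illustration, not the vital-statistics or life-table values used in the study.

```python
# Hedged sketch of the censoring idea, with invented numbers: estimate the
# fraction of induced abortions that pre-empted (censored) a spontaneous
# abortion, by weighting each gestational week's share of induced abortions
# by the probability of later spontaneous loss from that week onward.

# Hypothetical share of induced abortions performed at each gestational week
induced_by_week = {8: 0.5, 10: 0.3, 12: 0.2}

# Hypothetical probability that a pregnancy still viable at week w would
# later have ended in spontaneous abortion (from a life table)
later_loss_given_week = {8: 0.06, 10: 0.04, 12: 0.03}

censored_fraction = sum(
    share * later_loss_given_week[week]
    for week, share in induced_by_week.items()
)
print(round(censored_fraction, 3))
```

The symmetric adjustment, spontaneous losses that pre-empt intended induced abortions, would be computed the same way with the roles of the two distributions exchanged, which is the "vice versa" accounting the abstract describes.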
An Introduction to Using Surface Geophysics to Characterize Sand and Gravel Deposits
Lucius, Jeffrey E.; Langer, William H.; Ellefsen, Karl J.
2006-01-01
This report is an introduction to surface geophysical techniques that aggregate producers can use to characterize known deposits of sand and gravel. Five well-established and well-tested geophysical methods are presented: seismic refraction and reflection, resistivity, ground penetrating radar, time-domain electromagnetism, and frequency-domain electromagnetism. Depending on site conditions and the selected method(s), geophysical surveys can provide information concerning areal extent and thickness of the deposit, thickness of overburden, depth to the water table, critical geologic contacts, and location and correlation of geologic features. In addition, geophysical surveys can be conducted prior to intensive drilling to help locate auger or drill holes, reduce the number of drill holes required, calculate stripping ratios to help manage mining costs, and provide continuity between sampling sites to upgrade the confidence of reserve calculations from probable reserves to proved reserves. Perhaps the greatest value of geophysics to aggregate producers may be the speed of data acquisition, reduced overall costs, and improved subsurface characterization.
An Introduction to Using Surface Geophysics to Characterize Sand and Gravel Deposits
Lucius, Jeffrey E.; Langer, William H.; Ellefsen, Karl J.
2007-01-01
This report is an introduction to surface geophysical techniques that aggregate producers can use to characterize known deposits of sand and gravel. Five well-established and well-tested geophysical methods are presented: seismic refraction and reflection, resistivity, ground penetrating radar, time-domain electromagnetism, and frequency-domain electromagnetism. Depending on site conditions and the selected method(s), geophysical surveys can provide information concerning areal extent and thickness of the deposit, thickness of overburden, depth to the water table, critical geologic contacts, and location and correlation of geologic features. In addition, geophysical surveys can be conducted prior to intensive drilling to help locate auger or drill holes, reduce the number of drill holes required, calculate stripping ratios to help manage mining costs, and provide continuity between sampling sites to upgrade the confidence of reserve calculations from probable reserves to proved reserves. Perhaps the greatest value of geophysics to aggregate producers may be the speed of data acquisition, reduced overall costs, and improved subsurface characterization.
Active life expectancy from annual follow-up data with missing responses.
Izmirlian, G; Brock, D; Ferrucci, L; Phillips, C
2000-03-01
Active life expectancy (ALE) at a given age is defined as the expected remaining years free of disability. In this study, three categories of health status are defined according to the ability to perform activities of daily living independently. Several studies have used increment-decrement life tables to estimate ALE, without error analysis, from only a baseline and one follow-up interview. The present work conducts an individual-level covariate analysis using a three-state Markov chain model for multiple follow-up data. Using a logistic link, the model estimates single-year transition probabilities among states of health, accounting for missing interviews. This approach has the advantages of smoothing subsequent estimates and increased power by using all follow-ups. We compute ALE and total life expectancy from these estimated single-year transition probabilities. Variance estimates are computed using the delta method. Data from the Iowa Established Population for the Epidemiologic Study of the Elderly are used to test the effects of smoking on ALE for all 5-year age groups past 65 years, controlling for sex and education.
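The central computation described above (ALE and total life expectancy from single-year transition probabilities) can be sketched in a few lines; the three-state chain and its transition values below are illustrative stand-ins, not the fitted estimates from the Iowa data.

```python
# Toy three-state model: 0 = active, 1 = disabled, 2 = dead (absorbing).
# P[i][j] is an assumed one-year transition probability, for illustration only.
P = [
    [0.90, 0.07, 0.03],  # from active
    [0.15, 0.75, 0.10],  # from disabled
    [0.00, 0.00, 1.00],  # from dead
]

def life_expectancies(P, horizon=120):
    """Expected future years active and alive, starting in the active state."""
    dist = [1.0, 0.0, 0.0]              # state distribution at the current age
    active_years = total_years = 0.0
    for _ in range(horizon):
        active_years += dist[0]         # expected fraction of this year active
        total_years += dist[0] + dist[1]
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return active_years, total_years

ale, tle = life_expectancies(P)         # ALE and total life expectancy
```

In the study the single-year probabilities come from a logistic model with individual covariates; here they are fixed constants to keep the sketch self-contained.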
Code of Federal Regulations, 2012 CFR
2012-07-01
... Campaign Site and Seasonal Requirements for Class II and III FEMs for PM10-2.5 and PM2.5 C Table C-5 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-5 Table C-5 to Subpart C of Part...
40 CFR Table C-3 to Subpart C of... - Test Specifications for Pb in TSP and Pb in PM 10 Methods
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test Specifications for Pb in TSP and Pb in PM 10 Methods C Table C-3 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL..., Subpt. C, Table C-3 Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM 10...
40 CFR Table C-3 to Subpart C of... - Test Specifications for Pb in TSP and Pb in PM 10 Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test Specifications for Pb in TSP and Pb in PM 10 Methods C Table C-3 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL..., Subpt. C, Table C-3 Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM 10...
40 CFR Table C-3 to Subpart C of... - Test Specifications for Pb in TSP and Pb in PM10 Methods
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Test Specifications for Pb in TSP and Pb in PM10 Methods C Table C-3 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL..., Subpt. C, Table C-3 Table C-3 to Subpart C of Part 53—Test Specifications for Pb in TSP and Pb in PM10...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Campaign Site and Seasonal Requirements for Class II and III FEMs for PM10-2.5 and PM2.5 C Table C-5 to Subpart C of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Between Candidate Methods and Reference Methods Pt. 53, Subpt. C, Table C-5 Table C-5 to Subpart C of Part...
Aerosol-type retrieval and uncertainty quantification from OMI data
NASA Astrophysics Data System (ADS)
Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna
2017-11-01
We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites.
We found that the uncertainty of AOD expressed by posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
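As a rough illustration of the Bayesian model averaging step (not the actual OMI retrieval code), per-model AOD posteriors can be combined with evidence weights; the Gaussian posteriors and evidence values below are invented for the example.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density, used here as a stand-in for a per-model AOD posterior."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical candidate aerosol models: (posterior mean, posterior std), evidence
models = [
    {"posterior": (0.30, 0.05), "evidence": 0.6},
    {"posterior": (0.35, 0.08), "evidence": 0.3},
    {"posterior": (0.50, 0.10), "evidence": 0.1},
]

def averaged_posterior(x, models):
    """Evidence-weighted mixture of the per-model posterior densities."""
    z = sum(m["evidence"] for m in models)
    return sum(m["evidence"] / z * gaussian_pdf(x, *m["posterior"]) for m in models)

# Point estimate: evidence-weighted mean of the model means
aod = (sum(m["evidence"] * m["posterior"][0] for m in models)
       / sum(m["evidence"] for m in models))
```

The mixture density, rather than any single model's posterior, is what carries the model-selection uncertainty into the total uncertainty budget.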
Flexible Method for Inter-object Communication in C++
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Gould, Jack J.
1994-01-01
A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
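A minimal, language-neutral sketch of the variable-table pattern described above is given here in Python rather than the paper's C++; the class and member names are illustrative, not taken from the original code.

```python
class Variable:
    """A variable that carries its own type information and access methods."""
    def __init__(self, name, vtype, value=None):
        self.name, self.vtype, self._value = name, vtype, value

    def get(self):
        return self._value

    def set(self, value):
        if not isinstance(value, self.vtype):   # data-integrity check
            raise TypeError(f"{self.name} expects {self.vtype.__name__}")
        self._value = value

class VariableTable:
    """Catalogs variables; callers may keep direct handles to skip lookups."""
    def __init__(self):
        self._vars = {}

    def define(self, name, vtype, value=None):
        var = Variable(name, vtype, value)
        self._vars[name] = var
        return var                              # direct handle to the variable

    def __getitem__(self, name):
        return self._vars[name]

table = VariableTable()
mach = table.define("mach", float, 0.8)
mach.set(0.85)                                  # no table lookup needed
```

Because each variable object exposes its own accessors, repeated table lookups can be bypassed while the set() path still enforces type integrity, mirroring the access-function and data-integrity mechanisms the abstract describes.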
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weimar, Mark R.; Daly, Don S.; Wood, Thomas W.
Both nuclear power and nuclear weapons programs should have (related) economic signatures which are detectable at some scale. We evaluated this premise in a series of studies using national economic input/output (IO) data. Statistical discrimination models using economic IO tables predict with a high probability whether a country with an unknown predilection for nuclear weapons proliferation is in fact engaged in nuclear power development or nuclear weapons proliferation. We analyzed 93 IO tables, spanning the years 1993 to 2005, for 37 countries that are either members or associates of the Organization for Economic Cooperation and Development (OECD). The 2009 OECD input/output tables featured 48 industrial sectors based on International Standard Industrial Classification (ISIC) Revision 3, and described the respective economies in current country-of-origin valued currency. We converted and transformed these reported values to US 2005 dollars using appropriate exchange rates and implicit price deflators, and addressed discrepancies in reported industrial sectors across tables. We then classified countries with Random Forest using either the adjusted or industry-normalized values. Random Forest, a classification tree technique, separates and categorizes countries using a very small, select subset of the 2304 individual cells in the IO table. A nation’s efforts in nuclear power, be it for electricity or nuclear weapons, are an enterprise with a large economic footprint -- an effort so large that it should discernibly perturb coarse country-level economic data such as that found in yearly input-output economic tables. The neoclassical economic input-output model describes a country’s or region’s economy in terms of the requirements of industries to produce the current level of economic output.
An IO table row shows the distribution of an industry’s output to the industrial sectors while a table column shows the input required of each industrial sector by a given industry.
Nutrient transport and transformation beneath an infiltration basin
Sumner, D.M.; Rolston, D.E.; Bradner, L.A.
1998-01-01
Field experiments were conducted to examine nutrient transport and transformation beneath an infiltration basin used for the disposal of treated wastewater. Removal of nitrogen from infiltrating water by denitrification was negligible beneath the basin, probably because of subsurface aeration as a result of daily interruptions in basin loading. Retention of organic nitrogen in the upper 4.6 m of the unsaturated zone (water table depth of approximately 11 m) during basin loading resulted in concentrations of nitrate as much as 10 times that of the applied treated wastewater, following basin 'rest' periods of several weeks, which allowed time for mineralization and nitrification. Approximately 90% of the phosphorus in treated wastewater was removed within the upper 4.6 m of the subsurface, primarily by adsorption reactions, with abundant iron and aluminum oxyhydroxides occurring as soil coatings. A reduction in the flow rate of infiltrating water arriving at the water table may explain the accumulation of relatively coarse (>0.45 µm), organic forms of nitrogen and phosphorus slightly below the water table. Mineralization and nitrification reactions at this second location of organic nitrogen accumulation contributed to concentrations of nitrate as much as three times that of the applied treated wastewater. Phosphorus, which accumulated below the water table, was immobilized by adsorption or precipitation reactions during basin rest periods.
Verification of aerial photo stand volume tables for southeast Alaska.
Theodore S. Setzer; Bert R. Mead
1988-01-01
Aerial photo volume tables are used in the multilevel sampling system of Alaska Forest Inventory and Analysis. These volume tables are presented with a description of the data base and methods used to construct the tables. Volume estimates compiled from the aerial photo stand volume tables and associated ground-measured values are compared and evaluated.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Test Specifications for PM10, PM2.5 and PM10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of Environment... Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM10, PM2.5 and...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Test Specifications for PM10, PM2.5 and PM10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of Environment... Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM10, PM2.5 and...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Test Specifications for PM10, PM2.5 and PM10-2.5 Candidate Equivalent Methods C Table C-4 to Subpart C of Part 53 Protection of Environment... Pt. 53, Subpt. C, Table C-4 Table C-4 to Subpart C of Part 53—Test Specifications for PM10, PM2.5 and...
The Application of LT-Table in TRIZ Contradiction Resolving Process
NASA Astrophysics Data System (ADS)
Wei, Zihui; Li, Qinghai; Wang, Donglin; Tian, Yumei
TRIZ is used to resolve invention problems. ARIZ is the most powerful systematic method, integrating all of the TRIZ heuristics. Definition of the ideal final result (IFR), identification of contradictions, and resource utilization are the main lines of ARIZ. However, resource searching in ARIZ suffers from blindness. Alexandr set up a mathematical model of the transformation of hereditary information in an invention problem using the theory of catastrophes, and provided a method of resource searching using the LT-table. This paper introduces the application of the LT-table to contradiction resolving. Resource utilization using the LT-table is incorporated into the ARIZ steps as an addition to TRIZ, and the method is applied to the design of a separator paper punching machine.
A new algorithm for stand table projection models.
Quang V. Cao; V. Clark Baldwin
1999-01-01
The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
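The three steps can be illustrated with a toy stand table; the survival rates, growth shift, and the single basal-area constraint below are invented, and the adjustment uses the closed-form minimum-norm least-squares correction for one linear constraint rather than the paper's full constrained formulation.

```python
import math

trees = [120.0, 80.0, 40.0, 10.0]        # trees per diameter class (assumed)
survival = [0.95, 0.93, 0.90, 0.85]      # step 1: per-class survival rates
growth_shift = 0.2                       # step 2: fraction moving up one class
mid_d = [10.0, 15.0, 20.0, 25.0]         # class midpoint diameters, cm

# Steps 1-2: apply survival, then shift a fraction of each class upward
survived = [n * s for n, s in zip(trees, survival)]
projected = [0.0] * len(survived)
for i, n in enumerate(survived):
    projected[i] += n * (1.0 - growth_shift)
    if i + 1 < len(projected):
        projected[i + 1] += n * growth_shift
    else:
        projected[i] += n * growth_shift # top class keeps its upgrowth

# Step 3: adjust counts so stand basal area satisfies a . x = target_ba
ba_per_tree = [math.pi * (d / 200.0) ** 2 for d in mid_d]   # m^2 per tree
target_ba = 1.02 * sum(b * n for b, n in zip(ba_per_tree, projected))
ax = sum(b * n for b, n in zip(ba_per_tree, projected))
aa = sum(b * b for b in ba_per_tree)
lam = (target_ba - ax) / aa              # minimum-norm multiplier
adjusted = [n + lam * b for n, b in zip(projected, ba_per_tree)]
```

The minimum-norm form changes each class count as little as possible while hitting the constraint exactly, which is the spirit of the least-squares adjustment the abstract describes.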
40 CFR 53.20 - General provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Methods SO2, CO, O3, and NO2 § 53.20 General provisions. (a) The test procedures given in this subpart... selectable measurement range, one range must be that specified in table B-1 (standard range for SO2), and a... concentrations) than that specified in table B-1. For SO2 methods, table B-1 specifies special performance...
Statistical summaries of streamflow in Oklahoma through 1999
Tortorelli, R.L.
2002-01-01
Statistical summaries of streamflow records through 1999 for gaging stations in Oklahoma and parts of adjacent states are presented for 188 stations with at least 10 years of streamflow record. Streamflow at 113 of the stations is regulated for specific periods. Data for these periods were analyzed separately to account for changes in streamflow due to regulation by dams or other human modification of streamflow. A brief description of the location, drainage area, and period of record is given for each gaging station. A brief regulation history also is given for stations with a regulated streamflow record. This descriptive information is followed by tables of mean annual discharges, magnitude and probability of exceedance of annual high flows, magnitude and probability of exceedance of annual instantaneous peak flows, durations of daily mean flow, magnitude and probability of non-exceedance of annual low flows, and magnitude and probability of non-exceedance of seasonal low flows.
Are higher doses of proton pump inhibitors better in acute peptic bleeding?
Villalón, Alejandro; Olmos, Roberto; Rada, Gabriel
2016-06-24
Although there is broad consensus about the benefits of proton pump inhibitors in acute upper peptic bleeding, there is still controversy over their optimal dosing. Searching in the Epistemonikos database, which is maintained by screening 30 databases, we identified six systematic reviews including 27 randomized trials addressing this question. We combined the evidence using meta-analysis and generated a summary of findings table following the GRADE approach. We concluded that high-dose proton pump inhibitors probably result in little or no difference in re-bleeding rate or mortality. The risk/benefit and cost/benefit balance probably favors the use of low doses.
NASA Technical Reports Server (NTRS)
Vesely, William E.; Colon, Alfredo E.
2010-01-01
Design safety/reliability is associated with the probability that no failure-causing faults exist in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failures. Reliability-growth testing requirements are based on initial assurance and fault-detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-growth testing requirements, which are based on reliability principles and factors, should be used instead.
Development of Modern Methods for Determination of Stabilizers in Propellants
1996-04-01
powder will give information about the history of this powder and an indication of its future usefulness. In other words, the determination of...been excluded in Tables I and II. According to Table I, in order to develop an HPLC method for DPA-stabilized powders, the products that should be... powders were determined by each country using its own HPLC method. The results are given in Table XVI. As indicated, the agreement between the two
NASA Astrophysics Data System (ADS)
Marketin, T.; Huther, L.; Martínez-Pinedo, G.
2016-02-01
Background: r-process nucleosynthesis models rely, by necessity, on nuclear structure models for input. Particularly important are β-decay half-lives of neutron-rich nuclei. At present only a single systematic calculation exists that provides values for all relevant nuclei, making it difficult to test the sensitivity of nucleosynthesis models to this input. Additionally, even though there are indications that their contribution may be significant, the impact of first-forbidden transitions on decay rates has not been systematically studied within a consistent model. Purpose: Our goal is to provide a table of β-decay half-lives and β-delayed neutron emission probabilities, including first-forbidden transitions, calculated within a fully self-consistent microscopic theoretical framework. The results are used in an r-process nucleosynthesis calculation to assess the sensitivity of heavy element nucleosynthesis to weak interaction reaction rates. Method: We use a fully self-consistent covariant density functional theory (CDFT) framework. The ground state of all nuclei is calculated with the relativistic Hartree-Bogoliubov (RHB) model, and excited states are obtained within the proton-neutron relativistic quasiparticle random phase approximation (pn-RQRPA). Results: The β-decay half-lives, β-delayed neutron emission probabilities, and the average number of emitted neutrons have been calculated for 5409 nuclei in the neutron-rich region of the nuclear chart. We observe a significant contribution of the first-forbidden transitions to the total decay rate in nuclei far from the valley of stability. The experimental half-lives are in general well reproduced for even-even, odd-A, and odd-odd nuclei, in particular for short-lived nuclei. The resulting data table is included with the article as Supplemental Material.
Conclusions: In certain regions of the nuclear chart, first-forbidden transitions constitute a large fraction of the total decay rate and must be taken into account consistently in modern evaluations of half-lives. Both the β-decay half-lives and β-delayed neutron emission probabilities have a noticeable impact on the results of heavy element nucleosynthesis models.
Symbol Tables and Branch Tables: Linking Applications Together
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2011-01-01
This document explores the computer techniques used to execute software whose parts are compiled and linked separately. The computer techniques include using a branch table or indirect address table to connect the parts. Methods of storing the information in data structures are discussed as well as differences between C and C++.
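The branch-table idea is easy to show outside C/C++ as well; here is a small Python sketch in which separately defined routines are reached through a table of function references rather than hard-coded calls (all names are illustrative).

```python
def start():
    return "started"

def stop():
    return "stopped"

def status():
    return "ok"

# The "branch table": command name -> routine reference (indirect address)
branch_table = {"start": start, "stop": stop, "status": status}

def dispatch(command):
    """Resolve a command through the table and call the routine indirectly."""
    handler = branch_table.get(command)
    if handler is None:
        raise KeyError(f"unknown command: {command}")
    return handler()
```

In C the same structure would be an array of function pointers indexed by an enum; linking separately compiled parts then reduces to agreeing on the table's layout.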
Distribution of microbial physiologic types in an aquifer contaminated by crude oil
Bekins, B.A.; Godsy, E.M.; Warren, E.
1999-01-01
We conducted a plume-scale study of the microbial ecology in the anaerobic portion of an aquifer contaminated by crude-oil compounds. The data provide insight into the patterns of ecological succession, microbial nutrient demands, and the relative importance of free-living versus attached microbial populations. The most probable number (MPN) method was used to characterize the spatial distribution of six physiologic types: aerobes, denitrifiers, iron-reducers, heterotrophic fermenters, sulfate-reducers, and methanogens. Both free-living and attached numbers were determined over a broad cross-section of the aquifer extending horizontally from the source of the plume at a nonaqueous oil body to 66 m downgradient, and vertically from above the water table to the base of the plume below the water table. Point samples from widely spaced locations were combined with three closely spaced vertical profiles to create a map of physiologic zones for a cross-section of the plume. Although some estimates suggest that less than 1% of the subsurface microbial population can be grown in laboratory cultures, the MPN results presented here provide a comprehensive qualitative picture of the microbial ecology at the plume scale. Areas in the plume that are evolving from iron-reducing to methanogenic conditions are clearly delineated and generally occupy 25-50% of the plume thickness. Lower microbial numbers below the water table compared to the unsaturated zone suggest that nutrient limitations may be important in limiting growth in the saturated zone. Finally, the data indicate that an average of 15% of the total population is suspended.
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built using hashing to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables using any type of hashing algorithm. Meanwhile, for multiple-table search, there is also no generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve the above problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into the multiple-table search using a fast, yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings.
Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.
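A stripped-down sketch of multi-table hashing (plain random-hyperplane LSH, without the paper's dominant-set selection or bitwise weighting) shows how several tables widen the candidate set for a query; all parameters below are arbitrary.

```python
import random

random.seed(0)
DIM, BITS, TABLES = 8, 6, 3

def make_table():
    # Each table hashes with BITS random hyperplanes into a BITS-bit bucket key
    planes = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(BITS)]
    return planes, {}

def key(planes, v):
    return tuple(int(sum(p * x for p, x in zip(plane, v)) > 0.0)
                 for plane in planes)

tables = [make_table() for _ in range(TABLES)]
vectors = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(20)]

for i, v in enumerate(vectors):          # index every vector in every table
    for planes, buckets in tables:
        buckets.setdefault(key(planes, v), []).append(i)

def query(v):
    """Union of the hit buckets across tables: the candidate set."""
    cands = set()
    for planes, buckets in tables:
        cands.update(buckets.get(key(planes, v), []))
    return cands
```

Each additional table raises the chance that a true neighbor lands in some hit bucket; the paper's contribution is choosing and weighting the hash functions so those extra tables are informative rather than redundant.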
Piezoelectric Resonance Enhanced Microwave And Optoelectronic Interactive Devices
2013-05-01
Corning 0080 glass complex permittivity measured by NECVP method near 4.01 GHz (TE103) and 5.19 GHz (TE105)... Table A.4: Corning 0080 glass complex permittivity measured by post-resonant technique. Table A.5: ... Table A.6: Complex permittivity of Pyrex glass rod measured by NECVP method near 4.01 GHz (TE103) and 5.19
Code of Federal Regulations, 2013 CFR
2013-07-01
... EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants: Rubber Tire Manufacturing Pt. 63, Subpt. XXXX, Table 9 Table 9 to... Method 311 (40 CFR part 60, appendix A), or approved alternative method, test results indicating the mass...
Winograd, I.J.; Szabo, B. J.
1986-01-01
The distribution of vein calcite, tufa, and other features indicative of paleo-groundwater discharge indicates that during the early to middle Pleistocene, the water table at Ash Meadows, in the Amargosa Desert, Nevada, and at Furnace Creek Wash, in east-central Death Valley, California, was tens to hundreds of meters above the modern water table, and that groundwater discharge occurred up to 18 km up the hydraulic gradient from modern discharge areas. Uranium series dating of the calcitic veins permits calculation of rates of apparent water table decline; rates of 0.02 to 0.08 m/1000 yr are indicated for Ash Meadows and 0.2 to 0.6 m/1000 yr for Furnace Creek Wash. The rates for Furnace Creek Wash closely match a published estimate of vertical crustal offset for this area, suggesting that tectonism is a major cause for the displacement observed. In general, displacements of the paleo-water table probably reflect a combination of: (a) tectonic uplift of vein calcite and tufa, unaccompanied by a change in water table altitude; (b) decline in water table altitude in response to tectonic depression of areas adjacent to dated veins and associated tufa; (c) decline in water table altitude in response to increasing aridity caused by major uplift of the Sierra Nevada and Transverse Ranges during the Quaternary; and (d) decline in water altitude in response to erosion triggered by increasing aridity and/or tectonism. A synthesis of geohydrologic, neotectonic, and paleoclimatologic information with the vein-calcite data permits the inference that the water table in the south-central Great Basin progressively lowered throughout the Quaternary. This inference is pertinent to an evaluation of the utility of thick (200-600 m) unsaturated zones of the region for isolating solidified radioactive wastes from the hydrosphere for hundreds of millennia.
Wastes buried a few tens to perhaps 100 m above the modern water table--that is, above possible water level rises due to future pluvial climates--are unlikely to be inundated by a rising water table in the foreseeable geologic future. (Author's abstract)
2014-02-01
reactions over time. List of Tables: Table 1. Performance predictions from Cheetah 7.0... making it a highly desirable target (Table 1). Table 1. Performance predictions from Cheetah 7.0 (4): Substance, ρ(a), ∆Hf (kJ/mol), Pcj(d) (GPa), Dv(e) (km... HMX(c): 1.90, 75.02, 37.19, 9.246, 11.00, –21.61. Notes: (a) density; (b) predicted using the methods of Rice (10–14); (c) ∆Hf and density numbers obtained from Cheetah 7.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, that are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made especially for the highly branched molecules to reduce drastically the dimensionality and correspondingly the amount of the tabulated data that is needed to be stored. Despite these approximations, the dependencies between the various geometrical variables can be still well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
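The tabulation idea can be illustrated generically: precompute a cumulative table for the target bending-angle density once, then draw each trial by inverse-CDF lookup so nearly every generated geometry is acceptable. The density below is an arbitrary stand-in, not the actual united-atom bending potential.

```python
import bisect
import math
import random

N = 1000
angles = [math.pi * i / (N - 1) for i in range(N)]
# Stand-in density: sin(theta) Jacobian times a narrow Boltzmann-like factor
weights = [math.sin(t) * math.exp(-(t - 1.9) ** 2 / 0.02) for t in angles]

# Build the cumulative table a priori, as the method does
cdf, total = [], 0.0
for w in weights:
    total += w
    cdf.append(total)
cdf = [c / total for c in cdf]

def sample_angle(rng=random):
    """One bending-angle trial via inverse-CDF table lookup."""
    return angles[bisect.bisect_left(cdf, rng.random())]

rng = random.Random(42)
samples = [sample_angle(rng) for _ in range(5000)]
```

Because the table is built from the target density itself, each lookup yields a trial already distributed (to grid resolution) like the target, which is why a single trial per angle suffices.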
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-10-01
Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using a derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory-access saving and 40% decoding-time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2010 CFR
2010-01-01
... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...
Memoir Upon the Formation of a Deaf Variety of the Human Race.
ERIC Educational Resources Information Center
Bell, Alexander Graham
A compilation of data on the hereditary aspects of deafness presented at a conference in 1883 by Alexander Graham Bell, the document contains records of familial occurrences of deafness and marriage statistics. Tables indicate that within schools for the deaf many students had the same family name; it was considered highly probable that a…
Archaeological Data Recovery in the Abiquiu Reservoir Multiple Resource Area, New Mexico,
1982-09-01
location probably does not represent a locus of past human activity. … LA27006 (AR56) Description: The site is a small… No topo map location for site (n = 3). (Table 10: Estimated and Actual Frequencies of Artifact Classes.)
Diameter Growth, Survival, and Volume Estimates for Missouri Trees
Stephen R. Shifley; W. Brad Smith
1982-01-01
Measurements of more than 20,000 Missouri trees were summarized by species and diameter class into tables of mean annual diameter growth, annual probability of survival, net cubic foot volume, and net board foot volume. In the absence of better forecasting techniques, this information can be utilized to project short-term changes for Missouri trees, inventory plots,...
42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...
42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...
42 CFR 81.10 - Use of cancer risk assessment models in NIOSH IREP.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Use of cancer risk assessment models in NIOSH IREP... Risk Models Used To Estimate Probability of Causation § 81.10 Use of cancer risk assessment models in... tables were developed from analyses of cancer mortality risk among the Japanese atomic bomb survivor...
ERIC Educational Resources Information Center
Philippines Univ., Quezon City. Science Education Center.
This module discusses methods of obtaining table salt from seawater. Topic areas considered include: (1) obtaining salt by solar evaporation of seawater in holes; (2) obtaining salt by boiling seawater in pots; (3) how table salt is obtained from seawater in the Philippines; and (4) methods of making salt by solar evaporation of seawater in the…
Diop-Sidibe, Nafissatou
2005-06-01
The association between youths' sexual and reproductive attitudes and behaviors and those of their peers and parents has been documented; however, information on siblings' influence is scarce, especially for developing countries. Data on 1,395 female and 1,242 male survey respondents aged 15-24 from three cities in Côte d'Ivoire were analyzed. Life-table analysis was conducted to examine respondents' probability of remaining sexually inexperienced according to siblings' history of premarital childbearing. Cox multivariate regressions were used to estimate respondents' relative risks of sexual debut by age 17 and by age 24. At any age between 15 and 24 years, the life-table probability of remaining sexually inexperienced was typically lower among persons who had at least one sibling with a premarital birth than among those who had no such sibling. In general, among those with at least one sibling who had had a premarital birth, the probability was lower if the sibling or siblings and the respondent were of the same gender rather than opposite genders, and the probability was lowest among those who had a brother and a sister with a history of premarital childbearing. In the multivariate analysis for males, having one or more brothers only, or having at least one brother and at least one sister, with a history of premarital childbearing was associated with increased relative risks of being sexually experienced by ages 17 and 24. No such association was found for females. Programs that seek to reduce premarital sexual activity among young people should develop strategies that take into account the potential influence of siblings.
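The life-table probability used above can be illustrated with a minimal sketch (all counts are invented): the cumulative probability of remaining event-free is the product of per-interval conditional survival probabilities.

```python
def life_table_survival(at_risk, events):
    """at_risk[i] and events[i] are the numbers entering and experiencing
    the event in age interval i. Returns the cumulative probability of
    remaining event-free after each interval."""
    curve, s = [], 1.0
    for n, d in zip(at_risk, events):
        s *= (n - d) / n  # conditional survival in this interval
        curve.append(s)
    return curve

# Hypothetical counts for ages 15, 16, 17:
curve = life_table_survival(at_risk=[1000, 900, 750], events=[100, 150, 150])
```

Comparing such curves between respondents with and without a sibling who had a premarital birth is the kind of contrast the analysis above reports.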
Enter the reverend: introduction to and application of Bayes' theorem in clinical ophthalmology.
Thomas, Ravi; Mengersen, Kerrie; Parikh, Rajul S; Walland, Mark J; Muliyil, Jayprakash
2011-12-01
Ophthalmic practice utilizes numerous diagnostic tests, some of which are used to screen for disease. Interpretation of test results and many clinical management issues are actually problems in inverse probability that can be solved using Bayes' theorem. Use two-by-two tables to understand Bayes' theorem and apply it to clinical examples. Specific examples of the utility of Bayes' theorem in diagnosis and management. Two-by-two tables are used to introduce concepts and understand the theorem. The application in interpretation of diagnostic tests is explained. Clinical examples demonstrate its potential use in making management decisions. Positive predictive value and conditional probability. The theorem demonstrates the futility of testing when prior probability of disease is low. Application to untreated ocular hypertension demonstrates that the estimate of glaucomatous optic neuropathy is similar to that obtained from the Ocular Hypertension Treatment Study. Similar calculations are used to predict the risk of acute angle closure in a primary angle closure suspect, the risk of pupillary block in a diabetic undergoing cataract surgery, and the probability that an observed decrease in intraocular pressure is due to the medication that has been started. The examples demonstrate how data required for management can at times be easily obtained from available information. Knowledge of Bayes' theorem helps in interpreting test results and supports the clinical teaching that testing for conditions with a low prevalence has a poor predictive value. In some clinical situations Bayes' theorem can be used to calculate vital data required for patient management. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
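The abstract's central point, that a positive test means little when the prior probability is low, can be sketched with a two-by-two-table calculation. The sensitivity, specificity, and prevalence values below are hypothetical, not taken from the paper.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' theorem via the cells of a two-by-two table:
    P(disease | positive test)."""
    true_pos = sensitivity * prevalence               # diseased, test positive
    false_pos = (1.0 - specificity) * (1.0 - prevalence)  # healthy, test positive
    return true_pos / (true_pos + false_pos)

# A fairly good test (90% sensitive, 95% specific) applied at 1% prevalence:
ppv = positive_predictive_value(0.90, 0.95, 0.01)
```

Even with these favourable test characteristics, the positive predictive value comes out near 15%, illustrating the futility of testing when the prior probability of disease is low.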
Booth, Robert K.; Hotchkiss, Sara C.; Wilcox, Douglas A.
2005-01-01
Summary: 1. Discoloration of polyvinyl chloride (PVC) tape has been used in peatland ecological and hydrological studies as an inexpensive way to monitor changes in water-table depth and reducing conditions. 2. We investigated the relationship between depth of PVC tape discoloration and measured water-table depth at monthly time steps during the growing season within nine kettle peatlands of northern Wisconsin. Our specific objectives were to: (1) determine if PVC discoloration is an accurate method of inferring water-table depth in Sphagnum-dominated kettle peatlands of the region; (2) assess seasonal variability in the accuracy of the method; and (3) determine if systematic differences in accuracy occurred among microhabitats, PVC tape colour and peatlands. 3. Our results indicated that PVC tape discoloration can be used to describe gradients of water-table depth in kettle peatlands. However, accuracy differed among the peatlands studied, and was systematically biased in early spring and late summer/autumn. Regardless of the month when the tape was installed, the highest elevations of PVC tape discoloration showed the strongest correlation with midsummer (around July) water-table depth and average water-table depth during the growing season. 4. The PVC tape discoloration method should be used cautiously when precise estimates are needed of seasonal changes in the water-table.
A time series approach to inferring groundwater recharge using the water table fluctuation method
NASA Astrophysics Data System (ADS)
Crosbie, Russell S.; Binning, Philip; Kalma, Jetse D.
2005-01-01
The water table fluctuation method for determining recharge from precipitation and water table measurements was originally developed on an event basis. Here a new multievent time series approach is presented for inferring groundwater recharge from long-term water table and precipitation records. Additional new features are the incorporation of a variable specific yield based upon the soil moisture retention curve, proper accounting for the Lisse effect on the water table, and the incorporation of aquifer drainage so that recharge can be detected even if the water table does not rise. A methodology for filtering noise and non-rainfall-related water table fluctuations is also presented. The model has been applied to 2 years of field data collected in the Tomago sand beds near Newcastle, Australia. It is shown that gross recharge estimates are very sensitive to time step size and specific yield. Properly accounting for the Lisse effect is also important to determining recharge.
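A minimal sketch of the event-based water table fluctuation method that this paper extends: recharge is the specific yield times each rainfall-driven water-table rise. The recession extrapolation, variable specific yield, Lisse-effect correction, and noise filtering described above are omitted, and all values are illustrative.

```python
def wtf_recharge(water_levels, specific_yield):
    """Sum Sy * (rise) over all positive water-table rises.
    Levels and the returned recharge share the same length unit."""
    recharge = 0.0
    for prev, curr in zip(water_levels, water_levels[1:]):
        rise = curr - prev
        if rise > 0:
            recharge += specific_yield * rise
    return recharge

# Hypothetical water-table elevations (mm) over five observation times:
levels = [1000.0, 998.5, 1012.0, 1008.0, 1015.0]
r = wtf_recharge(levels, specific_yield=0.1)  # Sy * (13.5 + 7.0)
```

The strong sensitivity to specific yield noted in the abstract is visible directly: recharge scales linearly with Sy in this formulation.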
Interpretation of diagnostic data: 4. How to do it with a more complex table.
1983-10-15
A more complex table is especially useful when a diagnostic test produces a wide range of results and your patient's levels are near one of the extremes. The following guidelines will be useful: Identify the several cut-off points that could be used. Fill in a complex table along the lines of Table I, showing the numbers of patients at each level who have and do not have the target disorder. Generate a simple table for each cut-off point, as in Table II, and determine the sensitivity (TP rate) and specificity (TN rate) at each of them. Select the cut-off point that makes the most sense for your patient's test result and proceed as in parts 2 and 3 of our series. Alternatively, construct an ROC curve by plotting the TP and FP rates that attend each cut-off point. If you keep your tables and ROC curves close at hand, you will gradually accumulate a set of very useful guides. However, if you looked very hard at what was happening, you will probably have noticed that they are not very useful for patients whose test results fall in the middle zones, or for those with just one positive result of two tests; the post-test likelihood of disease in these patients lurches back and forth past 50%, depending on where the cut-off point is. We will show you how to tackle this problem in part 5 of our series. It involves some maths, but you will find that its very powerful clinical application can be achieved with a simple nomogram or with some simple calculations.
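The cut-off procedure in the guidelines above can be sketched as follows, with invented test results: each candidate cut-off yields one sensitivity/specificity pair, i.e. one point of the ROC curve.

```python
def roc_points(diseased, healthy, cutoffs):
    """For each cut-off, compute sensitivity (TP rate) and specificity
    (TN rate) from the test results of diseased and healthy patients."""
    points = []
    for c in cutoffs:
        tp = sum(1 for x in diseased if x >= c)  # diseased, test positive
        tn = sum(1 for x in healthy if x < c)    # healthy, test negative
        points.append((c, tp / len(diseased), tn / len(healthy)))
    return points

# Invented test values for six diseased and six healthy patients:
diseased = [8, 9, 7, 6, 9, 5]
healthy = [3, 4, 2, 6, 5, 3]
table = roc_points(diseased, healthy, cutoffs=[4, 6, 8])
```

Lowering the cut-off raises sensitivity at the cost of specificity, which is exactly the trade-off the series suggests resolving by choosing the cut-off nearest the patient's own result.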
The Italian national trends in smoking initiation and cessation according to gender and education.
Sardu, C; Mereu, A; Minerba, L; Contu, P
2009-09-01
OBJECTIVES. This study aims to assess trends in smoking initiation and cessation across successive birth cohorts, according to gender and education, in order to provide useful suggestions for tobacco control policy. STUDY DESIGN. The study is based on data from the "Health conditions and resort to sanitary services" survey carried out in Italy from October 2004 to September 2005 by the National Institute of Statistics. Through a multistage sampling procedure, a sample representative of the entire national territory was selected. In order to calculate trends in smoking initiation and cessation, data were stratified by birth cohort, gender, and education level, and analyzed with the life table method. The cumulative probability of smoking initiation across subsequent generations shows a downward trend followed by a plateau. This result highlights that there is no evidence to support the hypothesis of earlier smoking initiation. The cumulative probability of quitting across subsequent generations follows an upward trend, highlighting the growing tendency of smokers to become "early quitters" who give up before 30 years of age. The results suggest that the Italian antismoking approach, for the most part targeted at preventing the initiation of smoking by emphasizing its negative consequences, has an effect on early smoking cessation. Health policies should reinforce the existing trend of early quitting through specific actions. In addition, our results show that men with low education exhibit the highest probability of smoking initiation and the lowest probability of early quitting, and therefore should be targeted with special attention.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Measurements Required, and Maximum Discrepancy Specification C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges..., June 22, 2010, table C-1 to subpart C was revised, effective Aug. 23, 2010. For the convenience of the...
Ning, Shuoying; Zhang, Wenchao; Sun, Yan; Feng, Jinian
2017-07-06
In this study, we first construct an age-stage, two-sex life table for the onion maggot, Delia antiqua, grown on three host plants: onion, scallion, and garlic. We found that onion is the optimal host for this species: populations grown on onion have the maximum fecundity, the longest adult longevity and reproduction period, and the shortest immature developmental time. In contrast, fecundity on the other hosts was lower, particularly on garlic, but these crops can also serve as important secondary hosts for this pest. These data will be useful to growers developing integrated management programs specific to each host. We also compared demographic analyses using individually reared and group-reared methods. The two methods provided similarly accurate outcomes for estimating population dynamics in this species. However, for gregarious species, using the individually reared method to construct insect life tables produces inaccurate results, and researchers must use the group-reared method for life table calculations. When studying large groups of insects, group-reared demographic analysis for the age-stage, two-sex life table can also simplify statistical analysis, save considerable labor, and reduce experimental errors.
Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication
Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon
2013-01-01
Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and support for various crypto protocols. For this reason, many studies have addressed implementing ECC on resource-constrained devices within a practical execution time. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so we combine the previous method with ours for practical purposes. This novel structure allows speed performance and table size to be adjusted finely, so the pre-computation table can be customized for a given purpose. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
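As background for the NAF representation named above (this is the standard conversion, not the authors' new pre-computation method): every positive integer has a unique signed-digit representation with digits in {-1, 0, 1} and no two adjacent nonzero digits, which reduces the number of point additions in scalar multiplication.

```python
def naf(k):
    """Return the non-adjacent form digits of k, least significant first."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)  # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d           # subtracting d makes k divisible by 4 or 2
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

# 7 = 8 - 1, so its NAF is [-1, 0, 0, 1] (three zeros-and-adds become one).
digits_of_7 = naf(7)
```

On average only one third of NAF digits are nonzero (versus one half in binary), and width-w variants trade a larger pre-computation table for even fewer nonzero digits, which is the design space the paper's method tunes.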
40 CFR 90.7 - Reference materials.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Appendix A to subpart D, Table 3. ASTM D2699-92: Standard Test Method for Knock Characteristics of Motor... Knock Characteristics of Motor and Aviation Fuels by the Motor Method Appendix A to subpart D, Table 3...
40 CFR 90.7 - Reference materials.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Appendix A to subpart D, Table 3. ASTM D2699-92: Standard Test Method for Knock Characteristics of Motor... Knock Characteristics of Motor and Aviation Fuels by the Motor Method Appendix A to subpart D, Table 3...
Statistical Short-Range Guidance for Peak Wind Speed Forecasts at Edwards Air Force Base, CA
NASA Technical Reports Server (NTRS)
Dreher, Joseph; Crawford, Winifred; Lafosse, Richard; Hoeth, Brian; Burns, Kerry
2008-01-01
The peak winds near the surface are an important forecast element for Space Shuttle landings. As defined in the Shuttle Flight Rules (FRs), there are peak wind thresholds that cannot be exceeded in order to ensure the safety of the shuttle during landing operations. The National Weather Service Spaceflight Meteorology Group (SMG) is responsible for weather forecasts for all shuttle landings. They indicate peak winds are a challenging parameter to forecast. To alleviate the difficulty in making such wind forecasts, the Applied Meteorology Unit (AMU) developed a personal-computer-based graphical user interface (GUI) for displaying peak wind climatology and probabilities of exceeding peak-wind thresholds for the Shuttle Landing Facility (SLF) at Kennedy Space Center. However, the shuttle must land at Edwards Air Force Base (EAFB) in southern California when weather conditions at Kennedy Space Center in Florida are not acceptable, so SMG forecasters requested that a similar tool be developed for EAFB. Marshall Space Flight Center (MSFC) personnel archived and performed quality control of 2-minute average and 10-minute peak wind speeds at each tower adjacent to the main runway at EAFB from 1997-2004. They calculated wind climatologies and probabilities of average peak wind occurrence based on the average speed. The climatologies were calculated for each tower and month, and were stratified by hour, direction, and direction/hour. For the probabilities of peak wind occurrence, MSFC calculated empirical and modeled probabilities of meeting or exceeding specific 10-minute peak wind speeds using probability density functions. The AMU obtained and reformatted the data into Microsoft Excel PivotTables, which allow users to display different values with point-click-drag techniques. The GUI was then created from the PivotTables using Visual Basic for Applications code.
The GUI is run through a macro within Microsoft Excel and allows forecasters to quickly display and interpret peak wind climatology and likelihoods in a fast-paced operational environment. A summary of how the peak wind climatologies and probabilities were created and an overview of the GUI will be presented.
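One ingredient described above, the empirical probability of meeting or exceeding a peak-wind threshold, can be sketched as follows; the observations and the threshold are invented for illustration.

```python
def exceedance_probability(peak_winds, threshold):
    """Empirical fraction of observations with peak wind >= threshold."""
    hits = sum(1 for w in peak_winds if w >= threshold)
    return hits / len(peak_winds)

# Hypothetical archive of 10-minute peak wind speeds (knots) for one
# tower/month/hour stratum:
observed_peaks = [12, 18, 25, 9, 30, 22, 15, 27, 11, 20]
p = exceedance_probability(observed_peaks, threshold=20)
```

In the actual tool these empirical probabilities were complemented by modeled probabilities fit with probability density functions, which smooth the estimates where observations are sparse.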
Inception horizon concept as a basis for sinkhole hazard mapping
NASA Astrophysics Data System (ADS)
Vouillamoz, J.; Jeannin, P.-Y.; Kopp, L.; Chantry, R.
2012-04-01
The office for natural hazards of the Vaud canton (Switzerland) is interested in a pragmatic approach to map sinkhole hazard in karst areas. A team was created by merging resources from a geoengineering company (CSD) and a karst specialist (SISKA). Large areas of Vaud territory are limestone karst, in which the collapse hazard is essentially related to the collapse of soft rocks covering underground cavities rather than the collapse of limestone roofs or underground chambers. This statement is probably not valid for cases in gypsum and salt. Thus, for limestone areas, the zones of highest danger are voids covered by a thin layer of soft sediments. The spatial distributions of voids and cover thickness should therefore be used for the hazard assessment. VOID ASSESSMENT Inception features (IF) are millimetre- to decimetre-thick planes (mainly bedding but also fractures) showing a mineralogical, granulometrical, or physical contrast with the surrounding formation that makes them especially susceptible to karst development (FILIPPONI ET AL., 2009). The analysis of more than 1500 km of cave passage showed that karst conduits are mainly developed along such discrete layers within a limestone series. The so-called Karst-ALEA method (FILIPPONI ET AL., 2011) is based on this concept and aims at assessing the probability of encountering karst conduits when drilling a tunnel. This approach requires as inputs the identification of inception features (IF), the recognition of paleo-water-tables (PWT), and their respective spatial distribution in a 3D geological model. We suggest adjusting the Karst-ALEA method to assess the void distribution in the subsurface as a basis for sinkhole hazard mapping. Inception features (horizons or fractures) and paleo-water-tables (PWT) first have to be identified using visible caves and dolines. These features should then be introduced into a 3D geological model.
Intersections of IF and PWT located close to the land surface are areas with a high probability of karst occurrence. ASSESSMENT OF THE SOFT-SEDIMENT COVER Classical geological investigations (mapping, DEM analysis, drilling, etc.) are used to establish a map of the thickness of soft sediment on top of the limestone. This can also be included in the 3D model. The combination of the void and soft-sediment information in the 3D model makes it possible to derive the sinkhole hazard map. This is currently being developed and applied in the Vaud canton, and first results will be presented. BIBLIOGRAPHY FILIPPONI, M., JEANNIN, P. & TACHER, L. (2009): Evidence of inception horizons in karst conduit networks. Geomorphology, 106, 86-99. FILIPPONI, M., SCHMASSMANN, S., JEANNIN, P. Y. & PARRIAUX, A. (2011): Karst-ALEA method: a risk assessment method of karst for tunnel projects. Application to the Tunnel of Flims (GR, Switzerland). Proc. 9th conference on limestone hydrogeology. Besançon, France. p. 181-184.
The kinematics of table tennis racquet: differences between topspin strokes.
Bańkosz, Ziemowit; Winiarski, Sławomir
2017-03-01
Shot kinematics in table tennis has not been sufficiently described in the literature. Assessing the racquet trajectory and its speed and time characteristics makes it possible to emphasize certain technical elements in the training process in order, for example, to increase the strength, spin rate, or speed of a shot while maintaining its accuracy. The aim of this work was to measure selected kinematic parameters of the table tennis racquet during forehand and backhand topspin shots, while considering the differences between these strokes. The measurements took place in a certified biomechanical laboratory using a motion analysis system. The study involved 12 female table tennis players in high-level sports training and performance. Each subject had to complete a series of six tasks presenting different varieties of topspin shots. The longest racquet trajectory was associated with forehand shots, shots played against a ball with backspin, and winner shots. The maximum racquet velocity occurred precisely at the moment of impact with the ball. The individual values of velocity and distance were larger in the direction of the acting force, depending on the individual shot. Changing the type of topspin shot requires changes in time, velocity, and primarily distance parameters, as well as the direction of the racquet. That the maximum speed of the racquet occurs at the moment of impact is probably the most important principle of playing technique. The results can be directly used to improve the training of table tennis techniques, especially in the application and use of topspin shots.
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Rastaetter, L.; Kuznetsova, M.; Singer, H.; Balch, C.; Weimer, D.; Toth, G.; Ridley, A.; Gombosi, T.; Wiltberger, M.;
2013-01-01
In this paper we continue the community-wide rigorous modern space weather model validation efforts carried out within the GEM, CEDAR, and SHINE programs. In this particular effort, coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), modelers, and the science community, we focus on studying the models' capability to reproduce observed ground magnetic field fluctuations, which are closely related to the geomagnetically induced current phenomenon. One of the primary motivations of the work is to support NOAA SWPC in their selection of the next numerical model that will be transitioned into operations. Six geomagnetic events and 12 geomagnetic observatories were selected for validation. While modeled and observed magnetic field time series are available for all 12 stations, the primary metrics analysis is based on six stations selected to represent high-latitude and mid-latitude locations. Events-based analysis and the corresponding contingency tables were built for each event and each station. The elements of the contingency table were then used to calculate the Probability of Detection (POD), Probability of False Detection (POFD), and Heidke Skill Score (HSS) for rigorous quantification of the models' performance. In this paper the summary results of the metrics analyses are reported in terms of POD, POFD, and HSS. More detailed analyses can be carried out using the event-by-event contingency tables provided as an online appendix. An online interface built at the CCMC and described in the supporting information is also available for more detailed time series analyses.
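The contingency-table metrics named above have standard definitions, sketched here with hypothetical counts of hits (a), false alarms (b), misses (c), and correct negatives (d).

```python
def skill_scores(a, b, c, d):
    """POD, POFD, and HSS from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives."""
    pod = a / (a + c)    # Probability of Detection: fraction of events hit
    pofd = b / (b + d)   # Probability of False Detection: false-alarm fraction
    n = a + b + c + d
    # Expected number correct by chance, given the marginal totals:
    expected = ((a + c) * (a + b) + (b + d) * (c + d)) / n
    hss = (a + d - expected) / (n - expected)  # Heidke Skill Score
    return pod, pofd, hss

pod, pofd, hss = skill_scores(a=30, b=10, c=5, d=55)
```

HSS measures the fraction of correct forecasts beyond those expected by chance, so a value of 0 means no skill over a random forecast and 1 means a perfect one.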
Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.
Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick
2013-01-01
Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or acute luminescence inhibition with Vibrio fischeri, could potentially yield significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though its global pattern is close. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. Copyright © 2012 Elsevier Ltd. All rights reserved.
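The MaxEnt principle applied in this paper can be illustrated with a univariate toy (the paper itself rebuilds a bivariate table from bioassay data): among all distributions on a discrete support with a prescribed mean, the maximum-entropy solution has the exponential form p_i proportional to exp(lambda * x_i), with the multiplier lambda found numerically. Support and target mean below are invented.

```python
import math

def maxent_with_mean(xs, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution on support xs with the given mean,
    found by bisection on the Lagrange multiplier lambda."""
    def mean_for(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    for _ in range(iters):  # mean_for is monotone increasing in lambda
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

# Four concentration classes with a prescribed mean class index of 1.2:
p = maxent_with_mean([0, 1, 2, 3], target_mean=1.2)
```

The same principle generalizes to a bivariate table: each observed constraint (here a mean, in the paper the bioassay observations) contributes a multiplier, and the least-committal table consistent with all constraints is the exponential-family solution.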
Methods to predict seasonal high water table (SHGWT) : final report.
DOT National Transportation Integrated Search
2017-04-03
The research study was sectioned into 5 separate tasks. Task 1 included defining the seasonal high ground water table (SHGWT); describing methods and techniques used to determine SHGWTs; and identifying problems associated with estimating SHGWT conditions...
Diameter growth, survival, and volume estimates for trees in Indiana and Illinois.
W. Brad Smith; Stephen R. Shifley
1984-01-01
Measurements of more than 15,000 Indiana and Illinois trees were summarized by species and diameter class into tables of mean annual diameter growth, annual probability of survival, net cubic foot volume, and net board foot volume. In the absence of better forecasting techniques, this information can be utilized to project short-term changes for Indiana and Illinois...
Olive, K. A.
2016-10-01
The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,062 new measurements from 721 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. As a result, among the 117 reviews are many that are new or heavily revised, including those on Pentaquarks and Inflation.
Regional Input-Output Tables and Trade Flows: an Integrated and Interregional Non-survey Approach
Boero, Riccardo; Edwards, Brian Keith; Rivera, Michael Kelly
2017-03-20
Regional input–output tables and trade flows: an integrated and interregional non-survey approach. Regional Studies. Regional analyses require detailed and accurate information about dynamics happening within and between regional economies. However, regional input–output tables and trade flows are rarely observed and must be estimated using up-to-date information. Common estimation approaches vary widely but consider tables and flows independently. Here, by using common economic assumptions and available economic information, this paper presents a method that integrates the estimation of regional input–output tables and trade flows across regions. Examples of the method's implementation are presented and compared with other approaches, suggesting that the integrated approach provides advantages in terms of estimation accuracy and analytical capabilities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Monitoring Systems (CEMS) 4 Table 4 of Subpart AAAA to Part 60 Protection of Environment ENVIRONMENTAL.... 60, Subpt. AAAA, Table 4 Table 4 of Subpart AAAA to Part 60—Requirements for Continuous Emission... unit P.S. 2 Method 6C. 4. Carbon Monoxide 125 percent of the maximum expected hourly potential carbon...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Emission Monitoring Systems (CEMS) 3 Table 3 of Subpart AAAA of Part 60 Protection of Environment... Definitions What definitions must I know? Pt. 60, Subpt. AAAA, Table 3 Table 3 of Subpart AAAA of Part 60... levels Use the following methods in appendix A of this part to measure oxygen (or carbon dioxide) 1...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Monitoring Systems (CEMS) 4 Table 4 of Subpart AAAA of Part 60 Protection of Environment ENVIRONMENTAL... Definitions What definitions must I know? Pt. 60, Subpt. AAAA, Table 4 Table 4 of Subpart AAAA of Part 60... dioxide emissions of the municipal waste combustion unit P.S. 2 Method 6C. 4. Carbon Monoxide 125 percent...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Monitoring Systems (CEMS) 4 Table 4 of Subpart AAAA of Part 60 Protection of Environment ENVIRONMENTAL... Definitions What definitions must I know? Pt. 60, Subpt. AAAA, Table 4 Table 4 of Subpart AAAA of Part 60... dioxide emissions of the municipal waste combustion unit P.S. 2 Method 6C. 4. Carbon Monoxide 125 percent...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Emission Monitoring Systems (CEMS) 3 Table 3 of Subpart AAAA to Part 60 Protection of Environment... SOURCES Pt. 60, Subpt. AAAA, Table 3 Table 3 of Subpart AAAA to Part 60—Requirements for Validating... following methods in appendix A of this part to measure oxygen (or carbon dioxide) 1. Nitrogen Oxides (Class...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hancock, S; Clements, C; Hyer, D
2016-06-15
Purpose: To develop and demonstrate application of a method that characterizes deviation of linac x-ray beams from the centroid of the volumetric radiation isocenter as a function of gantry, collimator, and table variables. Methods: A set of Winston-Lutz ball-bearing images was used to determine the gantry radiation isocenter as the midrange of deviation values resulting from gantry and collimator rotation. Also determined were displacement of table axis from gantry isocenter and recommended table axis adjustment. The method, previously reported, has been extended to include the effect of collimator walkout by obtaining measurements with 0 and 180 degree collimator rotation for each gantry angle. Twelve images were used to characterize the volumetric isocenter for the full range of available gantry, collimator, and table rotations. Results: Three Varian True Beam, two Elekta Infinity and four Versa HD linacs at five institutions were tested using identical methodology. Varian linacs exhibited substantially less deviation due to head sag than Elekta linacs (0.4 mm vs. 1.2 mm on average). One linac from each manufacturer had additional isocenter deviation of 0.3 to 0.4 mm due to jaw instability with gantry and collimator rotation. For all linacs, the achievable isocenter tolerance was dependent on adjustment of collimator position offset, transverse position steering, and alignment of the table axis with gantry isocenter, facilitated by these test results. The pattern and magnitude of table axis wobble vs. table angle was reproducible and unique to each machine. Conclusion: This new method provides a comprehensive set of isocenter deviation values including all variables. It effectively facilitates minimization of deviation between beam center and target (ball-bearing) position. This method was used to quantify the effect of jaw instability on isocenter deviation and to identify the offending jaw. The test is suitable for incorporation into a routine machine QA program. Software development was performed by Radiological Imaging Technology, Inc.
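The central computation in the abstract above, taking the gantry radiation isocenter along one axis as the midrange of beam-center deviations measured from Winston-Lutz images, can be sketched as follows. This is a minimal illustration with made-up deviation values, not the authors' (Radiological Imaging Technology) software:

```python
# Sketch: locate the gantry radiation isocenter along one axis as the
# midrange of Winston-Lutz beam-center deviations, and report the walkout
# radius. All numbers are illustrative, not measured data.
def midrange_center(deviations_mm):
    """Midpoint between the extreme deviations along one axis (mm)."""
    return (max(deviations_mm) + min(deviations_mm)) / 2.0

# Deviations of the beam center from the ball bearing at several
# gantry/collimator combinations (hypothetical values).
lateral = [-0.3, 0.1, 0.4, -0.2, 0.0, 0.3]
center = midrange_center(lateral)              # isocenter offset on this axis
radius = (max(lateral) - min(lateral)) / 2.0   # walkout radius about that center
```

Repeating this per axis and per rotation variable (gantry, collimator, table) yields the kind of comprehensive deviation set the abstract describes.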
A New Approach to Simulate Groundwater Table Dynamics and Its Validation in China
NASA Astrophysics Data System (ADS)
Lv, M.; Lu, H.; Dan, L.; Yang, K.
2017-12-01
Groundwater plays a very important role in hydrology-climate-human activity interaction, but groundwater table dynamics are currently not well simulated in global-scale land surface models. Meanwhile, almost all groundwater schemes adopt a specific yield method to estimate the groundwater table, in which determining the proper specific yield value remains a big challenge. In this study, we developed a Soil Moisture Correlation (SMC) method to simulate groundwater table dynamics. We coupled SMC with a hydrological model (named NEW) and compared it with the original model, in which a specific yield method is used (named CTL). Both NEW and CTL were tested in the Tangnaihai Subbasin of the Yellow River and the Jialingjiang Subbasin along the Yangtze River, where groundwater is less impacted by human activities. The simulated discharges by NEW and CTL were compared against gauge observations. The comparison reveals that, after calibration, both models are able to reproduce the discharge well. However, no parameter needs to be calibrated for SMC, indicating that the SMC method is more efficient and easier to use than the specific yield method. Since there is no direct groundwater table observation in these two basins, simulated groundwater tables were compared with the global data set provided by Fan et al. (2013). Both NEW and CTL estimate lower depths than Fan does. Moreover, when comparing the variation of terrestrial water storage (TWS) derived from NEW with that observed by GRACE, good agreement was confirmed. It demonstrates that the SMC method is able to reproduce groundwater table dynamics reliably.
NASA Astrophysics Data System (ADS)
Jun, Jinhyuck; Park, Minwoo; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Do, Munhoe; Lee, Dongchan; Kim, Taehoon; Choi, Junghoe; Luk-Pat, Gerard; Miloslavsky, Alex
2015-03-01
As the industry pushes to ever more complex illumination schemes to increase resolution for next-generation memory and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Therefore device manufacturers are evaluating improvements in SRAF placement algorithms which do not sacrifice main feature (MF) patterning capability. There are several well-known methods to generate SRAFs, such as Rule-Based Assist Features (RBAF), Model-Based Assist Features (MBAF), and hybrid assist features combining both RBAF and MBAF. Rule-Based Assist Features continue to be deployed, even with the availability of Model-Based Assist Features and Inverse Lithography Technology (ILT). Certainly for the 3x nm node, and even at the 2x nm nodes and lower, RBAF is used because it demands less run time and provides better consistency. Since RBAF is needed now and in the future, what is also needed is a faster method to create the AF rule tables. The current method typically involves making masks and printing wafers that contain several experiments, varying the main-feature configurations, AF configurations, dose conditions, and defocus conditions; this is a time-consuming and expensive process. In addition, as the technology node shrinks, wafer process changes and source shape redesigns occur more frequently, escalating the cost of rule table creation. Furthermore, as the demand on process margin escalates, there is a greater need for multiple rule tables, each tailored to a specific set of main-feature configurations. Model Assisted Rule Tables (MART) creates a set of test patterns and evaluates the simulated CD at nominal, defocused, and off-dose conditions. It also uses lithographic simulation to evaluate the likelihood of AF printing. It then analyzes the simulation data to automatically create AF rule tables. The analysis results display the cost of different AF configurations as the space grows between a pair of main features. In summary, the model-based rule table method makes it much easier to create rule tables, leading to faster rule-table creation and a lower barrier to the creation of more rule tables.
2012-03-01
edu 75 Appendix C Factor Analysis of Measurement Items Interrole conflict Factor Analysis (FA): Table: KMO and Bartlett’s Test Kaiser-Meyer...Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. 77 POS FA: Table: KMO and Bartlett’s...Tempo FA: Table: KMO and Bartlett’s Test Kaiser-Meyer-Olkin Measure of Sampling Adequacy. .733 Bartlett’s Test of Sphericity Approx. Chi-Square
40 CFR Table B-4 to Subpart B of... - Line Voltage and Room Temperature Test Conditions
Code of Federal Regulations, 2012 CFR
2012-07-01
... Conditions B Table B-4 to Subpart B of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Testing Performance Characteristics of Automated Methods for SO2, CO, O3, and NO2 Pt. 53, Subpt. B, Table B-4 Table B-4 to Subpart B of Part 53—Line Voltage and Room Temperature Test Conditions Test day...
40 CFR Table B-4 to Subpart B of... - Line Voltage and Room Temperature Test Conditions
Code of Federal Regulations, 2013 CFR
2013-07-01
... Conditions B Table B-4 to Subpart B of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Testing Performance Characteristics of Automated Methods for SO2, CO, O3, and NO2 Pt. 53, Subpt. B, Table B-4 Table B-4 to Subpart B of Part 53—Line Voltage and Room Temperature Test Conditions Test day...
40 CFR Table B-4 to Subpart B of... - Line Voltage and Room Temperature Test Conditions
Code of Federal Regulations, 2014 CFR
2014-07-01
... Conditions B Table B-4 to Subpart B of Part 53 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Testing Performance Characteristics of Automated Methods for SO2, CO, O3, and NO2 Pt. 53, Subpt. B, Table B-4 Table B-4 to Subpart B of Part 53—Line Voltage and Room Temperature Test Conditions Test day...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Continuous Emission Monitoring Systems (CEMS) 6 Table 6 to Subpart BBBB of Part 60 Protection of Environment... or Before August 30, 1999 Pt. 60, Subpt. BBBB, Table 6 Table 6 to Subpart BBBB of Part 60—Model Rule... levels Use the following methods in appendix A of this part to measure oxygen (or carbon dioxide) 1...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Continuous Emission Monitoring Systems (CEMS) 6 Table 6 to Subpart BBBB of Part 60 Protection of Environment... or Before August 30, 1999 Pt. 60, Subpt. BBBB, Table 6 Table 6 to Subpart BBBB of Part 60—Model Rule... levels Use the following methods in appendix A of this part to measure oxygen (or carbon dioxide) 1...
Ecological periodic tables for nekton and benthic macrofaunal community usage of estuarine habitats Steven P. Ferraro, U.S. Environmental Protection Agency, Newport, OR Background/Questions/Methods The chemical periodic table, the Linnaean system of classification, and the Her...
Report on computation of repetitive hyperbaric-hypobaric decompression tables
NASA Technical Reports Server (NTRS)
Edel, P. O.
1975-01-01
The tables were constructed specifically for NASA's simulated weightlessness training program; they provide for 8 depth ranges covering depths from 7 to 47 FSW, with exposure times of 15 to 360 minutes. These tables were based upon an 8-compartment model using tissue half-time values of 5 to 360 minutes and Workman M-values for control of the decompression obligation resulting from hyperbaric exposures. Supersaturation ratios of 1.55:1 to 2:1 were used for control of ascents to altitude following such repetitive dives. Adequacy of the method and the resultant tables was determined in light of past experience with decompression involving hyperbaric-hypobaric interfaces in human exposures. Using these criteria, the method showed conformity with empirically determined values. In areas where a discrepancy existed, the tables would err in the direction of safety.
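The compartment model described above follows the classic Haldanean scheme: each tissue compartment takes up inert gas exponentially with its own half-time, and ascent is permitted only while the tissue tension stays within a supersaturation-ratio limit. A minimal sketch of those two textbook formulas (the exact NASA table parameters and M-value schedule are not reproduced here):

```python
# Sketch of Haldanean compartment loading and a supersaturation-ratio
# ascent check. Pressures in atmospheres absolute; parameters illustrative.
def tissue_tension(p0, p_ambient, minutes, half_time):
    """Exponential uptake: tension approaches ambient inert-gas pressure
    with the compartment's characteristic half-time."""
    return p0 + (p_ambient - p0) * (1.0 - 2.0 ** (-minutes / half_time))

def ascent_allowed(tension, p_altitude, max_ratio):
    """Permit ascent to altitude only if the supersaturation ratio
    (tissue tension over ambient pressure at altitude) stays within limit."""
    return tension / p_altitude <= max_ratio
```

After one half-time at depth a compartment has closed exactly half the gap to ambient pressure, which the first assertion below checks.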
NASA Astrophysics Data System (ADS)
Beringer, J.; Arguin, J.-F.; Barnett, R. M.; Copic, K.; Dahl, O.; Groom, D. E.; Lin, C.-J.; Lys, J.; Murayama, H.; Wohl, C. G.; Yao, W.-M.; Zyla, P. A.; Amsler, C.; Antonelli, M.; Asner, D. M.; Baer, H.; Band, H. R.; Basaglia, T.; Bauer, C. W.; Beatty, J. J.; Belousov, V. I.; Bergren, E.; Bernardi, G.; Bertl, W.; Bethke, S.; Bichsel, H.; Biebel, O.; Blucher, E.; Blusk, S.; Brooijmans, G.; Buchmueller, O.; Cahn, R. N.; Carena, M.; Ceccucci, A.; Chakraborty, D.; Chen, M.-C.; Chivukula, R. S.; Cowan, G.; D'Ambrosio, G.; Damour, T.; de Florian, D.; de Gouvêa, A.; DeGrand, T.; de Jong, P.; Dissertori, G.; Dobrescu, B.; Doser, M.; Drees, M.; Edwards, D. A.; Eidelman, S.; Erler, J.; Ezhela, V. V.; Fetscher, W.; Fields, B. D.; Foster, B.; Gaisser, T. K.; Garren, L.; Gerber, H.-J.; Gerbier, G.; Gherghetta, T.; Golwala, S.; Goodman, M.; Grab, C.; Gritsan, A. V.; Grivaz, J.-F.; Grünewald, M.; Gurtu, A.; Gutsche, T.; Haber, H. E.; Hagiwara, K.; Hagmann, C.; Hanhart, C.; Hashimoto, S.; Hayes, K. G.; Heffner, M.; Heltsley, B.; Hernández-Rey, J. J.; Hikasa, K.; Höcker, A.; Holder, J.; Holtkamp, A.; Huston, J.; Jackson, J. D.; Johnson, K. F.; Junk, T.; Karlen, D.; Kirkby, D.; Klein, S. R.; Klempt, E.; Kowalewski, R. V.; Krauss, F.; Kreps, M.; Krusche, B.; Kuyanov, Yu. V.; Kwon, Y.; Lahav, O.; Laiho, J.; Langacker, P.; Liddle, A.; Ligeti, Z.; Liss, T. M.; Littenberg, L.; Lugovsky, K. S.; Lugovsky, S. B.; Mannel, T.; Manohar, A. V.; Marciano, W. J.; Martin, A. D.; Masoni, A.; Matthews, J.; Milstead, D.; Miquel, R.; Mönig, K.; Moortgat, F.; Nakamura, K.; Narain, M.; Nason, P.; Navas, S.; Neubert, M.; Nevski, P.; Nir, Y.; Olive, K. A.; Pape, L.; Parsons, J.; Patrignani, C.; Peacock, J. A.; Petcov, S. T.; Piepke, A.; Pomarol, A.; Punzi, G.; Quadt, A.; Raby, S.; Raffelt, G.; Ratcliff, B. N.; Richardson, P.; Roesler, S.; Rolli, S.; Romaniouk, A.; Rosenberg, L. J.; Rosner, J. L.; Sachrajda, C. T.; Sakai, Y.; Salam, G. 
P.; Sarkar, S.; Sauli, F.; Schneider, O.; Scholberg, K.; Scott, D.; Seligman, W. G.; Shaevitz, M. H.; Sharpe, S. R.; Silari, M.; Sjöstrand, T.; Skands, P.; Smith, J. G.; Smoot, G. F.; Spanier, S.; Spieler, H.; Stahl, A.; Stanev, T.; Stone, S. L.; Sumiyoshi, T.; Syphers, M. J.; Takahashi, F.; Tanabashi, M.; Terning, J.; Titov, M.; Tkachenko, N. P.; Törnqvist, N. A.; Tovey, D.; Valencia, G.; van Bibber, K.; Venanzoni, G.; Vincter, M. G.; Vogel, P.; Vogt, A.; Walkowiak, W.; Walter, C. W.; Ward, D. R.; Watari, T.; Weiglein, G.; Weinberg, E. J.; Wiencke, L. R.; Wolfenstein, L.; Womersley, J.; Woody, C. L.; Workman, R. L.; Yamamoto, A.; Zeller, G. P.; Zenin, O. V.; Zhang, J.; Zhu, R.-Y.; Harper, G.; Lugovsky, V. S.; Schaffner, P.
2012-07-01
This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2658 new measurements from 644 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 112 reviews are many that are new or heavily revised, including those on Heavy-Quark and Soft-Collinear Effective Theory, Neutrino Cross Section Measurements, Monte Carlo Event Generators, Lattice QCD, Heavy Quarkonium Spectroscopy, Top Quark, Dark Matter, Vcb & Vub, Quantum Chromodynamics, High-Energy Collider Parameters, Astrophysical Constants, Cosmological Parameters, and Dark Matter. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov/. The 2012 edition of Review of Particle Physics is published for the Particle Data Group as article 010001 in volume 86 of Physical Review D. This edition should be cited as: J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012).
Decision curve analysis to compare 3 versions of Partin Tables to predict final pathologic stage.
Augustin, Herbert; Sun, Maxine; Isbarn, Hendrik; Pummer, Karl; Karakiewicz, Pierre
2012-01-01
To perform a decision curve analysis (DCA) to compare the Partin Tables 1997, 2001, and 2007 for their clinical applicability. Clinical and pathologic data of 687 consecutive patients treated with open radical prostatectomy for clinically localized prostate cancer between 2003 and 2008 at a single institution were used. DCA quantified the net benefit relating to specific threshold probabilities of extraprostatic extension (EPE), seminal vesicle involvement (SVI), and lymph node involvement (LNI). Overall, EPE, SVI, and LNI were recorded in 17.8%, 6.0%, and 1.2% of patients, respectively. The DCA favored the 2007 version for EPE predictions, the 1997 version for SVI, and none of the versions for LNI. The DCA indicates that for very low prevalence conditions such as LNI (1.2%), decision models are not useful. For low prevalence rates such as SVI, the use of different versions of the Partin Tables does not translate into meaningful net gains differences. Finally, for intermediate prevalence conditions such as EPE (18%), despite apparent performance differences, the net benefit differences were also marginal. In consequence, the current analysis could not confirm an important benefit from the use of the Partin Tables and it could not identify a clearly better version of any of the 3 available iterations. Copyright © 2012 Elsevier Inc. All rights reserved.
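Decision curve analysis compares prediction models by their net benefit at each threshold probability, crediting true positives and penalizing false positives by the odds of the threshold (the standard Vickers-Elkin formulation). A minimal sketch with hypothetical counts, not the study's data:

```python
# Sketch of the net-benefit computation underlying decision curve analysis.
# tp/fp are counts of true/false positives at the given threshold; n is the
# cohort size; all numbers below are illustrative.
def net_benefit(tp, fp, n, threshold):
    """Net benefit at a threshold probability: true-positive rate minus the
    false-positive rate weighted by the odds of the threshold."""
    w = threshold / (1.0 - threshold)
    return tp / n - (fp / n) * w

# Example: 20 true and 30 false positives in 100 patients at a 20% threshold.
nb = net_benefit(20, 30, 100, 0.2)
```

Plotting net benefit across a range of thresholds for each Partin Tables version, plus the treat-all and treat-none strategies, yields the decision curves the abstract compares.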
Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan
2017-02-01
To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
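The sampling experiment described above, repeatedly drawing random subsamples of a fixed size and recording the spread of contingency-table statistics, can be sketched roughly as follows. The data are synthetic and only sensitivity is tracked here, unlike the full set of parameters in the study:

```python
# Sketch: variability of a contingency-table statistic (sensitivity) under
# repeated random subsampling, on synthetic (truth, test) pairs.
import random

def sensitivity(sample):
    """Sensitivity from (truth, test_positive) pairs: TP / (TP + FN)."""
    tp = sum(1 for truth, test in sample if truth and test)
    fn = sum(1 for truth, test in sample if truth and not test)
    return tp / (tp + fn)

def sampling_spread(population, size, repeats=100, seed=1):
    """Range (max - min) of sensitivity estimates over repeated subsamples."""
    rng = random.Random(seed)
    estimates = [sensitivity(rng.sample(population, size))
                 for _ in range(repeats)]
    return max(estimates) - min(estimates)
```

Running `sampling_spread` at sizes from 100 up to 2,000 on a large cohort reproduces the qualitative finding that estimates stabilize only once samples reach several hundred participants.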
Iino, Yoichi; Kojima, Takeji
2016-01-01
The purpose of this study was to investigate the effect of the racket mass and the rate of strokes on the kinematics and kinetics of the trunk and the racket arm in the table tennis topspin backhand. Eight male Division I collegiate table tennis players hit topspin backhands against topspin balls projected at 75 balls · min(-1) and 35 balls · min(-1) using three rackets varying in mass of 153.5, 176 and 201.5 g. A motion capture system was used to obtain trunk and racket arm motion data. The joint torques of the racket arm were determined using inverse dynamics. The racket mass did not significantly affect any of the trunk and racket arm kinematics and kinetics examined, except for the wrist dorsiflexion torque, which was significantly larger for the large-mass racket than for the small-mass racket. The racket speed at impact was significantly lower for the high ball frequency than for the low ball frequency. This was probably because pelvis and upper trunk axial rotations tended to be more restricted for the high ball frequency. The result highlights one of the advantages of playing close to the table and making the rally speed fast.
Sagayama, Hiroyuki; Hamaguchi, Genki; Toguchi, Makiko; Ichikawa, Mamiko; Yamada, Yosuke; Ebine, Naoyuki; Higaki, Yasuki; Tanaka, Hiroaki
2017-10-01
Total daily energy expenditure (TEE) and physical activity level (PAL) are important for adequate nutritional management in athletes. The PAL of table tennis has been estimated to about 2.0: it is categorized as a moderate-activity sport (4.0 metabolic equivalents [METs]) in the Compendium of Physical Activities. However, modern table tennis makes high physiological demands. The aims of the current study were to examine (1) TEE and PAL of competitive table tennis players and (2) the physiological demands of various types of table tennis practice. In Experiment 1, we measured TEE and PAL in 10 Japanese college competitive table tennis players (aged 19.9 ± 1.1 years) using the doubly labeled water (DLW) method during training and with an exercise training log and self-reported energy intake. TEE was 15.5 ± 1.9 MJ·day -1 (3695 ± 449 kcal·day -1 ); PAL was 2.53 ± 0.25; and the average training duration was 181 ± 38 min·day -1 . In Experiment 2, we measured METs of five different practices in seven college competitive players (20.6 ± 1.2 years). Three practices without footwork were 4.5-5.2 METs, and two practices with footwork were 9.5-11.5 METs. Table tennis practices averaged 7.1 ± 3.2 METs, demonstrating similarities with other vigorous racket sports. In conclusion, the current Compendium of Physical Activities underestimates the physiological demands of table tennis practice for competition; the estimated energy requirement should be based on DLW method data.
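PAL is defined as TEE divided by basal metabolic rate, and the conventional MET convention equates 1 MET to roughly 1 kcal per kg of body mass per hour. A small sketch of both calculations; the inputs below are illustrative, not the study's individual measurements:

```python
# Sketch of the two standard formulas behind the abstract's quantities.
# All inputs are illustrative.
def pal(tee_mj, bmr_mj):
    """Physical activity level: total energy expenditure over basal rate."""
    return tee_mj / bmr_mj

def activity_energy_kcal(mets, weight_kg, minutes):
    """MET convention: 1 MET ~ 1 kcal per kg body mass per hour."""
    return mets * weight_kg * (minutes / 60.0)

# E.g., TEE of 15.5 MJ/day with a hypothetical BMR of 6.2 MJ/day gives
# PAL = 2.5; a 70 kg player practicing 60 min at 7 METs expends ~490 kcal.
example_pal = pal(15.5, 6.2)
example_kcal = activity_energy_kcal(7.0, 70.0, 60.0)
```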
Leske, David A.; Hatt, Sarah R.; Liebermann, Laura; Holmes, Jonathan M.
2016-01-01
Purpose We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). Methods One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as “success,” “partial success,” or “failure” based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Results Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). Conclusions The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. Translational Relevance We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software. PMID:26933524
DOT National Transportation Integrated Search
1998-01-01
The conventional methods of determining origin-destination (O-D) trip tables involve elaborate surveys, e.g., home interviews, that require considerable time, staff, and funds. To overcome this drawback, a number of theoretical models that synthesize...
Carrasco-Labra, Alonso; Brignardello-Petersen, Romina; Santesso, Nancy; Neumann, Ignacio; Mustafa, Reem A; Mbuagbaw, Lawrence; Ikobaltzeta, Itziar Etxeandia; De Stio, Catherine; McCullagh, Lauren J; Alonso-Coello, Pablo; Meerpohl, Joerg J; Vandvik, Per Olav; Brozek, Jan L; Akl, Elie A; Bossuyt, Patrick; Churchill, Rachel; Glenton, Claire; Rosenbaum, Sarah; Tugwell, Peter; Welch, Vivian; Guyatt, Gordon; Schünemann, Holger
2015-04-16
Systematic reviews represent one of the most important tools for knowledge translation but users often struggle with understanding and interpreting their results. GRADE Summary-of-Findings tables have been developed to display results of systematic reviews in a concise and transparent manner. The current format of the Summary-of-Findings tables for presenting risks and quality of evidence improves understanding and assists users with finding key information from the systematic review. However, it has been suggested that additional methods to present risks and display results in the Summary-of-Findings tables are needed. We will conduct a non-inferiority parallel-armed randomized controlled trial to determine whether an alternative format to present risks and display Summary-of-Findings tables is not inferior compared to the current standard format. We will measure participant understanding, accessibility of the information, satisfaction, and preference for both formats. We will invite systematic review users to participate (that is clinicians, guideline developers, and researchers). The data collection process will be undertaken using the online 'Survey Monkey' system. For the primary outcome understanding, non-inferiority of the alternative format (Table A) to the current standard format (Table C) of Summary-of-Findings tables will be claimed if the upper limit of a 1-sided 95% confidence interval (for the difference of proportion of participants answering correctly a given question) excluded a difference in favor of the current format of more than 10%. This study represents an effort to provide systematic reviewers with additional options to display review results using Summary-of-Findings tables. In this way, review authors will have a variety of methods to present risks and more flexibility to choose the most appropriate table features to display (that is optional columns, risks expressions, complementary methods to display continuous outcomes, and so on). 
NCT02022631 (21 December 2013).
Estimating Bird / Aircraft Collision Probabilities and Risk Utilizing Spatial Poisson Processes
2012-06-10
Operations (1995-2011) ........................................... 2 Table 2 DeVault Top 15 Relative Hazard Score...dedicated bird radar (Dokter, et al. 2011). The WSR-88D is used in the Avian Hazard Advisory System which is described later in this paper. Advisory...Avian Hazard Advisory System (AHAS) is an online, near real-time, geographic information system (GIS) used for bird strike risk flight planning across
The Late Start and Amazing Upswing in Gold Chemistry
ERIC Educational Resources Information Center
Raubenheimer, Helgard G.; Schmidbaur, Hubert
2014-01-01
Probably owing to the prejudice that gold is a metal too noble to be used much in chemistry, the chemistry of this element has developed much later than that of its congeners and neighbors in the periodic table. In fact, before and after the time of alchemists, and up to the 20th century, all chemistry of gold was mainly performed in attempts to…
Affordable Emerging Computer Hardware for Neuromorphic Computing Applications
2011-09-01
DATES COVERED (From - To) 4 . TITLE AND SUBTITLE AFFORDABLE EMERGING COMPUTER HARDWARE FOR NEUROMORPHIC COMPUTING APPLICATIONS 5a. CONTRACT NUMBER...speedup over software [3, 4 ]. 3 Table 1 shows a comparison of the computing performance, communication performance, power consumption...time is probably 5 frames per second, corresponding to 5 saccades. III. RESULTS AND DISCUSSION The use of IBM Cell-BE technology (Sony PlayStation
Breastfeeding trends in the Philippines, 1973 and 1983.
Popkin, B M; Akin, J S; Flieger, W; Wong, E L
1989-01-01
This paper examines comparable national surveys of breastfeeding from the Philippines carried out in 1973 and 1983. The probability of breastfeeding at selected infant ages is estimated, using the weighted life table. The conclusions are that a 5 per cent decline in the proportion of infants ever breast-fed occurred during the referenced period, and that median length of breastfeeding remained essentially the same. PMID:2909178
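A life-table (actuarial) estimate of the probability of still breastfeeding at each infant age multiplies the per-interval continuation probabilities. A minimal unweighted sketch with made-up counts; the paper itself uses a survey-weighted variant:

```python
# Sketch of an (unweighted) life-table survival estimate: the cumulative
# probability of still breastfeeding at the end of each age interval,
# from (n_at_risk, n_weaned) counts per interval. Counts are illustrative.
def life_table_survival(intervals):
    surviving = 1.0
    out = []
    for at_risk, weaned in intervals:
        surviving *= 1.0 - weaned / at_risk   # continuation probability
        out.append(surviving)
    return out

# 10% wean in the first interval, 10% of the remainder in the second.
curve = life_table_survival([(100, 10), (90, 9)])
```

The survey-weighted version replaces the raw counts with weighted totals but follows the same product construction.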
Al-26 losses from weathered chondrites
NASA Technical Reports Server (NTRS)
Herzog, G. F.; Cressy, P. J., Jr.
1976-01-01
Analyses of Al-26 and noble gases were conducted in a study of samples of two heavily weathered meteorites. The analyses were performed in accordance with procedures described by Cressy (1970) and by Herzog and Cressy (1974). The analytic data are presented in tables. Evidence is presented which implicates weathering as the most probable cause of the observed variation of Al-26 and the rare gas contents.
Army Training Study: Training Effectiveness Analysis (TEA) Summary. Volume 1. Armor.
1978-08-08
... 222 ARMY TRAINING STUDY TRAINING EFFECTIVENESS ANALYSIS TEA SUMMARY VOLUME I ARMOR ...marginally above 50%, however, probably is not. 22 TABLE 10 TANK CREW QUALIFICATION PERFORMANCE ON TASK STANDARDS STANDARD SATISFACTORY Day
NASA Technical Reports Server (NTRS)
Havill, Clinton H
1928-01-01
These tables are intended to provide a standard method and to facilitate the calculation of the quantity of "Standard Helium" in high pressure containers. The research data and the formulas used in the preparation of the tables were furnished by the Research Laboratory of Physical Chemistry, of the Massachusetts Institute of Technology.
Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian
2009-07-15
Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complexmore » chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCM{chi}. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCM{chi} model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCM{chi} model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. 
Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCM{chi} reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape. (author)
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, it is important to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the Eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to past earthquakes. Many researchers have investigated these active faults and continually updated data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and fieldwork results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or relocated earthquakes for each fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the earthquake recurrence interval of each active fault. For the other parameters, we collect previous studies and historical references to complete our parameter table of active faults in Taiwan. The WG08 project performed a time-dependent earthquake hazard assessment of active faults in California: it established fault models, deformation models, earthquake rate models, and probability models, and then computed the rupture probabilities of the faults.
Following these steps, we have preliminarily evaluated the earthquake probabilities of certain faults in Taiwan. By completing the active fault parameter table for Taiwan, we can apply it to time-dependent earthquake hazard assessment. The results can also give engineers a reference for design. Furthermore, they can be applied in seismic hazard maps to mitigate disasters.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specifications C Table C-1 to Subpart C of Part 53 Protection of... Reference Methods Pt. 53, Subpt. C, Table C-1 Table C-1 to Subpart C of Part 53—Test Concentration Ranges...
Zeng, Shu-Rong; Jiang, Bo; Xiao, Xiao-Rong
2007-06-01
To discuss the sterilization effect of a B-class pulsation table-top vacuum pressure steam sterilizer for dental handpieces. We analyzed the selection of the sterilizer for dental handpieces, the sterilization management process, and sterilization effect monitoring, and evaluated the monitoring results and effective sterilization methods. The B-class pulsation table-top vacuum pressure steam sterilizer applied to dental handpieces at the West China Stomatological Hospital of Sichuan University met the requirements of chemical and biological monitoring; its sterilization efficiency was 100%. The results of aerobic culture, anaerobic culture, and hepatitis B marker monitoring of sterilized dental handpieces were negative. The B-class pulsation table-top vacuum pressure steam sterilizer is an effective method for dental handpiece sterilization.
Wolf, Sebastian; Brölz, Ellen; Keune, Philipp M; Wesa, Benjamin; Hautzinger, Martin; Birbaumer, Niels; Strehl, Ute
2015-02-01
Functional hemispheric asymmetry is assumed to constitute one underlying neurophysiological mechanism of flow-experience and skilled psycho-motor performance in table tennis athletes. We hypothesized that when initiating motor execution during motor imagery, elite table tennis players show higher right- than left-hemispheric temporal activity and stronger right temporal-premotor than left temporal-premotor theta coherence compared to amateurs. We additionally investigated whether less pronounced left temporal cortical activity is associated with more world rank points and more flow-experience. To this aim, electroencephalographic data were recorded in 14 expert and 15 amateur table tennis players. Subjects watched videos of an opponent serving a ball and were instructed to imagine themselves responding with a specific table tennis stroke. Alpha asymmetry scores were calculated by subtracting left from right hemispheric 8-13 Hz alpha power. 4-7 Hz theta coherence was calculated between temporal (T3/T4) and premotor (Fz) cortex. Experts showed a significantly stronger shift towards lower relative left-temporal brain activity and significantly stronger right temporal-premotor coherence than amateurs. The shift towards lower relative left-temporal brain activity in experts was associated with more flow-experience, and lower relative left temporal activity was correlated with more world rank points. The present findings suggest that skilled psycho-motor performance in elite table tennis players reflects less desynchronized brain activity in the left hemisphere and more coherent brain activity between fronto-temporal and premotor oscillations in the right hemisphere. This pattern probably reflects less interference of irrelevant verbal-analytical communication with motor-control mechanisms, which promotes flow-experience and predicts world rank in experts. Copyright © 2015 Elsevier B.V. All rights reserved.
Analysis of underlying and multiple-cause mortality data: the life table methods.
Moussa, M A
1987-02-01
Stochastic compartment model concepts are employed to analyse and construct: complete and abbreviated total mortality life tables; multiple-decrement life tables for a disease, under the underlying and pattern-of-failure definitions of mortality risk; cause-elimination life tables; cause-elimination effects on the surviving population, expressed as the gain in life expectancy from eliminating a mortality risk; cause-delay life tables, designed to translate a clinically observed increase in survival time into the population gain in life expectancy that would occur if a treatment protocol were made available to the general population; and life tables for disease dependency in multiple-cause data.
1985-06-01
[Front-matter fragment] Appendix B: Static slope analysis method used for the Vaiont Slide analyses, by D. L. Anderson. Appendix C: Sections used [...]. Daily precipitation records for Erto for the years 1960, 1961, 1962 and 1963 are given in this appendix in Tables A1, A2, A3 and A4, respectively; these were supplied through the courtesy of E.N.E.L. List of tables: Table A1. Daily precipitation record, Erto - 1960. Table A2. Daily precipitation record, Erto - 1961. Table A3. Daily precipitation record, Erto - 1962.
Luo, Wei; Tran, Truyen; Berk, Michael; Venkatesh, Svetha
2016-01-01
Background Although physical illnesses, routinely documented in electronic medical records (EMR), have been found to be a contributing factor to suicides, no automated systems use this information to predict suicide risk. Objective The aim of this study is to quantify the impact of physical illnesses on suicide risk, and develop a predictive model that captures this relationship using EMR data. Methods We used history of physical illnesses (except chapter V: Mental and behavioral disorders) from EMR data over different time-periods to build a lookup table that contains the probability of suicide risk for each chapter of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) codes. The lookup table was then used to predict the probability of suicide risk for any new assessment. Based on the different lengths of history of physical illnesses, we developed six different models to predict suicide risk. We tested the performance of developed models to predict 90-day risk using historical data over differing time-periods ranging from 3 to 48 months. A total of 16,858 assessments from 7399 mental health patients with at least one risk assessment was used for the validation of the developed model. The performance was measured using area under the receiver operating characteristic curve (AUC). Results The best predictive results were derived (AUC=0.71) using combined data across all time-periods, which significantly outperformed the clinical baseline derived from routine risk assessment (AUC=0.56). The proposed approach thus shows potential to be incorporated in the broader risk assessment processes used by clinicians. Conclusions This study provides a novel approach to exploit the history of physical illnesses extracted from EMR (ICD-10 codes without chapter V-mental and behavioral disorders) to predict suicide risk, and this model outperforms existing clinical assessments of suicide risk. PMID:27400764
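The per-chapter risk lookup described in the abstract above can be sketched as follows; the function name and the toy (chapter, label) records are hypothetical, and the actual model additionally conditions on the length of the illness history:

```python
from collections import defaultdict

def build_risk_lookup(history):
    """Sketch: estimate P(suicide risk | ICD-10 chapter) as a lookup table
    from (chapter, at_risk) pairs observed in historical assessments."""
    counts = defaultdict(lambda: [0, 0])   # chapter -> [risk events, total]
    for chapter, at_risk in history:
        counts[chapter][1] += 1
        counts[chapter][0] += int(at_risk)
    return {ch: r / n for ch, (r, n) in counts.items()}

# Toy data: two assessments each for ICD-10 chapters IX and X.
table = build_risk_lookup([("IX", True), ("IX", False), ("X", True), ("X", True)])
```

A new assessment would then be scored by looking up the chapters present in the patient's record and combining the tabulated probabilities.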
The multiple decrement life table: a unifying framework for cause-of-death analysis in ecology.
Carey, James R
1989-01-01
The multiple decrement life table is used widely in the human actuarial literature and provides statistical expressions for mortality in three different forms: i) the life table from all causes-of-death combined; ii) the life table disaggregated into selected cause-of-death categories; and iii) the life table with particular causes and combinations of causes eliminated. The purpose of this paper is to introduce the multiple decrement life table to the ecological literature by applying the methods to published death-by-cause information on Rhagoletis pomonella. Interrelations between the current approach and conventional tools used in basic and applied ecology are discussed including the conventional life table, Key Factor Analysis and Abbott's Correction used in toxicological bioassay.
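A minimal sketch of the multiple-decrement bookkeeping, assuming a closed cohort with hypothetical death counts by cause; real actuarial tables also involve age-interval exposures and rates, which are omitted here:

```python
def multiple_decrement(deaths_by_cause, radix=1000):
    """Toy multiple-decrement life table: deaths_by_cause[x] maps cause -> deaths
    in age interval x. Returns, per interval, the survivors l(x) and the
    per-cause decrements d_i(x), both scaled to the radix."""
    total = sum(sum(d.values()) for d in deaths_by_cause)  # cohort fully extinct
    lx, out = float(radix), []
    for d in deaths_by_cause:
        scaled = {cause: radix * n / total for cause, n in d.items()}
        out.append((lx, scaled))
        lx -= radix * sum(d.values()) / total
    return out

# Two age intervals, causes "A" and "B" (hypothetical counts).
table = multiple_decrement([{"A": 10, "B": 10}, {"A": 20}])
```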
DOT National Transportation Integrated Search
1998-01-01
The conventional methods of determining origin-destination (O-D) trip tables involve elaborate surveys, e.g., home interviews, that require considerable time, staff, and funds. To overcome this drawback, a number of theoretical models that synthesize...
76 FR 51354 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-18
... Teacher Education Assistance for College and Higher Education (TEACH) Grant; and the Iraq and Afghanistan... application components, descriptions and submission methods for each are listed in Table 1. Table 1--Federal Student Aid Application Components Component Description Submission method Initial Submission of FAFSA...
Life-table methods for detecting age-risk factor interactions in long-term follow-up studies.
Logue, E E; Wing, S
1986-01-01
Methodological investigation has suggested that age-risk factor interactions should be more evident in age of experience life tables than in follow-up time tables due to the mixing of ages of experience over follow-up time in groups defined by age at initial examination. To illustrate the two approaches, age modification of the effect of total cholesterol on ischemic heart disease mortality in two long-term follow-up studies was investigated. Follow-up time life table analysis of 116 deaths over 20 years in one study was more consistent with a uniform relative risk due to cholesterol, while age of experience life table analysis was more consistent with a monotonic negative age interaction. In a second follow-up study (160 deaths over 24 years), there was no evidence of a monotonic negative age-cholesterol interaction by either method. It was concluded that age-specific life table analysis should be used when age-risk factor interactions are considered, but that both approaches yield almost identical results in absence of age interaction. The identification of the more appropriate life-table analysis should be ultimately guided by the nature of the age or time phenomena of scientific interest.
NASA Astrophysics Data System (ADS)
Lai, Wencong; Ogden, Fred L.; Steinke, Robert C.; Talbot, Cary A.
2015-03-01
We have developed a one-dimensional numerical method to simulate infiltration and redistribution in the presence of a shallow dynamic water table. This method builds upon the Green-Ampt infiltration with Redistribution (GAR) model and incorporates features from the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain. The redistribution scheme is more physically meaningful than the capillary-weighted redistribution scheme in the T-O method. Groundwater dynamics are considered in this new method instead of a hydrostatic groundwater front. It is also computationally more efficient than the T-O method. Motion of water in the vadose zone due to infiltration, redistribution, and interactions with capillary groundwater is described by ordinary differential equations. Numerical solutions to these equations are computationally less expensive than solutions of the highly nonlinear Richards (1931) partial differential equation. We present results from numerical tests on 11 soil types using multiple rain pulses with different boundary conditions, with and without a shallow water table, and compare against the numerical solution of Richards' equation (RE). Results from the new method are in satisfactory agreement with RE solutions in terms of ponding time, deponding time, infiltration rate, and cumulative infiltrated depth. The new method, which we call "GARTO", can be used as an alternative to the RE for 1-D coupled surface and groundwater models in general situations with homogeneous soils and a dynamic water table. The GARTO method represents a significant advance in simulating groundwater-surface water interactions because it very closely matches the RE solution while being computationally efficient, with guaranteed mass conservation, and without the stability limitations that can affect RE solvers in the case of a near-surface water table.
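The classic Green-Ampt relation that GAR-type models build on can be illustrated as below. This is only the textbook infiltration-capacity equation f = K(1 + ψΔθ/F) with a naive Euler integration, not the GARTO redistribution or water-table coupling, and all parameter values are hypothetical:

```python
def green_ampt_rate(K, psi, dtheta, F):
    """Green-Ampt infiltration capacity f = K * (1 + psi * dtheta / F),
    with K = saturated conductivity, psi = wetting-front suction head,
    dtheta = moisture deficit, F = cumulative infiltrated depth."""
    return K * (1.0 + psi * dtheta / F) if F > 0 else float("inf")

def cumulative_infiltration(K, psi, dtheta, t_end, dt=1e-3):
    """Euler integration of dF/dt = f(F), seeded with a small initial depth
    to avoid the singularity at F = 0 (ponded conditions assumed throughout)."""
    F, t = K * dt, dt
    while t < t_end:
        F += green_ampt_rate(K, psi, dtheta, F) * dt
        t += dt
    return F
```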
Winograd, Isaac Judah; Doty, Gene C.
1980-01-01
Knowledge of the magnitude of water-table rise during Pleistocene pluvial climates, and of the resultant shortening of groundwater flow paths and reduction in unsaturated zone thickness, is mandatory for a technical evaluation of the Nevada Test Site (NTS) or other arid zone sites as repositories for high-level or transuranic radioactive wastes. The distribution of calcitic veins filling fractures in alluvium, and of tufa deposits between the Ash Meadows spring discharge area and the Nevada Test Site, indicates that discharge from the regional Paleozoic carbonate aquifer during the Late(?) Pleistocene pluvial periods may have occurred at an altitude about 50 meters higher than at present and 14 kilometers northeast of Ash Meadows. Use of the underflow equation (relating discharge to transmissivity, aquifer width, and hydraulic gradient), and various assumptions regarding pluvial recharge, transmissivity, and altitude of groundwater base level, suggest possible rises in potentiometric level in the carbonate aquifer of about 90 meters beneath central Frenchman Flat. During Wisconsin time the rise probably did not exceed 30 meters. Water-level rises beneath Frenchman Flat during future pluvials are unlikely to exceed 30 meters and might even be 10 meters lower than modern levels. Neither the cited rise in potentiometric level in the regional carbonate aquifer, nor the shortened flow path during the Late(?) Pleistocene, precludes utilization of the NTS as a repository for high-level or transuranic-element radioactive wastes, provided other requisite conditions are met at this site. Deep water tables, attendant thick (up to several hundred meters) unsaturated zones, and long groundwater flow paths characterized the region during the Wisconsin Stage and probably throughout the Pleistocene Epoch, and are likely to so characterize it during future glacial periods. (USGS)
A rapid local singularity analysis algorithm with applications
NASA Astrophysics Data System (ADS)
Chen, Zhijun; Cheng, Qiuming; Agterberg, Frits
2015-04-01
The local singularity model developed by Cheng is fast gaining popularity for characterizing mineralization and detecting anomalies in geochemical, geophysical and remote sensing data. However, the conventional algorithm, which involves computing moving-average values at different scales, is time-consuming, especially when analyzing a large dataset. The summed area table (SAT), also called an integral image, is a fast algorithm used within the Viola-Jones object detection framework in computer vision. Historically, the principle of the SAT is well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (the area under the probability distribution) from the respective cumulative distribution functions. In this study we introduce the SAT and its variation, the rotated summed area table, into isotropic, anisotropic and directional local singularity mapping. Once the SAT has been computed, any rectangular sum can be obtained at any scale or location in constant time: the sum for any rectangular region in the image requires only 4 array accesses, independently of the size of the region, effectively reducing the time complexity from O(n) to O(1). New programs in Python, Julia, Matlab and C++ were implemented to serve different applications, especially big data analysis. Several large geochemical and remote sensing datasets were tested. A wide variety of scale changes (linear spacing or log spacing) for the non-iterative and iterative approaches were adopted to calculate the singularity index values and compare the results. The results indicate that local singularity analysis with the SAT is more robust than, and superior to, the traditional approach in identifying anomalies.
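A minimal sketch of the SAT construction and the 4-access rectangle sum described above (pure Python, for illustration only):

```python
def summed_area_table(img):
    """Build the summed area table: sat[i][j] = sum of img[0..i][0..j]."""
    h, w = len(img), len(img[0])
    sat = [[0] * w for _ in range(h)]
    for i in range(h):
        row = 0                                   # running sum along this row
        for j in range(w):
            row += img[i][j]
            sat[i][j] = row + (sat[i - 1][j] if i else 0)
    return sat

def rect_sum(sat, r0, c0, r1, c1):
    """Sum over img[r0..r1][c0..c1] using at most 4 table accesses, O(1)."""
    s = sat[r1][c1]
    if r0:
        s -= sat[r0 - 1][c1]
    if c0:
        s -= sat[r1][c0 - 1]
    if r0 and c0:
        s += sat[r0 - 1][c0 - 1]
    return s
```

Because every rectangle sum is O(1) after the single O(hw) build, moving-average values at many window scales reuse one table instead of being recomputed per scale.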
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
A VLSI architecture for performing finite field arithmetic with reduced table look-up
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Reed, I. S.
1986-01-01
A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element is found by the use of several smaller tables. The method is based on a use of the Chinese Remainder Theorem. The technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic including multiplication, division, and the finding of an inverse element in the finite field.
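A single-table illustration of log/antilog look-up multiplication, here in GF(2^4) with the primitive polynomial x^4 + x + 1; Glover's CRT-based decomposition into several smaller tables is not reproduced in this sketch:

```python
def build_tables(prim=0b10011):
    """Log/antilog tables for GF(2^4): antilog[i] = alpha^i for the
    primitive element alpha = x, log is its inverse mapping."""
    antilog, log = [0] * 15, {}
    x = 1
    for i in range(15):
        antilog[i] = x
        log[x] = i
        x <<= 1                    # multiply by alpha
        if x & 0b10000:            # reduce modulo x^4 + x + 1
            x ^= prim
    return antilog, log

def gf_mul(a, b, antilog, log):
    """Multiply field elements via log tables: a*b = alpha^(log a + log b)."""
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % 15]
```

Division and inversion follow the same pattern with subtraction of logs, which is the reduction the VLSI architecture exploits.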
Viscoelastic Properties of Advanced Polymer Composites for Ballistic Protective Applications
1994-09-01
[Front-matter fragment] Figures: [...] of the Damaged Sample. Figure 69: Fracture Surface of Damage Area Near the Point of Penetration. Figure 70: Closer View of the Damaged Area. List of Tables: Table 1. Basic Mechanical Properties of the Materials. Table 2. Initial DMA Test Results. Table 3. Flexural Three Point Bend [...]. Three-point bend testing was conducted using an Instron 1127 Universal Tester to verify the DMA test method and specimen clamping configuration. Interfacial [...]
D'Antuono, Isabella; Bruno, Angelica; Linsalata, Vito; Minervini, Fiorenza; Garbetta, Antonella; Tufariello, Maria; Mita, Giovanni; Logrieco, Antonio F; Bleve, Gianluca; Cardinali, Angela
2018-05-15
The effects of fermentation by autochthonous microbial starters on the phenolic composition of the Apulian table olives Bella di Cerignola (BDC), Termite di Bitetto (TDB) and Cellina di Nardò (CEL) were studied, also highlighting the influence of cultivar. In BDC with starter, the polyphenol content doubled compared with the commercial sample, while in TDB and CEL phenolics remained almost unchanged. The main phenolics were hydroxytyrosol, tyrosol, verbascoside and luteolin, followed by hydroxytyrosol-acetate detected in BDC and cyanidin-3-glucoside and quercetin in CEL. Scavenger capacity in both DPPH and CAA assays showed the highest antioxidant effect for CEL with starters (21.7 mg Trolox eq/g FW; 8.5 μmol hydroxytyrosol eq/100 g FW). The polyphenols were highly bioaccessible in vitro (>60%), although modifications in their profile, probably due to the combined effect of environment and microorganisms, were noted. Finally, fermented table olives are an excellent source of health-promoting compounds, since they contain almost 8 times more hydroxytyrosol and tyrosol than olive oil. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Comparative Study on Safe Pile Capacity as Shown in Table 1 of IS 2911 (Part III): 1980
NASA Astrophysics Data System (ADS)
Pakrashi, Somdev
2017-06-01
The code of practice for design and construction of under-reamed pile foundations, IS 2911 (Part III)-1980, presents one table of safe loads for bored cast in situ under-reamed piles in sandy and clayey soils, including black cotton soils, for pile stem diameters ranging from 20 to 50 cm and an effective length of 3.50 m. A comparative study was taken up by working out the safe pile capacity of one 400 mm diameter, 3.5 m long bored cast in situ under-reamed pile, based on subsoil properties obtained from soil investigation work as well as on clayey and sandy subsoil properties of different magnitudes, and comparing the results with the safe pile capacity shown in Table 1 of that IS code. The study reveals that the safe pile capacity computed from subsoil properties, barring a very few cases, differs considerably from that shown in the aforesaid code, and calls for more research work and study to find a conclusive explanation of this probable anomaly.
DOT National Transportation Integrated Search
1999-01-01
This research project was undertaken to examine the practicality and adequacy of the FDOT specifications regarding compaction methods for pipe trench backfills under high water table. Given the difficulty to determine density and to attain desired de...
ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. 
The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
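The Monte Carlo failure/repair simulation at the core of such an availability tool can be sketched as follows; this toy single-component model (Weibull failure times, exponential repairs, hypothetical parameters) is an illustration of the technique, not the ACARA algorithm itself:

```python
import random

def simulate_availability(shape, scale, mttr, horizon, n_runs=2000, seed=1):
    """Monte Carlo estimate of a single component's availability:
    Weibull(scale, shape) times to failure, exponential repairs of mean mttr."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = rng.weibullvariate(scale, shape)   # time to next failure
            up += min(ttf, horizon - t)              # credit uptime within horizon
            t += ttf
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)         # repair duration
        total_up += up / horizon
    return total_up / n_runs
```

With shape = 1 the failures are exponential and the estimate should approach the steady-state value MTTF / (MTTF + MTTR).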
SU-F-P-44: A Direct Estimate of Peak Skin Dose for Interventional Fluoroscopy Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, V; Zhang, J
Purpose: There is an increasing demand for medical physicists to calculate peak skin dose (PSD) for interventional fluoroscopy procedures. The dose information (dose-area product and air kerma) displayed on the console cannot be used directly for this purpose. Our clinical experience shows that the use of existing methods may overestimate or underestimate PSD. This study attempts to develop a direct estimate of PSD from the displayed dose metrics. Methods: An anthropomorphic torso phantom was used for dose measurements for a common fluoroscopic procedure. Entrance skin doses were measured with a Piranha solid state point detector placed on the table surface below the torso phantom. An initial "reference dose rate" (RE) measurement was conducted by comparing the displayed dose rate (mGy/min) to the measured dose rate. The distance from table top to focal spot was taken as the reference distance (RD) at the RE. Table height was then adjusted. The displayed air kerma and DAP were recorded and sent to three physicists to estimate PSD. An inverse square correction was applied to correct the displayed air kerma at various table heights. The PSDs estimated by the physicists and the PSD from the proposed method were then compared with the measurements. The estimated DAPs were compared to displayed DAP readings (mGycm2). Results: The difference between the PSD estimated by the proposed method and direct measurements was less than 5%. For the same set of data, the PSDs estimated by each of the three physicists differed from measurements by up to ±52%. The difference between the DAP calculated by the proposed method and the displayed DAP readings on the console was less than 20% at various table heights. Conclusion: PSD may be simply estimated from the displayed air kerma or DAP if the distance between table top and tube focal spot, or the x-ray beam area on the table top, is available.
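The inverse-square correction of the displayed air kerma for a table-height change can be sketched as below; the function and parameter names are hypothetical:

```python
def corrected_air_kerma(displayed_kerma, ref_distance, height_change):
    """Inverse-square correction: scale the displayed air kerma when the
    entrance plane (table top) moves from the reference focal-spot distance
    ref_distance to ref_distance + height_change (same length units)."""
    new_distance = ref_distance + height_change
    return displayed_kerma * (ref_distance / new_distance) ** 2
```

Doubling the focal-spot-to-table distance, for example, quarters the entrance air kerma.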
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-12-01
Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A33
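For reference, the nonadiabatic transition probability in the Landau-Zener model invoked above has the standard form, with diabatic coupling \(H_{12}\), collision velocity \(v\), and difference of diabatic slopes \(|\Delta F|\) at the crossing:

\[
P_{\mathrm{LZ}} = \exp\!\left(-\frac{2\pi H_{12}^{2}}{\hbar\, v\, |\Delta F|}\right)
\]

The reduced rate coefficients tabulated in the paper result from averaging such probabilities over collision energies and statistical weights.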
Kim, Logyoung; Kim, Jee-Ae; Kim, Sanghyun
2014-01-01
The claims data of the Health Insurance Review and Assessment Service (HIRA) is an important source of information for healthcare service research. The claims data of HIRA is collected when healthcare service providers submit a claim to HIRA to be reimbursed for a service that they provided to patients. To improve the accessibility of healthcare service researchers to claims data of HIRA, HIRA has developed the Patient Samples which are extracted using a stratified randomized sampling method. The Patient Samples of HIRA consist of five tables: a table for general information (Table 20) containing socio-demographic information such as gender, age and medical aid, indicators for inpatient and outpatient services; a table for specific information on healthcare services provided (Table 30); a table for diagnostic information (Table 40); a table for outpatient prescriptions (Table 53) and a table for information on healthcare service providers (Table of providers). Researchers who are interested in using the Patient Sample data for research can apply via HIRA’s website (https://www.hira.or.kr). PMID:25078381
DiPaola, Matthew J; DiPaola, Christian P; Conrad, Bryan P; Horodyski, MaryBeth; Del Rossi, Gianluca; Sawers, Andrew; Bloch, David; Rechtine, Glenn R
2008-06-01
A study of spine biomechanics in a cadaver model. To quantify motion in multiple axes created by transfer methods from stretcher to operating table in the prone position in a cervical global instability model. Patients with an unstable cervical spine remain at high risk for further secondary injury until their spine is adequately surgically stabilized. Previous studies have revealed that collars have significant, but limited benefit in preventing cervical motion when manually transferring patients. The literature proposes multiple methods of patient transfer, although no one method has been universally adopted. To date, no study has effectively evaluated the relationship between spine motion and various patient transfer methods to an operating room table for prone positioning. A global instability was surgically created at C5-6 in 4 fresh cadavers with no history of spine pathology. All cadavers were tested both with and without a rigid cervical collar in the intact and unstable state. Three headrest permutations were evaluated: Mayfield (SM USA Inc), Prone View (Dupaco, Oceanside, CA), and Foam Pillow (OSI, Union City, CA). A trained group of medical staff performed each of 2 transfer methods: the "manual" and the "Jackson table" transfer. The manual technique entailed performing a standard rotation of the supine patient on a stretcher to the prone position on the operating room table with in-line manual cervical stabilization. The "Jackson" technique involved sliding the supine patient to the Jackson table (OSI, Union City, CA) with manual in-line cervical stabilization, securing them to the table, then initiating the table's lock and turn mechanism and rotating them into a prone position. An electromagnetic tracking device captured angular motion between the C5 and C6 vertebral segments. Repeated measures statistical analysis was performed to evaluate the following conditions: collar use (2 levels), headrest (3 levels), and turning technique (2 levels). 
For all measures, there was significantly more cervical spine motion during manual prone positioning compared with using the Jackson table. The use of a collar provided a slight reduction in motion in all planes of movement; however, this was only significantly different from the no-collar condition in axial rotation. Differences in gross motion between headrest types were observed in lateral bending (Foam Pillow
Barlow, Paul M.; Moench, Allen F.
1999-01-01
The computer program WTAQ calculates hydraulic-head drawdowns in a confined or water-table aquifer that result from pumping at a well of finite or infinitesimal diameter. The program is based on an analytical model of axial-symmetric ground-water flow in a homogeneous and anisotropic aquifer. The program allows for well-bore storage and well-bore skin at the pumped well and for delayed drawdown response at an observation well; by including these factors, it is possible to accurately evaluate the specific storage of a water-table aquifer from early-time drawdown data in observation wells and piezometers. For water-table aquifers, the program allows for either delayed or instantaneous drainage from the unsaturated zone. WTAQ calculates dimensionless or dimensional theoretical drawdowns that can be used with measured drawdowns at observation points to estimate the hydraulic properties of confined and water-table aquifers. Three sample problems illustrate use of WTAQ for estimating horizontal and vertical hydraulic conductivity, specific storage, and specific yield of a water-table aquifer by type-curve methods and by an automatic parameter-estimation method.
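WTAQ itself adds well-bore storage, well-bore skin, and delayed drainage, but the classic Theis solution for a fully confined aquifer, the limiting case the program generalizes, can be sketched in a few lines. The series form of the well function and the variable names are standard hydrogeology conventions, not code from WTAQ:

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=60):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!),
    convergent for the small u typical of pumping-test analysis."""
    total = -EULER_GAMMA - math.log(u)
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * u ** n / (n * math.factorial(n))
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown [m] at radius r [m] and time t [s], for pumping rate Q [m^3/s],
    transmissivity T [m^2/s], and storativity S."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

Type-curve matching, as in the sample problems, compares measured drawdowns against such theoretical curves to back out T and S.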
NASA Astrophysics Data System (ADS)
Langitan, F. W.
2018-02-01
The objective of this research is to determine the influence of training strategy and physical condition on forehand drive ability in table tennis among students of the Department of Health and Recreation Education, Faculty of Sport, Manado State University. The study used a 2x2 factorial design. The sample comprised 76 students of the Faculty of Sport at Manado State University, Indonesia, in 2017. The results show that: In general, the wall-bounce training strategy improves forehand drive ability in table tennis more than the paired training strategy. For students with strong forehand muscles, wall-bounce training is more effective for forehand drive ability. For students with weak forehand muscles, paired training is more effective than wall-bounce training. There is an interaction between training strategy and hand muscle strength on forehand drive performance in table tennis.
40 CFR 92.5 - Reference materials.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: (1) ASTM material. The following table sets forth material from the American Society for Testing and...., Philadelphia, PA 19103. The table follows: Document number and name 40 CFR part 92 reference ASTM D 86-95, Standard Test Method for Distillation of Petroleum Products § 92.113 ASTM D 93-94, Standard Test Methods...
The Periodic Table as a Mnemonic Device for Writing Electronic Configurations.
ERIC Educational Resources Information Center
Mabrouk, Suzanne T.
2003-01-01
Presents an interactive method for using the periodic table as an effective mnemonic for writing electronic configurations. Discusses the intrinsic relevance of configurations to chemistry by building upon past analogies. Addresses pertinent background information, describes the hands-on method, and demonstrates its use. Transforms the traditional…
Percentage Problems in Bridging Courses
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2012-01-01
Research on teaching high school mathematics shows that the topic of percentages often causes learning difficulties. This article describes a method of teaching percentages that the authors used in university bridging courses. In this method, the information from a word problem about percentages is presented in a two-way table. Such a table gives…
26 CFR 1.62-2 - Reimbursements and other expense allowance arrangements.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., and Taxable Income § 1.62-2 Reimbursements and other expense allowance arrangements. (a) Table of contents. The contents of this section are as follows: (a) Table of contents. (b) Scope. (c) Reimbursement... general. (2) Safe harbors. (i) Fixed date method. (ii) Periodic payment method. (3) Pattern of...
26 CFR 1.62-2 - Reimbursements and other expense allowance arrangements.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., and Taxable Income § 1.62-2 Reimbursements and other expense allowance arrangements. (a) Table of contents. The contents of this section are as follows: (a) Table of contents. (b) Scope. (c) Reimbursement... general. (2) Safe harbors. (i) Fixed date method. (ii) Periodic payment method. (3) Pattern of...
26 CFR 1.62-2 - Reimbursements and other expense allowance arrangements.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., and Taxable Income § 1.62-2 Reimbursements and other expense allowance arrangements. (a) Table of contents. The contents of this section are as follows: (a) Table of contents. (b) Scope. (c) Reimbursement... general. (2) Safe harbors. (i) Fixed date method. (ii) Periodic payment method. (3) Pattern of...
26 CFR 1.62-2 - Reimbursements and other expense allowance arrangements.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., and Taxable Income § 1.62-2 Reimbursements and other expense allowance arrangements. (a) Table of contents. The contents of this section are as follows: (a) Table of contents. (b) Scope. (c) Reimbursement... general. (2) Safe harbors. (i) Fixed date method. (ii) Periodic payment method. (3) Pattern of...
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed to compute the hologram in real time, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) algorithms based on the point-based method, and the fully analytical and one-step algorithms based on the polygon-based method. In this presentation, we review various fast algorithms based on the point-based and polygon-based methods, focusing on the fast algorithm with low memory usage, the C-LUT, and on the one-step polygon-based method derived from a 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient at saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
ERIC Educational Resources Information Center
CEMREL, Inc., St. Louis, MO.
This Comprehensive School Mathematics Program (CSMP) guide is divided into three major parts. The first, the Languages of Strings and Arrows, opens with a suggested lesson order. Major sections cover instructional approaches for: (1) Games with Strings, (2) Necklaces, and (3) The Table Game. A series of appendices is included. Part 2, Geometry and…
COSPAS-SARSAT Satellite Orbit Predictor. Volume 3
NASA Technical Reports Server (NTRS)
Friedman, Morton L.; Garrett, James
1984-01-01
The satellite orbit predictor is a graphical aid for determining the relationship between the satellite (SARSAT or COSPAS) orbit, antenna coverage of the spacecraft and coverage of the LUTs. The predictor allows the user to quickly visualize if a selected position will probably be detected and is composed of a base map and a satellite track overlay for each satellite. Additionally, a table of equator crossings for each satellite is included.
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.
Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
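Rosenthal's point estimate itself, around which the paper builds confidence intervals, can be sketched as follows, assuming the usual formulation in terms of the k observed standard normal deviates Z_i and a one-sided significance threshold; the function name is mine:

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe number: the number of unpublished null studies
    needed to raise the combined one-sided p-value above alpha.
    z_values: standard normal deviates of the k observed studies."""
    k = len(z_values)
    z_sum = sum(z_values)
    return (z_sum ** 2) / (z_alpha ** 2) - k
```

For example, ten studies each with Z = 2 give a fail-safe number of roughly 138; the paper's contribution is quantifying the uncertainty around such estimates.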
Nuclear physics for geo-neutrino studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiorentini, Gianni; Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara, I-44100 Ferrara; Ianni, Aldo
2010-03-15
Geo-neutrino studies are based on theoretical estimates of geo-neutrino spectra. We propose a method for a direct measurement of the energy distribution of antineutrinos from decays of long-lived radioactive isotopes. We present preliminary results for the geo-neutrinos from ²¹⁴Bi decay, a process that accounts for about one-half of the total geo-neutrino signal. The feeding probability of the lowest state of ²¹⁴Bi--the most important for the geo-neutrino signal--is found to be p₀ = 0.177 ± 0.004 (stat) +0.003/−0.001 (sys), under the hypothesis of universal neutrino spectrum shape (UNSS). This value is consistent with the (indirect) estimate of the table of isotopes. We show that achievable larger statistics and reduction of systematics should allow for the testing of possible distortions of the neutrino spectrum from that predicted using the UNSS hypothesis. Implications for the geo-neutrino signal are discussed.
McCauley, Erin J
2017-12-01
To estimate the cumulative probability (c) of arrest by age 28 years in the United States by disability status, race/ethnicity, and gender. I estimated cumulative probabilities through birth cohort life tables with data from the National Longitudinal Survey of Youth, 1997. Estimates demonstrated that those with disabilities have a higher cumulative probability of arrest (c = 42.65) than those without (c = 29.68). The risk was disproportionately spread across races/ethnicities, with Blacks with disabilities experiencing the highest cumulative probability of arrest (c = 55.17) and Whites without disabilities experiencing the lowest (c = 27.55). Racial/ethnic differences existed by gender as well. There was a similar distribution of disability types across race/ethnicity, suggesting that the racial/ethnic differences in arrest may stem from racial/ethnic inequalities as opposed to differential distribution of disability types. The experience of arrest for those with disabilities was higher than expected. Police officers should understand how disabilities may affect compliance and other behaviors, and likewise how implicit bias and structural racism may affect reactions and actions of officers and the systems they work within in ways that create inequities.
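The life-table calculation behind a cumulative probability such as c = 42.65 can be sketched generically; the per-age hazards in the test below are illustrative placeholders, not values from the NLSY97 analysis:

```python
def cumulative_probability(age_specific_hazards):
    """Cumulative probability (in percent) of a first arrest by the end of
    the age range, given the hazard (conditional probability of arrest at
    each age among those not yet arrested)."""
    survival = 1.0
    for hazard in age_specific_hazards:
        survival *= (1.0 - hazard)
    return 100.0 * (1.0 - survival)
```

Because the hazards compound multiplicatively, even modest per-age risks accumulate into large cumulative probabilities by age 28.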
Cross-Matching Source Observations from the Palomar Transient Factory (PTF)
NASA Astrophysics Data System (ADS)
Laher, Russ; Grillmair, C.; Surace, J.; Monkewitz, S.; Jackson, E.
2009-01-01
Over the four-year lifetime of the PTF project, approximately 40 billion instances of astronomical-source observations will be extracted from the image data. The instances will correspond to the same astronomical objects being observed at roughly 25-50 different times, and so a very large catalog containing important object-variability information will be the chief PTF product. Organizing astronomical-source catalogs is conventionally done by dividing the catalog into declination zones and sorting by right ascension within each zone (e.g., the USNOA star catalog), in order to facilitate catalog searches. This method was reincarnated as the "zones" algorithm in a SQL-Server database implementation (Szalay et al., MSR-TR-2004-32), with corrections given by Gray et al. (MSR-TR-2006-52). The primary advantage of this implementation is that all of the work is done entirely on the database server and client/server communication is eliminated. We implemented the methods outlined in Gray et al. for a PostgreSQL database. We programmed the methods as database functions in PL/pgSQL procedural language. The cross-matching is currently based on source positions, but we intend to extend it to use both positions and positional uncertainties to form a chi-square statistic for optimal thresholding. The database design includes three main tables, plus a handful of internal tables. The Sources table stores the SExtractor source extractions taken at various times; the MergedSources table stores statistics about the astronomical objects, which are the result of cross-matching records in the Sources table; and the Merges table associates cross-matched primary keys in the Sources table with primary keys in the MergedSources table. Besides judicious database indexing, we have also internally partitioned the Sources table by declination zone, in order to speed up the population of Sources records and make the database more manageable. 
The catalog will be accessible to the public after the proprietary period through IRSA (irsa.ipac.caltech.edu).
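The PTF implementation lives in PL/pgSQL, but the core of the zones algorithm, bucketing sources by declination so a cross-match only scans a few zones, can be sketched in Python. The zone height, dictionary layout, and flat small-angle metric here are illustrative simplifications, not the production schema:

```python
import math

ZONE_HEIGHT_DEG = 0.01  # declination zone height

def zone_id(dec_deg):
    """Zone index of a given declination."""
    return int(math.floor(dec_deg / ZONE_HEIGHT_DEG))

def cross_match(sources_by_zone, ra, dec, radius_deg):
    """Return keys of sources within radius_deg of (ra, dec), scanning only
    the declination zones that could contain a match."""
    hits = []
    for z in range(zone_id(dec - radius_deg), zone_id(dec + radius_deg) + 1):
        for s_ra, s_dec, key in sources_by_zone.get(z, []):
            d_ra = (s_ra - ra) * math.cos(math.radians(dec))  # small-angle approx.
            if d_ra ** 2 + (s_dec - dec) ** 2 <= radius_deg ** 2:
                hits.append(key)
    return hits
```

Restricting the scan to a handful of zones (and, in the real system, a right-ascension range within each zone) is what keeps the 40-billion-row cross-match tractable.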
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability of an ecosystem to be groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. 
Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is a great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrate that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability of an ecosystem to be groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA to develop a systematic approach for the identification of GDEs and it is then applied in the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
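The variance-reduction argument for preferring an RF ensemble over a single RT can be illustrated with a self-contained toy: bagged depth-1 regression trees (stumps) fitted to bootstrap resamples. This sketches the ensemble principle only; the study itself uses full random forests on geospatial predictors:

```python
import random

def fit_stump(data):
    """Depth-1 regression tree: choose the split on x that minimizes the
    summed squared error of the two leaf means."""
    best = None
    xs = sorted(set(x for x, _ in data))
    for i in range(1, len(xs)):
        t = 0.5 * (xs[i - 1] + xs[i])
        left = [y for x, y in data if x <= t]
        right = [y for x, y in data if x > t]
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        sse = sum((y - mean_l) ** 2 for y in left) + sum((y - mean_r) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, mean_l, mean_r)
    _, t, mean_l, mean_r = best
    return lambda x: mean_l if x <= t else mean_r

def fit_forest(data, n_trees=50, seed=0):
    """Bagging: fit each stump on a bootstrap resample and average the
    predictions, which reduces the variance of any single tree."""
    rng = random.Random(seed)
    trees = [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]
    return lambda x: sum(tree(x) for tree in trees) / len(trees)
```

Averaging over resampled trees smooths out the instability of individual splits, which is the same reason RF outperforms a single RT for WTD prediction while the single tree remains easier to interpret.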
Rapidly assessing the probability of exceptionally high natural hazard losses
NASA Astrophysics Data System (ADS)
Gollini, Isabella; Rougier, Jonathan
2014-05-01
One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly, or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration, and discuss the conditions under which the bound is tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
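The cheapest bound of this kind is Markov's inequality applied to the expected annual loss, which is computable instantly from any event loss table. This is a generic sketch of the idea, not one of the (tighter) bounds derived in the paper:

```python
def markov_tail_bound(event_loss_table, capital):
    """Upper bound on P(annual loss >= capital) from Markov's inequality:
    P(L >= c) <= E[L] / c.
    event_loss_table: (annual_event_rate, loss_given_event) pairs, so the
    expected annual loss is the rate-weighted sum of the losses."""
    expected_annual_loss = sum(rate * loss for rate, loss in event_loss_table)
    return min(1.0, expected_annual_loss / capital)
```

Because the bound needs only event rates and losses, it can be re-evaluated instantly during a sensitivity analysis, even when full simulation of the loss distribution would be expensive.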
Improved look-up table method of computer-generated holograms.
Wei, Hui; Gong, Guanghong; Li, Ni
2016-11-10
Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
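The general LUT idea, precomputing the expensive distance-dependent term once and reusing it for every object point, can be sketched as below. The abstract does not define the paper's "distance factor" precisely, so this block merely tabulates a spherical-wave phase term over quantized radial offsets as an illustration; the wavelength and function name are assumptions:

```python
import math

WAVELENGTH_M = 532e-9                     # assumed green laser wavelength
K = 2.0 * math.pi / WAVELENGTH_M          # wavenumber

def build_distance_lut(radial_offsets_m, depth_m):
    """Precompute the fringe contribution cos(k * sqrt(r^2 + z^2)) for each
    quantized radial offset r at one depth plane z, so hologram synthesis can
    look the value up instead of recomputing the square root per point."""
    return {r: math.cos(K * math.hypot(r, depth_m)) for r in radial_offsets_m}
```

At synthesis time each hologram pixel indexes the table by its quantized offset from the object point, trading memory for the per-point transcendental evaluations.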
The European Southern Observatory-MIDAS table file system
NASA Technical Reports Server (NTRS)
Peron, M.; Grosbol, P.
1992-01-01
The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least square methods and user-defined functions are available. Finally, statistical tests of hypothesis and multivariate methods can also operate on tables.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 Pt. 53...
40 CFR 1060.810 - What materials does this part reference?
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (a) ASTM material. Table 1 to this section lists material from the American Society for Testing and..., West Conshohocken, PA 19428 or http://www.astm.com. Table 1 follows: Table 1 to § 1060.810—ASTM Materials Document number and name Part 1060reference ASTM D471-06, Standard Test Method for Rubber Property...
ERIC Educational Resources Information Center
Cherif, Abour A.; Adams, Gerald E.; Cannon, Charles E.
1997-01-01
Describes several activities used to teach students from middle school age to college nonmajors about the nature of matter, atoms, molecules and the periodic table. Strategies integrate such approaches as hands-on activities, visualization, writing, demonstrations, role play, and guided inquiry. For example, the periodic table is viewed as a town…
Completion of the Edward Air Force Base Statistical Guidance Wind Tool
NASA Technical Reports Server (NTRS)
Dreher, Joseph G.
2008-01-01
The goal of this task was to develop a GUI using EAFB wind tower data similar to the KSC SLF peak wind tool that is already in operations at SMG. In 2004, MSFC personnel began work to replicate the KSC SLF tool using several wind towers at EAFB. They completed the analysis and QC of the data, but due to higher priority work did not start development of the GUI. MSFC personnel calculated wind climatologies and probabilities of 10-minute peak wind occurrence based on the 2-minute average wind speed for several EAFB wind towers. Once the data were QC'ed and analyzed the climatologies were calculated following the methodology outlined in Lambert (2003). The climatologies were calculated for each tower and month, and then were stratified by hour, direction (10° sectors), and direction (45° sectors)/hour. For all climatologies, MSFC calculated the mean, standard deviation and observation counts of the 2-minute average and 10-minute peak wind speeds. MSFC personnel also calculated empirical and modeled probabilities of meeting or exceeding specific 10-minute peak wind speeds using PDFs. The empirical PDFs were asymmetrical and bounded on the left by the 2-minute average wind speed. They calculated the parametric PDFs by fitting the GEV distribution to the empirical distributions. Parametric PDFs were calculated in order to smooth and interpolate over variations in the observed values due to possible under-sampling of certain peak winds and to estimate probabilities associated with average winds outside the observed range. MSFC calculated the individual probabilities of meeting or exceeding specific 10-minute peak wind speeds by integrating the area under each curve. The probabilities assist SMG forecasters in assessing the shuttle FR for various 2-minute average wind speeds. The AMU obtained the processed EAFB data from Dr. 
Lee Burns of MSFC and reformatted them for input to Excel PivotTables, which allow users to display different values with point-click-drag techniques. The GUI was created from the PivotTables using VBA code. It is run through a macro within Excel and allows forecasters to quickly display and interpret peak wind climatology and probabilities in a fast-paced operational environment. The GUI was designed to look and operate exactly the same as the KSC SLF tool since SMG forecasters were already familiar with that product. SMG feedback was continually incorporated into the GUI ensuring the end product met their needs. The final version of the GUI along with all climatologies, PDFs, and probabilities has been delivered to SMG and will be put into operational use.
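The exceedance-probability step, integrating the fitted GEV PDF above a threshold, reduces to one minus the GEV CDF. The closed form below uses the standard parameterization (location mu, scale sigma, shape xi); the names and the example values are mine, not taken from the report:

```python
import math

def gev_exceedance(x, mu, sigma, xi):
    """P(10-minute peak wind > x) under a fitted GEV distribution:
    Gumbel branch for xi == 0, general GEV CDF otherwise."""
    if xi == 0.0:
        return 1.0 - math.exp(-math.exp(-(x - mu) / sigma))
    z = 1.0 + xi * (x - mu) / sigma
    if z <= 0.0:
        # Outside the support: below the lower endpoint (xi > 0) the
        # exceedance is 1; above the upper endpoint (xi < 0) it is 0.
        return 1.0 if xi > 0.0 else 0.0
    return 1.0 - math.exp(-z ** (-1.0 / xi))
```

Tabulating this exceedance over a grid of thresholds, one fit per tower, month, and 2-minute average-speed bin, yields exactly the kind of probability product the GUI displays.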
Destro, M. T.; Leitao, M.; Farber, J. M.
1996-01-01
Volume 62, no. 2, p. 705, column 2, line 5 from bottom: "neutralized with chlorine" should read "chlorine neutralized by the addition of 5 ml of a 1% solution of sodium thiosulfate." Page 706, Table 1, footnote b: Footnote b should read "The designation in parentheses is the area or type of sample collected as indicated in Table 3." Page 709, Tables 3 and 4: Tables 3 and 4 should read as shown below. PMID:16535326
Automated edge finishing using an active XY table
Loucks, Clifford S.; Starr, Gregory P.
1993-01-01
The disclosure is directed to an apparatus and method for automated edge finishing using hybrid position/force control of an XY table. The disclosure is particularly directed to learning the trajectory of the edge of a workpiece by "guarded moves". Machining is done by controllably moving the XY table, with the workpiece mounted thereon, along the learned trajectory with feedback from a force sensor. Other similar workpieces can be mounted without a fixture on the XY table, located, and the learned trajectory adjusted accordingly.
Bibliography of terrestrial impact structures
NASA Technical Reports Server (NTRS)
Grolier, M. J.
1985-01-01
This bibliography lists 105 terrestrial impact structures, of which 12 are proven structures, that is, structures associated with meteorites, and 93 are probable. Of the 93 probable structures, 18 are known to contain rocks with meteoritic components or to be enriched in meteoritic signature-elements, both of which enhance their probability of having originated by impact. Many of the structures investigated in the USSR to date are subsurface features that are completely or partly buried by sedimentary rocks. At least 16 buried impact structures have already been identified in North America and Europe. No proven or probable submarine impact structure rising above the ocean floor is presently known; none has been found in Antarctica or Greenland. An attempt has been made to cite for each impact structure all literature published prior to mid-1983. The structures are presented in alphabetical order by continent, and their geographic distribution is indicated on a sketch map of each continent in which they occur. They are also listed in tables in: (1) alphabetical order, (2) order of increasing latitude, (3) order of decreasing diameter, and (4) order of increasing geologic age.
26 CFR 1.461-0 - Table of contents.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) Payment liabilities. (l) [Reserved] (m) Change in method of accounting required by this section. (1) In general. (2) Change in method of accounting for long-term contracts and payment liabilities. § 1.461... 26 Internal Revenue 6 2010-04-01 2010-04-01 false Table of contents. 1.461-0 Section 1.461-0...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Use With the Stack Test Method (300 mm and 450 mm Wafers) I Table I-12 to Subpart I of Part 98... (Bijk) for Semiconductor Manufacturing for Use With the Stack Test Method (300 mm and 450 mm Wafers...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Use With the Stack Test Method (150 mm and 200 mm Wafers) I Table I-11 to Subpart I of Part 98... (Bijk) for Semiconductor Manufacturing for Use With the Stack Test Method (150 mm and 200 mm Wafers...
Arroyo-López, F N; Bautista-Gallego, J; Romero-Gil, V; Rodríguez-Gómez, F; Garrido-Fernández, A
2012-04-16
The present work uses a logistic/probabilistic model to obtain the growth/no growth interfaces of Saccharomyces cerevisiae, Wickerhamomyces anomalus and Candida boidinii (three yeast species commonly isolated from table olives) as a function of the diverse combinations of natamycin (0-30 mg/L), citric acid (0.00-0.45%) and sodium chloride (3-6%). Mathematical models obtained individually for each yeast species showed that progressive concentrations of citric acid decreased the effect of natamycin, which was only observed below 0.15% citric acid. Sodium chloride concentrations around 5% slightly increased S. cerevisiae and C. boidinii resistance to natamycin, although concentrations above 6% of NaCl always favoured inhibition by this antimycotic. An overall growth/no growth interface, built considering data from the three yeast species, revealed that inhibition in the absence of citric acid and at 4.5% NaCl can be reached using natamycin concentrations between 12 and 30 mg/L for growth probabilities between 0.10 and 0.01, respectively. Results obtained in this survey show that it is not advisable to use natamycin and citric acid jointly in table olive packaging because of the observed antagonistic effects between the two preservatives, but table olives processed without citric acid could allow the application of the antifungal. Copyright © 2012 Elsevier B.V. All rights reserved.
Cloth-covered chiropractic treatment tables as a source of allergens and pathogenic microbes☆
Evans, Marion W.; Campbell, Alan; Husbands, Chris; Breshears, Jennell; Ndetan, Harrison; Rupert, Ronald
2008-01-01
Abstract Objective Vinyl chiropractic tables have been found to harbor pathogenic bacteria, but wiping with a simple disinfection agent can significantly reduce bacterial contamination. The aim of this study was to assess the presence of microbes and other allergens or pathogens on cloth chiropractic tables. Methods Cloth-covered tables in a chiropractic college teaching clinic were selected. Samples were taken from the facial piece and hand rests with RODAC plates containing nutrient agar, followed by confirmatory testing when indicated. Results Numerous microbacteria strains were found, including Staphylococcus aureus and Propionibacterium. Allergen-producing molds, including Candida, were also found. Conclusion Cloth tables were shown to contain pathogenic microbacteria and allergens. The chiropractic profession should establish an infection control protocol relevant to treatment tables and discard use of cloth-covered treatment tables in this process. PMID:19674718
Relation between ground water and surface water in Brandywine Creek basin, Pennsylvania
Olmsted, F.H.; Hely, A.G.
1962-01-01
The relation between ground water and surface water was studied in Brandywine Creek basin, an area of 287 square miles in the Piedmont physiographic province in southeastern Pennsylvania. Most of the basin is underlain by crystalline rocks that yield only small to moderate supplies of water to wells, but the creek has an unusually well-sustained base flow. Streamflow records for the Chadds Ford, Pa., gaging station were analyzed; base flow recession curves and hydrographs of base flow were defined for the calendar years 1928-31 and 1952-53. Water budgets calculated for these two periods indicate that about two-thirds of the runoff of Brandywine Creek is base flow--a significantly higher proportion of base flow than in streams draining most other types of consolidated rocks in the region and almost as high as in streams in sandy parts of the Coastal Plain province in New Jersey and Delaware. Ground-water levels in 16 observation wells were compared with the base flow of the creek for 1952-53. The wells are assumed to provide a reasonably good sample of average fluctuations of the water table and its depth below the land surface. Three of the wells having the most suitable records were selected as index wells to use in a more detailed analysis. A direct, linear relation between the monthly average ground-water stage in the index wells and the base flow of the creek in winter months was found. The average ground-water discharge in the basin for 1952-53 was 489 cfs (316 mgd), of which slightly less than one-fourth was estimated to be loss by evapotranspiration. However, the estimated evapotranspiration from ground water, and consequently the estimated total ground-water discharge, may be somewhat high. The average gravity yield (short-term coefficient of storage) of the zone of water-table fluctuation was calculated by two methods. 
The first method, based on the ratio of the change in ground-water storage, as calculated from a winter base-flow recession curve, to the seasonal change in ground-water stage in the observation wells, gave values of about 7 percent (using 16 wells) and 7 1/2 percent (using 3 index wells). The second method, in which the change in ground-water storage is based on a hypothetical base-flow recession curve (derived from the observed linear relation between ground-water stage in the index wells and base flow), gave a value of about 10 1/2 percent. The most probable value of gravity yield is between 7 1/2 and 10 percent, but this estimate may require modification when more information on the average magnitude of water-table fluctuation and the sources of base flow of the creek becomes available. Rough estimates were made of the average coefficient of transmissibility of the rocks in the basin by use of the estimated total ground-water discharge for the period 1952-53, approximate values of length of discharge areas, and average water-table gradients adjacent to the discharge areas. The estimated average coefficient of transmissibility for 1952-53 is roughly 1,000 gpd per foot. The transmissibility is variable, decreasing with decreasing ground-water stage. The seeming inconsistency between the small to moderate ground-water yield to wells and the high yield to streams is explained in terms of the deep permeable soils, the relatively high gravity yield of the zone of water-table fluctuation, the steep water-table gradients toward the streams, the relatively low transmissibility of the rocks, and the rapid decreases in gravity yield below the lower limit of water-table fluctuation. It is concluded that no simple relation exists between the amount of natural ground-water discharge in an area and the proportion of this discharge that can be diverted to wells.
A summary of transition probabilities for atomic absorption lines formed in low-density clouds
NASA Technical Reports Server (NTRS)
Morton, D. C.; Smith, W. H.
1973-01-01
A table of wavelengths, statistical weights, and excitation energies is given for 944 atomic spectral lines in 221 multiplets whose lower energy levels lie below 0.275 eV. Oscillator strengths were adopted for 635 lines in 155 multiplets from the available experimental and theoretical determinations. Radiation damping constants also were derived for most of these lines. This table contains the lines most likely to be observed in absorption in interstellar clouds, circumstellar shells, and the clouds in the direction of quasars where neither the particle density nor the radiation density is high enough to populate the higher levels. All ions of all elements from hydrogen to zinc are included which have resonance lines longward of 912 A, although a number of weaker lines of neutrals and first ions have been omitted.
Coupling of Groundwater Recharge and Biodegradation of Subsurface Crude-Oil Contamination (Invited)
NASA Astrophysics Data System (ADS)
Bekins, B. A.; Hostettler, F. D.; Delin, G. N.; Herkelrath, W. N.; Warren, E.; Campbell, P.; Rosenbauer, R. J.; Cozzarelli, I.
2010-12-01
Surface hydrologic properties controlling groundwater recharge can have a large effect on biodegradation rates in the subsurface. Two studies of crude oil contamination show that degradation rates are dramatically increased where recharge rates are enhanced. The first site, located near Bemidji, Minnesota, was contaminated in August, 1979 when oil from a pipeline rupture infiltrated into a surficial glacial outwash aquifer. Discrete oil phases form three separate pools at the water table, the largest of which is 25x75 m at a depth of 6-8 m. Gas and water concentrations and microbial community data show that methanogenic conditions prevail in this oil pool. There is extreme spatial dependence in the degradation rates such that most of the n-alkanes have been degraded in the upgradient end, but in the downgradient end n-alkane concentrations are nearly unaltered from the original spill. Recharge rates through the two ends of the oil body were estimated using a water table fluctuation method. In 2002, the more degraded end received 15.2 cm of recharge contrasted to 10.7 cm at the less degraded end. The enhanced recharge is caused by topographic focusing of runoff toward a local depression. Microbial data using the Most Probable Number method show that the methanogen concentrations are 10-100 times greater in the more degraded end of the oil body suggesting that a growth nutrient is supplied by recharge. A decrease in partial pressure of N2 compared to Ar in the soil gas indicates nitrogen fixation probably meets N requirements (Amos et al., 2005, WRR, doi:10.1029/2004WR003433). Organic phosphorus is the main form of P in infiltrating pore water and concentration decreases with depth. The second site is located 40 km southeast of the Bemidji site at an oil pipeline pumping station near Cass Lake, Minnesota. This site was contaminated by oil leaking from a pipe coupling for an unknown duration of time between 1971 and 2002. 
The oil body at this site lies under a fenced area of the pumping station and is comparable in size to the largest Bemidji site oil pool. The oil is heavily degraded with complete loss of the n-alkane fraction suggesting that degradation is accelerated at this site. The pumping station is flat, gravel-covered, devoid of vegetation, and surrounded by a berm. Thus, the combined effects of no runoff, rapid infiltration, and zero transpiration all enhance recharge to the oil body. Recharge rates through the gravel yard and the adjacent forested area were estimated using a water table fluctuation method. Data for the first six months of 2010 showed that recharge below the gravel yard was 40% greater than below the forested area. Groundwater ammonia concentrations increase from 0.02 to 0.5 mmol/L under the oil body, while background NO3 is only 0.01 mmol/L and there is negligible N in the oil, again suggesting that N fixation meets N requirements. Combined, these studies suggest that enhanced transport of a limiting nutrient other than N from the surface may accelerate degradation of subsurface contamination.
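The water-table fluctuation method used at both sites amounts to multiplying observed water-table rises by a specific yield. A minimal sketch, with illustrative numbers rather than the measured site values:

```python
# Water-table fluctuation method (sketch): recharge over a period is
# estimated as specific yield times the sum of water-table rises
# attributed to recharge events. All values here are hypothetical.
specific_yield = 0.25          # dimensionless, assumed for outwash sand
rises_cm = [3.2, 0.0, 5.1, 1.8, 0.0, 4.4]   # event rises over the period (cm)

recharge_cm = specific_yield * sum(rises_cm)
print(f"estimated recharge = {recharge_cm:.3f} cm")
```

Comparing such estimates between two locations (e.g., a gravel yard versus a forested area) gives the relative recharge contrasts described above.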
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
In addition to traditional intensity imagery, polarization imaging detection technology also captures multi-dimensional polarization information, improving the probability of target detection and recognition. Research on fusion of polarization images of targets in turbid media is helpful for obtaining high-quality images. Based on laser polarization imaging at visible wavelengths, the corresponding linear polarization intensities were obtained by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media at concentrations ranging from 5% to 10% were measured. Image fusion processing was then introduced: the polarization images were processed with different polarization image fusion methods, several fusion methods with superior performance in turbid media were discussed, and the processing results and data tables were given. Pixel-level, feature-level, and decision-level fusion algorithms were then applied to DOLP (degree of linear polarization) image fusion. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is clearly improved over that of a single image. Finally, the reasons for the increase in image contrast are analyzed.
[Cleanliness Norms 1964-1975].
Noelle-Neumann, E
1976-01-01
In 1964 the Institut für Demoskopie Allensbach made a first survey taking stock of norms concerning cleanliness in the Federal Republic of Germany. At that time, 78% of respondents thought that the vogue among young people of cultivating an unkempt look was past or on the wane (Table 1). Today we know that this fashion was an indicator of more serious desires for change in many different areas like politics, sexual morality, and education, and that its high point was still to come. In the fall of 1975 a second survey, modelled on the one of 1964, was conducted. Again, it concentrated on norms, not on behavior. As expected, norms have changed over this period, but not in a one-directional or simple manner. In general, people are much more permissive about children's looks: neat, clean school dress, properly combed hair, clean shoes, all this and also keeping their things in order has become less important in 1975 (Table 2). To carry a clean handkerchief is becoming old-fashioned (Table 3). On the other hand, principles of bringing up children have not loosened concerning personal hygiene - brushing one's teeth, washing hands, feet, and neck, clean fingernails (Table 4). On one item related to protection of the environment, namely throwing around waste paper, standards have even become more strict (Table 5). With regard to school-leavers, norms of personal hygiene have generally become more strict (Table 6). As living standards have gone up and the number of full bathrooms has risen from 42% to 75% of households, norms of personal hygiene have also increased: one warm bath a week seemed enough to 56% of adults in 1964, but to only 32% in 1975 (Table 7). Also, standards for changing underwear have changed a lot: in 1964 only 12% of respondents said "every day", while in 1975 48% said so (Table 8). Even more stringent norms are applied to young women (Tables 9/10). For comparison: in 1964 there were automatic washing machines in 16% of households, in 1975 in 79%. 
Answers to questions about which qualities men value especially in women and which qualities women value especially in men show a decrease in the valuation of "cleanliness". These results can be interpreted in different ways (Tables 11/12). It seems, however, that "cleanliness" is not going out of fashion as a cultural value. We have found that young people today do not consider clean dress important but that they are probably better washed under their purposely neglected clothing than young people were ten years ago. As a nation, Germans still consider cleanliness to be a particularly German virtue, in 1975 even more so than in 1964 (Table 13). An association test, first made in March 1976, confirms this: when they hear "Germany", 68% of Germans think of "cleanliness" (Table 14).
Spectroscopy and atomic physics of highly ionized Cr, Fe, and Ni for tokamak plasmas
NASA Technical Reports Server (NTRS)
Feldman, U.; Doschek, G. A.; Cheng, C.-C.; Bhatia, A. K.
1980-01-01
The paper considers the spectroscopy and atomic physics for some highly ionized Cr, Fe, and Ni ions produced in tokamak plasmas. Forbidden and intersystem wavelengths for Cr and Ni ions are extrapolated and interpolated using the known wavelengths for Fe lines identified in solar-flare plasmas. Tables of transition probabilities for the B I, C I, N I, O I, and F I isoelectronic sequences are presented, and collision strengths and transition probabilities for Cr, Fe, and Ni ions of the Be I sequence are given. Similarities of tokamak and solar spectra are discussed, and it is shown how the atomic data presented may be used to determine ion abundances and electron densities in low-density plasmas.
NASA Technical Reports Server (NTRS)
Johnson, J. R. (Principal Investigator)
1974-01-01
The author has identified the following significant results. The broad scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables which contained information taken at about 500 ground sites. The classification provided an information base that was suitable for use with small scale photography. A procedure was developed and tested for objectively comparing photo images. The procedure consisted of two parts, image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first stage stratification bases in two stage proportional probability sampling. High altitude photography was used in common at the second stage.
Using known map category marginal frequencies to improve estimates of thematic map accuracy
NASA Technical Reports Server (NTRS)
Card, D. H.
1982-01-01
By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
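The idea of weighting cell probabilities by known map-category marginals can be illustrated with a small sketch. The counts and marginal proportions below are hypothetical, and the weighting shown is one standard way of folding known marginals into contingency-table estimates, in the spirit of the method described.

```python
# Sketch: improve thematic-map accuracy estimates using known
# map-category relative areas. Rows = map category (sampling stratum),
# columns = reference (ground truth) category. Numbers are hypothetical.
import numpy as np

n = np.array([[45,  5],       # samples labeled map class 0
              [10, 40]])      # samples labeled map class 1
pi = np.array([0.7, 0.3])     # known map-category relative areas

row_tot = n.sum(axis=1)
# Cell probabilities: within-stratum proportions scaled by the known
# marginal size of each map category.
p = pi[:, None] * n / row_tot[:, None]
overall_accuracy = np.trace(p)   # sum of diagonal cell probabilities
print(f"marginal-weighted overall accuracy = {overall_accuracy:.3f}")
```

Because the known marginals enter directly, the estimate is valid whether the per-stratum sample sizes were chosen proportionally or not, which is the point made in the abstract.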
Planning for subacute care: predicting demand using acute activity data.
Green, Janette P; McNamee, Jennifer P; Kobel, Conrad; Seraji, Md Habibur R; Lawrence, Suanne J
2016-01-01
Objective The aim of the present study was to develop a robust model that uses the concept of 'rehabilitation-sensitive' Diagnosis Related Groups (DRGs) in predicting demand for rehabilitation and geriatric evaluation and management (GEM) care following acute in-patient episodes provided in Australian hospitals. Methods The model was developed using statistical analyses of national datasets, informed by a panel of expert clinicians and jurisdictional advice. Logistic regression analysis was undertaken using acute in-patient data, published national hospital statistics and data from the Australasian Rehabilitation Outcomes Centre. Results The predictive model comprises tables of probabilities that patients will require rehabilitation or GEM care after an acute episode, with columns defined by age group and rows defined by grouped Australian Refined (AR)-DRGs. Conclusions The existing concept of rehabilitation-sensitive DRGs was revised and extended. When applied to national data, the model provided a conservative estimate of 83% of the activity actually provided. An example demonstrates the application of the model for service planning. What is known about the topic? Health service planning is core business for jurisdictions and local areas. With populations ageing and an acknowledgement of the underservicing of subacute care, it is timely to find improved methods of estimating demand for this type of care. Traditionally, age-sex standardised utilisation rates for individual DRGs have been applied to Australian Bureau of Statistics (ABS) population projections to predict the future need for subacute services. Improved predictions became possible when some AR-DRGs were designated 'rehabilitation-sensitive'. This improved methodology has been used in several Australian jurisdictions. What does this paper add? This paper presents a new tool, or model, to predict demand for rehabilitation and GEM services based on in-patient acute activity. 
In this model, the methodology based on rehabilitation-sensitive AR-DRGs has been extended by updating them to AR-DRG Version 7.0, quantifying the level of 'sensitivity' and incorporating the patient's age to improve the prediction of demand for subacute services. What are the implications for practitioners? The predictive model takes the form of tables of probabilities that patients will require rehabilitation or GEM care after an acute episode and can be applied to acute in-patient administrative datasets in any Australian jurisdiction or local area. The use of patient-level characteristics will enable service planners to improve their forecasting of demand for these services. Clinicians and jurisdictional representatives consulted during the project regarded the model favourably and believed that it was an improvement on currently available methods.
Salvarani, C; Macchioni, P L; Tartoni, P L; Rossi, F; Baricchi, R; Castri, C; Chiaravalloti, F; Portioli, I
1987-01-01
Among the population of Reggio Emilia, Italy, 56 patients with polymyalgia rheumatica (PR) and giant cell arteritis (GCA) were identified during the 5-year period 1981-85. The average annual incidence rates of PR and GCA were 12.8 and 8.8 respectively per 100,000 population aged 50 years or older. Forty-nine patients were followed up and the mean duration of follow-up was 32 months. All the patients received steroid therapy. We have evaluated the cumulative probability of requiring continued steroid therapy between patients with PR only, GCA only, and PR associated with GCA using life-table methods with permanent discontinuation of therapy as an end point. The different duration of steroid therapy between these 3 groups did not achieve statistical significance by the method of Lee and Desu. We identified a 5 variable discriminant function that correctly predicted whether the duration of therapy would be longer or shorter than 16 months (median duration of therapy) in 80% of our patients followed up for at least 24 months. The presence of synovitis in PR is also discussed.
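The life-table calculation described (cumulative probability of requiring continued steroid therapy, with permanent discontinuation as the end point) can be sketched with a product-limit estimator. The durations and censoring flags below are hypothetical, not the Reggio Emilia data.

```python
# Product-limit (Kaplan-Meier) sketch: probability of remaining on
# steroid therapy as a function of time, treating permanent
# discontinuation as the event and ongoing therapy as censoring.
def kaplan_meier(times, events):
    """times sorted ascending; events[i] True if therapy was
    permanently discontinued at times[i], False if censored."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, stopped in zip(times, events):
        if stopped:
            surv *= (at_risk - 1) / at_risk
        curve.append((t, surv))
        at_risk -= 1
    return curve

times = [4, 9, 12, 16, 20, 28]                    # months of follow-up
events = [True, True, False, True, True, False]   # False = still on therapy
curve = kaplan_meier(times, events)
for t, s in curve:
    print(f"month {t:2d}: P(still on therapy) = {s:.3f}")
```

Comparing such curves between the PR-only, GCA-only, and PR-with-GCA groups is what the Lee and Desu test in the abstract evaluates.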
Possible Health Benefits From Reducing Occupational Magnetic Fields
Bowman, Joseph D.; Ray, Tapas K.; Park, Robert M.
2015-01-01
Background Magnetic fields (MF) from AC electricity are a Possible Human Carcinogen, based on limited epidemiologic evidence from exposures far below occupational health limits. Methods To help formulate government guidance on occupational MF, the cancer cases prevented and the monetary benefits accruing to society by reducing workplace exposures were determined. Life-table methods produced Disability Adjusted Life Years, which were converted to monetary values. Results Adjusted for probabilities of causality, the expected increase in a worker’s disability-free life are 0.04 year (2 weeks) from a 1 microtesla (μT) MF reduction in average worklife exposure, which is equivalent to $5,100/worker/μT in year 2010 U.S. dollars (95% confidence interval $1,000–$9,000/worker/μT). Where nine electrosteel workers had 13.8 μT exposures, for example, moving them to ambient MFs would provide $600,000 in benefits to society (uncertainty interval $0–$1,000,000). Conclusions When combined with the costs of controls, this analysis provides guidance for precautionary recommendations for managing occupational MF exposures. PMID:23129537
Optimizing TLB entries for mixed page size storage in contiguous memory
Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Kriegel, Jon K.; Ohmacht, Martin; Steinmacher-Burow, Burkhard
2013-04-30
A system and method for accessing memory are provided. The system comprises a lookup buffer for storing one or more page table entries, wherein each of the one or more page table entries comprises at least a virtual page number and a physical page number; a logic circuit for receiving a virtual address from said processor, said logic circuit for matching the virtual address to the virtual page number in one of the page table entries to select the physical page number in the same page table entry, said page table entry having one or more bits set to exclude a memory range from a page.
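The lookup described in the claim, matching a virtual page number against stored entries and returning the physical page number from the same entry, can be sketched as follows. This is an illustrative software model, not the patented hardware; the class and field names are assumptions, and the mixed-page-size and range-exclusion features are omitted.

```python
# Toy TLB model: translate a virtual address by matching its virtual
# page number (VPN) to a stored entry and substituting the physical
# page number (PPN), preserving the in-page offset.
PAGE_SHIFT = 12  # 4 KiB pages

class TLB:
    def __init__(self):
        self.entries = {}  # VPN -> PPN

    def insert(self, vpn, ppn):
        self.entries[vpn] = ppn

    def translate(self, vaddr):
        vpn = vaddr >> PAGE_SHIFT
        offset = vaddr & ((1 << PAGE_SHIFT) - 1)
        ppn = self.entries.get(vpn)
        if ppn is None:
            raise KeyError("TLB miss")  # real hardware falls back to a page walk
        return (ppn << PAGE_SHIFT) | offset

tlb = TLB()
tlb.insert(0x1A, 0x7F)
print(hex(tlb.translate(0x1A123)))  # -> 0x7f123
```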
Gardner, L.R.; Reeves, H.W.
2002-01-01
Time series of ground-water head at a mid-marsh site near North Inlet, South Carolina, USA can be classified into five types of forcing signatures based on the dominant water flux governing water-level dynamics during a given time interval. The fluxes that can be recognized are recharge by tides and rain, evapotranspiration (ET), seepage into the near surface soil from below, and seepage across the soil surface to balance either ET losses or seepage influxes from below. Minimal estimates for each flux can be made by multiplying the head change induced by it by the measured specific yield of the soil. These flux estimates provide minimal values because ET fluxes resulting from this method are about half as large as those estimated from calculated potential evapotranspiration (PET), which places an upper limit on the actual ET. As evapotranspiration is not moisture-limited at this regularly submerged site, the actual ET is probably nearly equal to PET. Thus, all of the other fluxes are probably twice as large as those given by this method. Application of this method shows that recharge by tides and rain only occurs during spring and summer, when ET exceeds upward seepage from below and is thereby able to draw down the water table below the marsh surface occasionally. During fall and winter, seepage of fresh water from below is largely balanced by seepage out of the soil into overlying tidal water or into sheet flow during tidal exposure. The resulting reduction in soil water salinity may thereby enhance the growth of Spartina in the following spring. ?? 2002, The Society of Wetland Scientists.
2018-06-01
decomposition products from bis-(2-chloroethyl) sulfide (HD). These data were measured using an ASTM International method that is based on differential… The source and purity of the materials studied are listed in Table 1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao; Wan, Liang; Chen, Kai
2015-04-25
An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
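The look-up-table comparison can be sketched as follows: compute the misorientation angle between a parent and a candidate twin orientation, then match it against tabulated angle-axis pairs within a tolerance. The single 60° ⟨111⟩ (Σ3) entry and the tolerance are illustrative, not the paper's full crystal-specific tables, which would also check the rotation axis and apply all point-group symmetry operators.

```python
# Sketch of matching measured misorientations against a twin look-up table.
import numpy as np

def rot(axis, angle_deg):
    """Rotation matrix about a normalized axis (Rodrigues' formula)."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    t = np.radians(angle_deg)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def misorientation_deg(g1, g2):
    """Rotation angle between two orientation matrices."""
    cos_t = (np.trace(g1 @ g2.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Look-up table: twin mode -> (angle in degrees, axis); e.g. the common
# 60 degree <111> Sigma-3 twin in fcc crystals.
twin_table = {"sigma3": (60.0, [1, 1, 1])}

parent = np.eye(3)
twin = rot([1, 1, 1], 60.0)     # simulated "measured" twin orientation
angle = misorientation_deg(parent, twin)
matches = [m for m, (a, _) in twin_table.items() if abs(angle - a) < 1.0]
print(matches)  # -> ['sigma3']
```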
A Remote Registration Based on MIDAS
NASA Astrophysics Data System (ADS)
JIN, Xin
2017-04-01
Software registration is often needed to protect the interests of software developers. This article describes a remote software registration technique. The registration method is as follows: the registration information is stored in a database table; after the program starts, it checks the table for registration information, and if the software is registered the program runs normally. Otherwise, the customer must enter the serial number, which is registered over the network with a remote server. If registration succeeds, the registration information is recorded in the database table. This remote registration method can protect the rights of software developers.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong
2016-03-01
Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, which lead to high table power consumption. To address the large number of table memory accesses in current methods, and thus reduce their high power consumption, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce the memory accesses required for table look-up, and thereby reduce table power consumption. Specifically, index search reduces memory accesses by reducing the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that the proposed index-search table look-up algorithm lowers memory access consumption by about 60% compared with table look-up by sequential search, saving considerable power for CAVLD in H.264/AVC.
RadVel: General toolkit for modeling Radial Velocities
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
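The numerical core of any such toolkit is a Kepler's-equation solver. The following is a generic textbook Newton iteration in Python, not RadVel's actual C implementation:

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=100):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton's method. Generic sketch; RadVel uses its own C solver."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        step = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= step
        if abs(step) < tol:
            break
    return E
```

Given the eccentric anomaly, the true anomaly and hence the model RV at each timestamp follow from standard orbital relations.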
(177)Lu: DDEP Evaluation of the decay scheme for an emerging radiopharmaceutical.
Kellett, M A
2016-03-01
A new decay scheme evaluation using the DDEP methodology for (177)Lu is presented. Recently measured half-life measurements have been incorporated, as well as newly available γ-ray emission probabilities. For the first time, a thorough investigation has been made of the γ-ray multipolarities. The complete data tables and detailed evaluator comments are available through the DDEP website. Copyright © 2015 Elsevier Ltd. All rights reserved.
Does exercise improve symptoms in fibromyalgia?
Rain, Carmen; Seguel, Willy; Vergara, Luis
2015-12-14
It has been proposed that fibromyalgia can be managed by pharmacological and non-pharmacological interventions. Regular physical exercise is commonly used as a non-pharmacological intervention. Searching the Epistemonikos database, which is maintained by screening 30 databases, we identified 14 systematic reviews that together include 25 randomized trials. We combined the evidence using meta-analysis and generated a summary of findings table following the GRADE approach. We conclude that regular physical exercise probably reduces pain in patients with fibromyalgia.
Price of Fairness in Kidney Exchange
2014-05-01
solver uses branch-and-price, a technique that proves optimality by incrementally generating only a small part of the model during tree search [8...factors like failure probability and chain position, as in the probabilistic model). We will use this multiplicative re-weighting in our experiments in...Table 2 gives the average loss in efficiency for each of these models over multiple generated pool sizes, with 40 runs per pool size per model, under
U.S. Army Advertising from the Recruits’ Viewpoint
1985-09-01
were found. Males report higher recall for Mail advertising (see Table 82-8A). This is certainly consistent with the marketing strategy of targeting...accessions motivated through advertising. A major strength of this survey is in measuring the motives of specific market segments. The timing of this survey is...differences, while the larger Senior/Grad differences are probably the result of market targeting of advertising media. 1982 findings are similar for
Ship Underwater Threat Response System (SUTRS): A Feasibility Study of Organic Mine Point-Defense
2012-09-01
by implementing and testing the design until a final product has been established that addresses (and has been traced throughout to) the...The assumptions used to evaluate those TPMs are as follows: • The threshold Probability of Success for the total system should be 90% survival...Threat Response System xviii TOA Table of Allowance TPM Technical Performance Measures TTP Tactics Techniques and Procedures U.S. United
Quantifying C-17 Aircrew Training Priorities
2015-06-19
mentally process the situation. Skill-Probability-Risk (SPR) Score Once the data from the SS, SP, and SR has been calculated, an SPR Score can...discrepancies between the SPR Rank-Score and the Vol 1 Rank-Score. Figure 4.8 illustrates the differences in scores between the two processes. Ideally, the...tables. The extreme lower left and extreme upper right of the chart are areas of major discrepancy between the two processes and potentially provide the
MAG4 Versus Alternative Techniques for Forecasting Active-Region Flare Productivity
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Barghouty, Abdulnasser F.; Khazanov, Igor
2014-01-01
MAG4 is a technique of forecasting an active region's rate of production of major flares in the coming few days from a free-magnetic-energy proxy. We present a statistical method of measuring the difference in performance between MAG4 and comparable alternative techniques that forecast an active region's major-flare productivity from alternative observed aspects of the active region. We demonstrate the method by measuring the difference in performance between the "Present MAG4" technique and each of three alternative techniques, called "McIntosh Active-Region Class," "Total Magnetic Flux," and "Next MAG4." We do this by using (1) the MAG4 database of magnetograms and major-flare histories of sunspot active regions, (2) the NOAA table of the major-flare productivity of each of 60 McIntosh active-region classes of sunspot active regions, and (3) five technique-performance metrics (Heidke Skill Score, True Skill Score, Percent Correct, Probability of Detection, and False Alarm Rate) evaluated from 2000 random two-by-two contingency tables obtained from the databases. We find that (1) Present MAG4 far outperforms both McIntosh Active-Region Class and Total Magnetic Flux, (2) Next MAG4 significantly outperforms Present MAG4, (3) the performance of Next MAG4 is insensitive to the forward and backward temporal windows used, in the range of one to a few days, and (4) forecasting from the free-energy proxy in combination with either any broad category of McIntosh active-region classes or any Mount Wilson active-region class gives no significant performance improvement over forecasting from the free-energy proxy alone (Present MAG4).
Risks and probabilities of breast cancer: short-term versus lifetime probabilities.
Bryant, H E; Brasher, P M
1994-01-01
OBJECTIVE: To calculate age-specific short-term and lifetime probabilities of breast cancer among a cohort of Canadian women. DESIGN: Double decrement life table. SETTING: Alberta. SUBJECTS: Women with first invasive breast cancers registered with the Alberta Cancer Registry between 1985 and 1987. MAIN OUTCOME MEASURES: Lifetime probability of breast cancer from birth and for women at various ages; short-term (up to 10 years) probability of breast cancer for women at various ages. RESULTS: The lifetime probability of breast cancer is 10.17% at birth and peaks at 10.34% at age 25 years, after which it decreases owing to a decline in the number of years over which breast cancer risk will be experienced. However, the probability of manifesting breast cancer in the next year increases steadily from the age of 30 onward, reaching 0.36% at 85 years. The probability of manifesting the disease within the next 10 years peaks at 2.97% at age 70 and decreases thereafter, again owing to declining probabilities of surviving the interval. CONCLUSIONS: Given that the incidence of breast cancer among Albertan women during the study period was similar to the national average, we conclude that currently more than 1 in 10 women in Canada can expect to have breast cancer at some point during their life. However, risk varies considerably over a woman's lifetime, with most risk concentrated after age 49. On the basis of the shorter-term age-specific risks that we present, the clinician can put breast cancer risk into perspective for younger women and heighten awareness among women aged 50 years or more. PMID:8287343
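The double-decrement arithmetic behind such life-table figures can be sketched in a few lines. The hazards below are invented for illustration, not Alberta registry data: each year a disease-free woman may either be diagnosed or die of another cause, and the lifetime probability accumulates the diagnosis term.

```python
# Double-decrement sketch: probability of developing a disease between two
# ages, given annual diagnosis and other-cause death hazards per age.
# All rates here are hypothetical, chosen only to make the mechanics clear.

def lifetime_probability(incidence, other_mortality, start_age, end_age):
    alive_disease_free = 1.0  # probability of reaching this age disease-free
    p_disease = 0.0
    for age in range(start_age, end_age + 1):
        q_d = incidence.get(age, 0.0)        # annual diagnosis probability
        q_m = other_mortality.get(age, 0.0)  # annual other-cause death probability
        p_disease += alive_disease_free * q_d
        alive_disease_free *= 1.0 - q_d - q_m
    return p_disease

incidence = {a: 0.001 for a in range(30, 86)}
other_mortality = {a: 0.005 + 0.0005 * (a - 30) for a in range(30, 86)}
```

The decline in lifetime probability at older starting ages in the abstract falls out naturally: fewer remaining years, and a shrinking chance of surviving them disease-free.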
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Use of benefit table in finding your primary insurance amount from your average monthly wage. 404.222 Section 404.222 Employees' Benefits SOCIAL SECURITY... Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.222 Use of benefit table in...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Use of benefit table in finding your primary insurance amount from your average monthly wage. 404.222 Section 404.222 Employees' Benefits SOCIAL SECURITY... Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.222 Use of benefit table in...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Use of benefit table in finding your primary insurance amount from your average monthly wage. 404.222 Section 404.222 Employees' Benefits SOCIAL SECURITY... Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.222 Use of benefit table in...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Use of benefit table in finding your primary insurance amount from your average monthly wage. 404.222 Section 404.222 Employees' Benefits SOCIAL SECURITY... Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.222 Use of benefit table in...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Use of benefit table in finding your primary insurance amount from your average monthly wage. 404.222 Section 404.222 Employees' Benefits SOCIAL SECURITY... Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.222 Use of benefit table in...
1990-02-01
infancy during Cycle I, at the novice level during Cycle II, and at the advanced beginner level during Cycle III. The next two sections and Chapters 6...Table 1 - 1983 NSWC Planning Activities...Table 1A - Planning Activity Flowchart...Table 2 - Sector/SBU
Towards "Inverse" Character Tables? A One-Step Method for Decomposing Reducible Representations
ERIC Educational Resources Information Center
Piquemal, J.-Y.; Losno, R.; Ancian, B.
2009-01-01
In the framework of group theory, a new procedure is described for a one-step automated reduction of reducible representations. The matrix inversion tool, provided by standard spreadsheet software, is applied to the central part of the character table that contains the characters of the irreducible representation. This method is not restricted to…
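The spreadsheet matrix-inversion trick can be mirrored in a few lines of Python. This is a hedged sketch using the C2v point group, whose character table is square and therefore invertible; if the reducible character is chi = n @ X (rows of X are irreducible characters), the multiplicities n follow in one linear solve.

```python
import numpy as np

# C2v character table: rows A1, A2, B1, B2; columns E, C2, sigma_v, sigma_v'.
X = np.array([
    [1,  1,  1,  1],   # A1
    [1,  1, -1, -1],   # A2
    [1, -1,  1, -1],   # B1
    [1, -1, -1,  1],   # B2
], dtype=float)

def decompose(chi):
    """Multiplicities n of each irrep, solving chi = n @ X in one step,
    i.e. applying the 'inverse' character table."""
    return np.linalg.solve(X.T, np.asarray(chi, dtype=float))
```

For example, the reducible character (3, 1, 1, 3) decomposes as 2A1 + B2.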
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
... (TEACH) Grant; and the Iraq and Afghanistan Service Grant. Federal Student Aid, an office of the U.S..., descriptions and submission methods for each are listed in Table 1. Table 1--Federal Student Aid Application Components Component Description Submission method Initial Submission of FAFSA FAFSA on the Web (FOTW...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Methods and Procedures for Conducting Emissions Test for Stack Systems I Table I-9 to Subpart I of Part 98 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electronics...
Mathisen, R W; Mazess, R B
1981-02-01
The authors present a revised method for calculating life expectancy tables for populations in which individual ages at death are known or can be estimated. The conventional and revised methods are compared using data for U.S. and Hungarian males in an attempt to determine the accuracy of each method in calculating life expectancy at advanced ages. Means of correcting errors caused by age rounding, age exaggeration, and infant mortality are presented.
The Star Schema Benchmark and Augmented Fact Table Indexing
NASA Astrophysics Data System (ADS)
O'Neil, Patrick; O'Neil, Elizabeth; Chen, Xuedong; Revilak, Stephen
We provide a benchmark measuring star schema queries retrieving data from a fact table with Where clause column restrictions on dimension tables. Clustering is crucial to performance with modern disk technology, since retrievals with filter factors down to 0.0005 are now performed most efficiently by sequential table search rather than by indexed access. DB2’s Multi-Dimensional Clustering (MDC) provides methods to "dice" the fact table along a number of orthogonal "dimensions", but only when these dimensions are columns in the fact table. The diced cells cluster fact rows on several of these "dimensions" at once so queries restricting several such columns can access crucially localized data, with much faster query response. Unfortunately, columns of dimension tables of a star schema are not usually represented in the fact table. In this paper, we show a simple way to adjoin physical copies of dimension columns to the fact table, dicing data to effectively cluster query retrieval, and explain how such dicing can be achieved on database products other than DB2. We provide benchmark measurements to show successful use of this methodology on three commercial database products.
Dosi, G; Taggi, F; Macchia, T
2009-01-01
To reduce the prevalence of driving under the influence, tables that allow drivers to estimate their own blood alcohol concentration (BAC) from the type and quantity of alcoholic drinks consumed have been enacted by decree in Italy. These tables, based on a modified Widmark formula, are now posted in all public establishments serving alcoholic beverages. The aim of this initiative is to encourage people who consume alcohol and then drive to take their estimated BAC into account and, on that basis, take suitable action if needed (avoid or limit further consumption, wait longer before driving, let a sober person drive). Nevertheless, there are many occasions when these tables are not available. To allow anybody to roughly estimate their own BAC in these cases as well, a suitable method has been developed. Briefly, the weight (in grams) of the alcohol consumed is divided by: half her own body weight for a woman drinking on an empty stomach (90% of her body weight on a full stomach); 70% of his own body weight for a man drinking on an empty stomach (120% of his body weight on a full stomach). Agreement between BAC values estimated by the proposed method and those shown in the ministerial tables is very close: they differ by a few hundredths of a gram per liter. Unlike the ministerial tables, the proposed method requires computing the grams of alcohol ingested. This may involve some difficulty, which can nevertheless be easily overcome. In our opinion, the skill of computing the grams of alcohol consumed is highly significant, since it gives the subject a strong signal not only in terms of road safety but also of health. The ministerial tables and the proposed method should be part of the instruction required to obtain a driving licence and to recover driving-licence points that have been taken away.
More broadly, schools should teach young people to calculate the quantity of alcohol in each drink, making them aware of the risks and paving the way for more aware drinking when they come of age.
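The rule of thumb above writes down directly as code. This is a sketch of the proposed mental-arithmetic method; the divisor fractions are those stated in the abstract, and the function name and signature are our own.

```python
def estimate_bac(alcohol_grams, body_weight_kg, sex, full_stomach):
    """Rough BAC estimate (g/L): grams of alcohol consumed, divided by a
    sex- and stomach-dependent fraction of body weight, per the proposed
    rule of thumb (not the ministerial tables themselves)."""
    fraction = {
        ("F", False): 0.50,  # woman, empty stomach: half her weight
        ("F", True):  0.90,  # woman, full stomach: 90% of her weight
        ("M", False): 0.70,  # man, empty stomach: 70% of his weight
        ("M", True):  1.20,  # man, full stomach: 120% of his weight
    }[(sex, full_stomach)]
    return alcohol_grams / (fraction * body_weight_kg)
```

For example, 20 g of alcohol for a 70 kg man on an empty stomach gives roughly 0.41 g/L.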
Ingram, Deborah D; Lochner, Kimberly A; Cox, Christine S
2008-10-01
The National Center for Health Statistics (NCHS) has produced the 1986-2000 National Health Interview Survey (NHIS) Linked Mortality Files by linking eligible adults in the 1986-2000 NHIS cohorts through probabilistic record linkage to the National Death Index to obtain mortality follow-up through December 31, 2002. The resulting files contain more than 120,000 deaths and an average of 9 years of survival time. To assess how well mortality was ascertained in the linked mortality files, NCHS has conducted a comparison of the mortality experience of the 1986-2000 NHIS cohorts with that of the U.S. population. This report presents the results of this comparative mortality assessment. Methods: The survival of each annual NHIS cohort was compared with that of the U.S. population during the same period. Cumulative survival probabilities for each annual NHIS cohort were derived using the Kaplan-Meier product limit method, and corresponding cumulative survival probabilities were computed for the U.S. population using information from annual U.S. life tables. The survival probabilities were calculated at various lengths of follow-up for each age-race-sex group of each NHIS cohort and for the U.S. population. Results: As expected, mortality tended to be underestimated in the NHIS cohorts because the sample includes only civilian, noninstitutionalized persons, but this underestimation generally was not statistically significant. Statistically significant differences increased with length of follow-up, occurred more often for white females than for the other race-sex groups, and occurred more often in the oldest age groups. In general, the survival experience of the age-race-sex groups of each NHIS cohort corresponds quite closely to that of the U.S. population, providing support that the ascertainment of mortality through the probabilistic record linkage accurately reflects the mortality experience of the NHIS cohorts.
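The Kaplan-Meier product-limit step used for the cohort survival curves is simple enough to sketch in minimal Python (subjects censored at a time t are conventionally counted as at risk for deaths at t):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. times: follow-up per subject;
    events: 1 = death observed, 0 = censored. Returns (event times,
    survival probabilities after each event time). Minimal sketch."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, step_times, step_surv = 1.0, [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = removed = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk  # conditional survival at t
            step_times.append(t)
            step_surv.append(surv)
        n_at_risk -= removed  # deaths and censorings both leave the risk set
    return step_times, step_surv
```

Comparing such cohort estimates with life-table survival for the general population at matched follow-up lengths is exactly the assessment the report describes.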
iTTVis: Interactive Visualization of Table Tennis Data.
Wu, Yingcai; Lan, Ji; Shu, Xinhuan; Ji, Chenyang; Zhao, Kejian; Wang, Jiachen; Zhang, Hui
2018-01-01
The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.
Benndorf, Matthias; Neubauer, Jakob; Langer, Mathias; Kotter, Elmar
2017-03-01
In the diagnostic process of primary bone tumors, patient age, tumor localization and to a lesser extent sex affect the differential diagnosis. We therefore aim to develop a pretest probability calculator for primary malignant bone tumors based on population data taking these variables into account. We access the SEER (Surveillance, Epidemiology and End Results Program of the National Cancer Institute, 2015 release) database and analyze data of all primary malignant bone tumors diagnosed between 1973 and 2012. We record age at diagnosis, tumor localization according to the International Classification of Diseases (ICD-O-3) and sex. We take relative probability of the single tumor entity as a surrogate parameter for unadjusted pretest probability. We build a probabilistic (naïve Bayes) classifier to calculate pretest probabilities adjusted for age, tumor localization and sex. We analyze data from 12,931 patients (647 chondroblastic osteosarcomas, 3659 chondrosarcomas, 1080 chordomas, 185 dedifferentiated chondrosarcomas, 2006 Ewing's sarcomas, 281 fibroblastic osteosarcomas, 129 fibrosarcomas, 291 fibrous malignant histiocytomas, 289 malignant giant cell tumors, 238 myxoid chondrosarcomas, 3730 osteosarcomas, 252 parosteal osteosarcomas, 144 telangiectatic osteosarcomas). We make our probability calculator accessible at http://ebm-radiology.com/bayesbone/index.html . We provide exhaustive tables for age and localization data. Results from tenfold cross-validation show that in 79.8 % of cases the pretest probability is correctly raised. Our approach employs population data to calculate relative pretest probabilities for primary malignant bone tumors. The calculator is not diagnostic in nature. However, resulting probabilities might serve as an initial evaluation of probabilities of tumors on the differential diagnosis list.
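A naïve Bayes pretest calculator of this kind multiplies entity priors by per-entity conditional frequencies and renormalizes. The sketch below uses the entity counts quoted in the abstract as priors but entirely hypothetical age-group and sex distributions, so it illustrates the mechanics only, not the SEER-derived calculator.

```python
# Priors: entity counts from the abstract. Conditional tables: invented
# illustrative frequencies, NOT the SEER-derived values behind the calculator.
prior = {"osteosarcoma": 3730, "chondrosarcoma": 3659, "chordoma": 1080}
p_age = {
    "osteosarcoma":   {"<20": 0.55, "20-60": 0.35, ">60": 0.10},
    "chondrosarcoma": {"<20": 0.05, "20-60": 0.60, ">60": 0.35},
    "chordoma":       {"<20": 0.02, "20-60": 0.48, ">60": 0.50},
}
p_sex = {
    "osteosarcoma":   {"M": 0.55, "F": 0.45},
    "chondrosarcoma": {"M": 0.52, "F": 0.48},
    "chordoma":       {"M": 0.60, "F": 0.40},
}

def pretest_probabilities(age_group, sex):
    """Naive Bayes: score = prior * P(age|entity) * P(sex|entity),
    normalized over all entities."""
    scores = {e: prior[e] * p_age[e][age_group] * p_sex[e][sex] for e in prior}
    total = sum(scores.values())
    return {e: s / total for e, s in scores.items()}
```

With these toy tables, a young male patient comes out most probably osteosarcoma, mirroring how the published calculator re-ranks the differential by age, localization, and sex.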
Langner, G
1998-01-01
"The first available written source in human history relating to the description of the life expectancy of a living population is a legal text which originates from the Roman jurist Ulpianus (murdered in AD 228). In contrast to the prevailing opinion in demography, I consider the text not only to be of 'historical interest'...but to be a document of inestimable worth for evaluating population survival probability in the Roman empire. The criteria specified by Ulpianus are in line with the 'pan-human' survival function as described by modern model life tables, when based on adulthood. Values calculated from tomb inscriptions follow the lowest level of the model life tables as well and support Ulpianus' statements. The specifications by Ulpianus for the population of the Roman world empire as a whole, in the 'best fit' with modern life tables, lead to an average life expectancy of 20 years. As a consequence, a high infant mortality rate of almost 400 [per thousand] can be concluded, resulting in no more than three children at the age of five in an average family, in spite of a high fertility rate." (EXCERPT)
Pires, Carla; Vigário, Marina; Cavaco, Afonso
2016-06-01
The graphical content of the Medicines Package Inserts (MPIs), such as illustrations and typographic features should be legible and appropriate, as required by international pharmaceutical regulations. To study: (1) the frequency and type of MPIs' key graphic elements, (2) their compliance with regulations and (3) how educated people understand them. Descriptive study: characterisation of the graphical content of 651 MPIs. Usability study: illustrations and tables (purposively selected) were evaluated with questionnaires in three groups of humanities undergraduates (illustrations only, illustrations plus text and text only). Descriptive study: illustrations and tables were respectively identified in 6.3% and 11.8% of the MPIs. The illustrations were mainly related to how to take/use the medicine. Non-recommended graphical representations were found (e.g. italic or underline). Usability test: legibility issues were identified, especially for the group of isolated illustrations. The scarce use of illustrations and tables possibly affected the legibility of the MPIs. Compulsory legibility tests are needed to guarantee the MPIs' proper use, thus contributing to a safe use of medicines. Overall, this study highlighted the need to carefully revise/assess the MPIs' design and probably increase health information experts' awareness on this issue. © 2015 Health Libraries Group.
Making a vision document tangible using "vision-tactics-metrics" tables.
Drury, Ivo; Slomski, Carol
2006-01-01
We describe a method of making a vision document tangible by attaching specific tactics and metrics to the key elements of the vision. We report on the development and early use of a "vision-tactics-metrics" table in a department of surgery. Use of the table centered the vision in the daily life of the department and its faculty, and facilitated cultural change.
Lynette R. Potvin; Evan S. Kane; Rodney A. Chimner; Randall K. Kolka; Erik A. Lilleskov
2015-01-01
Aims Our objective was to assess the impacts of water table position and plant functional type on peat structure, plant community composition and aboveground plant production. Methods We initiated a full factorial experiment with 2 water table (WT) treatments (high and low) and 3 plant functional groups (PFG: sedge, Ericaceae,...
NASA Astrophysics Data System (ADS)
Gilmore, T. E.; Zlotnik, V. A.; Johnson, M.
2017-12-01
Groundwater table elevations are one of the most fundamental measurements used to characterize unconfined aquifers, groundwater flow patterns, and aquifer sustainability over time. In this study, we developed an analytical model that relies on analysis of groundwater elevation contour (equipotential) shape, aquifer transmissivity, and streambed gradient between two parallel, perennial streams. Using two existing regional water table maps, created at different times using different methods, our analysis of groundwater elevation contours, transmissivity and streambed gradient produced groundwater recharge rates (42-218 mm yr-1) that were consistent with previous independent recharge estimates from different methods. The three regions we investigated overly the High Plains Aquifer in Nebraska and included some areas where groundwater is used for irrigation. The three regions ranged from 1,500 to 3,300 km2, with either Sand Hills surficial geology, or Sand Hills transitioning to loess. Based on our results, the approach may be used to increase the value of existing water table maps, and may be useful as a diagnostic tool to evaluate the quality of groundwater table maps, identify areas in need of detailed aquifer characterization and expansion of groundwater monitoring networks, and/or as a first approximation before investing in more complex approaches to groundwater recharge estimation.
[Accuracy of three common optometry methods in examination of refraction in juveniles].
Su, Ting; Min, Xiaoshan; Liu, Shuangzhen; Li, Fengyun; Tan, Xingping; Zhong, Yanni; Deng, Shaoling
2016-02-01
To compare the results of three methods (Suresight handheld autorefractor, table-mounted autorefractor, and retinoscopy) in the examination of juvenile patients with or without cycloplegia. Firstly, 156 eyes of 78 juveniles (5 to 17 years old) were examined using a WelchAllyn Suresight handheld autorefractor and a NIDEK ARK-510A table-mounted autorefractor, with and without cycloplegia; secondly, retinoscopy was performed with cycloplegia. The spherical powers measured without cycloplegia were significantly greater than those measured with cycloplegia (P<0.05); without cycloplegia, there was no significant difference in spherical power, cylindrical power, or cylindrical axis between the Suresight handheld autorefractor and retinoscopy (P>0.05). These results were highly consistent, suggesting a tendency towards short-sightedness. However, the spherical and cylindrical powers measured by the table-mounted autorefractor were significantly different (P<0.05); with cycloplegia, there was a significant difference in spherical power between the Suresight handheld autorefractor and retinoscopy (P<0.05). Cycloplegic retinoscopy is necessary for juvenile refraction examination. Under natural pupil conditions, the Suresight handheld autorefractor is better than the table-mounted autorefractor, though both show a myopia tendency. Nevertheless, the table-mounted autorefractor can be recommended for lens-trial prescription. As a strong reference for subjective optometry, retinoscopy should be the gold standard for measuring refractive errors.
Taylor, Darlene; Durigon, Monica; Davis, Heather; Archibald, Chris; Konrad, Bernhard; Coombs, Daniel; Gilbert, Mark; Cook, Darrel; Krajden, Mel; Wong, Tom; Ogilvie, Gina
2015-03-01
Failure to understand the risk of false-negative HIV test results during the window period results in anxiety. Patients typically want accurate test results as soon as possible, while clinicians prefer to wait until the probability of a false-negative is virtually nil. This review summarizes the median window periods for third-generation antibody and fourth-generation HIV tests and provides the probability of a false-negative result for various days post-exposure. Data were extracted from published seroconversion panels. A 10-day eclipse period was used to estimate days from infection to first detection of HIV RNA. Median (interquartile range) days to seroconversion were calculated, and probabilities of a false-negative result at various time periods post-exposure are reported. The median (interquartile range) window period for third-generation tests was 22 days (19-25) and 18 days (16-24) for fourth-generation tests. The probability of a false-negative result is 0.01 at 80 days' post-exposure for third-generation tests and at 42 days for fourth-generation tests. The table of probabilities of false-negative HIV test results may be useful during pre- and post-test HIV counselling to inform co-decision making regarding the ideal time to test for HIV. © The Author(s) 2014.
The 2-10 keV unabsorbed luminosity function of AGN from the LSS, CDFS, and COSMOS surveys
NASA Astrophysics Data System (ADS)
Ranalli, P.; Koulouridis, E.; Georgantopoulos, I.; Fotopoulou, S.; Hsu, L.-T.; Salvato, M.; Comastri, A.; Pierre, M.; Cappelluti, N.; Carrera, F. J.; Chiappetti, L.; Clerc, N.; Gilli, R.; Iwasawa, K.; Pacaud, F.; Paltani, S.; Plionis, E.; Vignali, C.
2016-05-01
The XMM-Large scale structure (XMM-LSS), XMM-Cosmological evolution survey (XMM-COSMOS), and XMM-Chandra deep field south (XMM-CDFS) surveys are complementary in terms of sky coverage and depth. Together, they form a clean sample with the least possible variance in instrument effective areas and point spread function. Therefore this is one of the best samples available to determine the 2-10 keV luminosity function of active galactic nuclei (AGN) and their evolution. The samples and the relevant corrections for incompleteness are described. A total of 2887 AGN are used to build the LF in the luminosity interval 10^42-10^46 erg s^-1 and in the redshift interval 0.001-4. A new method to correct for absorption by considering the probability distribution for the column density conditioned on the hardness ratio is presented. The binned luminosity function and its evolution are determined with a variant of the Page-Carrera method, which is improved to include corrections for absorption and to account for the full probability distribution of photometric redshifts. Parametric models, namely a double power law with luminosity and density evolution (LADE) or luminosity-dependent density evolution (LDDE), are explored using Bayesian inference. We introduce the Watanabe-Akaike information criterion (WAIC) to compare the models and estimate their predictive power. Our data are best described by the LADE model, as hinted by the WAIC indicator. We also explore the recently proposed 15-parameter extended LDDE model and find that this extension is not supported by our data. The strength of our method is that it provides unabsorbed, non-parametric estimates, credible intervals for luminosity function parameters, and a model choice based on predictive power for future data. 
Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA.Tables with the samples of the posterior probability distributions are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A80
40 CFR Appendix D to Part 61 - Methods for Estimating Radionuclide Emissions
Code of Federal Regulations, 2011 CFR
2011-07-01
... Table 1. Table 1—Adjustment to Emission Factors for Effluent Controls Controls Types of radionuclides... applicable to gaseous radionuclides; periodic testing is prudent to ensure high removal efficiency. Fabric...
40 CFR Appendix D to Part 61 - Methods for Estimating Radionuclide Emissions
Code of Federal Regulations, 2013 CFR
2013-07-01
... Table 1. Table 1—Adjustment to Emission Factors for Effluent Controls Controls Types of radionuclides... applicable to gaseous radionuclides; periodic testing is prudent to ensure high removal efficiency. Fabric...
VizieR Online Data Catalog: SLoWPoKES-II catalog (Dhital+, 2015)
NASA Astrophysics Data System (ADS)
Dhital, S.; West, A. A.; Stassun, K. G.; Schluns, K. J.; Massey, A. P.
2015-11-01
We have identified the Sloan Low-mass Wide Pairs of Kinematically Equivalent Systems (SLoWPoKES)-II catalog of 105537 wide, low-mass binaries without using proper motions. We extend the SLoWPoKES catalog (Paper I; Dhital et al. 2010, cat. J/AJ/139/2566) by identifying binary systems with angular separations of 1-20'' based entirely on SDSS photometry and astrometry. As in Paper I, we used the Catalog Archive Server query tool (CasJobs; http://skyserver.sdss3.org/CasJobs/) to select the sample of low-mass stars from the SDSS-DR8 star table as having r-i>=0.3 and i-z>=0.2, consistent with spectral types of K5 or later. Following Paper I (Dhital et al. 2010, cat. J/AJ/139/2566), we classified candidate pairs with a probability of chance alignment Pf <= 0.05 as real binaries. We note that this limit does not have any physical motivation but was chosen to minimize the number of spurious pairs. This cut results in 105537 M dwarf (dM)+MS (see Table 3), 78 white dwarf (WD)+dM (see Table 5), and 184 sdM+sdM (see Table 6) binary systems with separations of 1-20''. Of the dM+MS binaries, 44 are very low-mass (VLM) binary candidates (see Table 4), with colors redder than the median M7 dwarf for both components. This represents a significant increase over the SLoWPoKES catalog of 1342 common proper motion (CPM) binaries that we presented in Paper I (Dhital et al. 2010, cat. J/AJ/139/2566). The SLoWPoKES and SLoWPoKES-II catalogs are available on the Filtergraph portal (http://slowpokes.vanderbilt.edu/). (4 data files).
NASA Technical Reports Server (NTRS)
Holmquist, R.; Pearl, D.
1980-01-01
Theoretical equations are derived for molecular divergence with respect to gene and protein structure in the presence of genetic events with unequal probabilities: amino acid and base compositions, the frequencies of nucleotide replacements, the usage of degenerate codons, the distribution of fixed base replacements within codons, and the distribution of fixed base replacements among codons. Results are presented in the form of tables relating the probabilities of given numbers of codon base changes with respect to the original codon for the alpha hemoglobin, beta hemoglobin, myoglobin, cytochrome c, and parvalbumin group gene families. Application of the calculations to the rabbit alpha and beta hemoglobin mRNAs and proteins indicates that the genes are separated by about 425 fixed base replacements distributed over 114 codon sites, which is a factor of two greater than previous estimates. The theoretical results also suggest that many more base replacements are required to effect a given gene or protein structural change than previously believed.
Minefield reconnaissance and detector system
Butler, M.T.; Cave, S.P.; Creager, J.D.; Johnson, C.M.; Mathes, J.B.; Smith, K.J.
1994-04-26
A multi-sensor system is described for detecting the presence of objects on the surface of the ground or buried just under the surface, such as anti-personnel or anti-tank mines or the like. A remote sensor platform has a plurality of metal detector sensors and a plurality of short pulse radar sensors. The remote sensor platform is remotely controlled from a processing and control unit and signals from the remote sensor platform are sent to the processing and control unit where they are individually evaluated in separate data analysis subprocess steps to obtain a probability score for each of the pluralities of sensors. These probability scores are combined in a fusion subprocess step by comparing score sets to a probability table which is derived based upon the historical incidence of object present conditions given that score set. A decision making rule is applied to provide an output which is optionally provided to a marker subprocess for controlling a marker device to mark the location of found objects. 7 figures.
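The fusion step described in this abstract can be illustrated compactly: each sensor contributes a discretized probability score, the tuple of scores indexes a table of historical incidence of object-present conditions, and a threshold decision rule fires the marker. This is a minimal sketch under assumed interfaces; the table values and the 0.5 threshold below are invented for illustration, not taken from the patent.

```python
def fuse(scores, prob_table, threshold=0.5):
    """Combine per-sensor score bins via a historical probability table.

    scores     : tuple of discretized scores, one per sensor
    prob_table : dict mapping score tuples to historical P(object | scores)
    Returns (probability, mark_decision).
    """
    p = prob_table.get(tuple(scores), 0.0)  # unseen score sets default to 0
    return p, p >= threshold

# Toy historical incidence table: (metal-detector bin, radar bin) -> P(object)
table = {(2, 2): 0.9, (2, 1): 0.6, (1, 1): 0.2}
```

For example, a strong response on both sensor types, `fuse((2, 2), table)`, exceeds the threshold and would trigger the marker subprocess, while `fuse((1, 1), table)` would not.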
Code of Federal Regulations, 2013 CFR
2013-07-01
... Hazardous Air Pollutants: Flexible Polyurethane Foam Fabrication Operations Pt. 63, Subpt. MMMMM, Table 3... use chlorinated fire retardants in the laminated foam a. Method 26A in appendix A to part 60 of this... chlorinated fire retardants in the laminated foam a. A method approved by the Administrator i. Conduct the...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Hazardous Air Pollutants: Flexible Polyurethane Foam Fabrication Operations Pt. 63, Subpt. MMMMM, Table 3... use chlorinated fire retardants in the laminated foam a. Method 26A in appendix A to part 60 of this... chlorinated fire retardants in the laminated foam a. A method approved by the Administrator i. Conduct the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Hazardous Air Pollutants: Flexible Polyurethane Foam Fabrication Operations Pt. 63, Subpt. MMMMM, Table 3... use chlorinated fire retardants in the laminated foam a. Method 26A in appendix A to part 60 of this... chlorinated fire retardants in the laminated foam a. A method approved by the Administrator i. Conduct the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Hazardous Air Pollutants: Flexible Polyurethane Foam Fabrication Operations Pt. 63, Subpt. MMMMM, Table 3... use chlorinated fire retardants in the laminated foam a. Method 26A in appendix A to part 60 of this... chlorinated fire retardants in the laminated foam a. A method approved by the Administrator i. Conduct the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Hazardous Air Pollutants: Flexible Polyurethane Foam Fabrication Operations Pt. 63, Subpt. MMMMM, Table 3... use chlorinated fire retardants in the laminated foam a. Method 26A in appendix A to part 60 of this... chlorinated fire retardants in the laminated foam a. A method approved by the Administrator i. Conduct the...
Efficient generation of 3D hologram for American Sign Language using look-up table
NASA Astrophysics Data System (ADS)
Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo
2010-02-01
American Sign Language (ASL) is one of the languages that most helps hearing-impaired people communicate. Current 2-D broadcasting and 2-D movies use ASL to convey information, help viewers understand the situation of a scene, and translate foreign languages. Because of this usefulness, ASL will not disappear in future three-dimensional (3-D) broadcasting or 3-D movies. Several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method. However, these methods have drawbacks: they either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method consists largely of five steps: construction of the LUT for each ASL image, extraction of characters from scripts or the situation, retrieval of the fringe patterns for those characters from the ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.
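The core saving in the LUT approach described above is that the fringe pattern for each ASL character is computed once and then reused by simple addition into each frame's hologram. The sketch below illustrates only that composition step, under the assumption that holograms add linearly; random arrays stand in for real precomputed fringe patterns, and the function names are illustrative, not from the paper.

```python
import numpy as np

def build_lut(chars, shape, rng):
    # One precomputed "fringe pattern" per character; computed once, reused
    # for every frame. Random arrays here are stand-ins for real CGH fringes.
    return {c: rng.random(shape) for c in chars}

def compose(scene_hologram, text, lut):
    # Per-frame cost is just array additions: no per-character recomputation.
    out = scene_hologram.copy()
    for c in text:
        out += lut[c]
    return out
```

Composing a frame then reduces to indexing the table for each character in the current caption and summing, which is what makes per-frame generation cheap.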
Leske, David A; Hatt, Sarah R; Liebermann, Laura; Holmes, Jonathan M
2016-02-01
We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as "success," "partial success," or "failure" based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software.
A 5 Year Study of Carbon Fluxes from a Restored English Blanket Bog
NASA Astrophysics Data System (ADS)
Worrall, F.; Dixon, S.; Evans, M.
2014-12-01
This study aimed to measure the effects of ecological restoration on blanket peat water table depths, DOC concentrations, and CO2 fluxes. In April 2003 the Bleaklow Plateau, an extensive area of deep blanket peat in the Peak District National Park, northern England, was devegetated by a wildfire. As a result the area was selected for large-scale restoration. We report a 5-year study of four restored sites in comparison to both an unrestored, bare peat control and a vegetated control that did not require restoration. Results suggested that sites with revegetation alongside slope stabilisation had the highest rates of photosynthesis and were the largest net (daylight hours) sinks of CO2. Bare sites were the largest net sources of CO2 and had the deepest water table depths. Sites with gully wall stabilisation were 5-8 times more likely to be net CO2 sinks than the bare sites. Revegetation without gully flow blocking using plastic dams did not have a large effect on water table depths in and around the gullies investigated, whereas a blocked gully had water table depths comparable to a naturally revegetating gully. A ten-centimetre lowering in water table depth decreased the probability of observing a net CO2 sink, on a given site, by up to 30%. With respect to DOC, the study showed that the average soil porewater DOC concentration on the restored sites rose significantly over the 5-year study, representing a 34% increase relative to the vegetated control and an 11% increase relative to the unrestored, bare control. Soil pore water concentrations were not significantly different from surface runoff DOC concentrations, and therefore restoration as conducted in this study would have contributed to water quality deterioration in the catchment.
The most important conclusion of this research was that restoration interventions were apparently effective at increasing the likelihood of net CO2 sink behaviour and raising water tables on degraded, climatically marginal blanket bog. However, had water table restoration been conducted alongside revegetation then a significant decline in DOC concentrations could have also been realised.
Updated Budget Projections: 2016 to 2026
2016-03-01
flexibility to use tax and spending policies to respond to unexpected challenges. The probability of a fiscal crisis in the United States would...baseline, after accounting for all of the government’s borrowing needs, shows debt held by the public rising from $13.1 trillion at the end of 2015 to...2016 remain unchanged from last year’s 18.2 percent (see Table 3). Receipts of individual income taxes are expected to edge up by 0.1 percentage point
NASA Astrophysics Data System (ADS)
Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.
2014-06-01
The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.
Proficiency Scaling Based on Conditional Probability Functions for Attributes
1993-10-01
so we provide you with your diagnosed cognitive state on the right-most side of the above table. We recommend that you practice word problems and... pay more attention to the meaning of principles, theorems, and properties. You should also follow the instructions more carefully. II. A report for a...
Tables of Calculated Transition Probabilities for the A-X System of OH
1981-06-01
June 1981. US Army Armament Research and Development Command, Ballistic Research Laboratory, Aberdeen Proving Ground, Maryland. Approved for public release.
Inflight fuel tank temperature survey data
NASA Technical Reports Server (NTRS)
Pasion, A. J.
1979-01-01
Statistical summaries of the fuel and air temperature data for twelve different routes and for different aircraft models (B747, B707, DC-10 and DC-8), are given. The minimum fuel, total air and static air temperature expected for a 0.3% probability were summarized in table form. Minimum fuel temperature extremes agreed with calculated predictions and the minimum fuel temperature did not necessarily equal the minimum total air temperature even for extreme weather, long range flights.
Analysis of Whole-Sky Imager Data to Determine the Validity of PCFLOS models
1992-12-01
included in the data sample... Data arrangement for an r x c contingency table... ARIMA models estimated for each... satellites. This model uses the multidimensional Boehm Sawtooth Wave Model to establish climatic probabilities through repetitive simulations of... analysis techniques to develop an ARIMA model for each direction at the Columbia and Kirtland sites. Then, the models can be compared and analyzed to...
On NEO Threat Mitigation (Preprint)
2007-10-15
Yucatan event is at least a major contributor, if not the direct cause, of the extinction of the dinosaurs. Moreover, it is clear that NEO impacts can... extinction of the human race. The probability of these events decreases with the severity of the impact and the size (mass) of the NEO. Figure 1 and Table 1... thus, it is more reasonable to infer that all the large NEOs can be catalogued within a reasonable time, while smaller and less consequential...
Parameterization of Injury Victim Condition as a Function of Time
1976-05-01
degree burns. HEAD AND NECK: Cerebral injury with headache; dizziness; no loss of consciousness. "Whiplash" complaint with no anatomical or... "Probability of Survival Tables" was derived for three injury categories: Compound and Comminuted Fractures of the Head... Injury Category (Ambulance / Helicopter / No Facilities): Head Injuries .36 / .34 (no treatment); Face Injuries .19 / .15; Extremity Injuries .16 / .13...
Wickham, Hadley; Hofmann, Heike
2011-12-01
We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams. © 2011 IEEE
Wide Body Aircraft Demand Potential at Washington National Airport,
1977-09-01
the city-pair markets. Probably the most important feature of FA-7 is the fact that it allows for investigation of the behavior of airlines in response to changes... (Figure labels: financial information; flights by aircraft type; fuel consumed; passengers carried; total aircraft usage)... coded data. Sample... the various levels of operations. Similar behavior can be identified in the simultaneous increase of both types of aircraft at Dulles. Tables 1A through...
2007-07-01
RBL was observed to be higher than RIL, due to the presence of hot combustion products and the effect of cavity wall temperature. Table 1... Vol. 17, No. 4, 2001, pp. 869-877. 3. Yu, K., Wilson, K., and Schadow, K., "Effect of Flame-Holding Cavities on Supersonic-Combustion Performance"... combustion products and is relatively rich with main fuel only. Consequently, additional fuel injection into the cavity increases the probability of...
1987-08-01
conflicts between the facility and local standards, and to evaluate the probability of conflict resulting from any planned expansions. Visual Resources... on staffing and housing for the facility itself is contained in Table 2-7. Additional data on the socio-economic background of Ebeye, including... resources, noise, and socio-economics. As a result of that evaluation, consequences were assigned to one of three categories: insignificant, mitigable, or...
Recognizing human actions by learning and matching shape-motion prototype trees.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2012-03-01
A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set.
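The speedup claimed in the abstract above comes from precomputing all prototype-to-prototype distances once, so that sequence matching becomes table indexing instead of repeated frame-to-frame distance computation. A minimal sketch of that idea, with Euclidean distance standing in for the paper's joint shape-motion distance and function names that are illustrative only:

```python
import numpy as np

def distance_table(prototypes):
    """Precompute the K x K table of prototype-to-prototype distances."""
    k = len(prototypes)
    d = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            d[i, j] = np.linalg.norm(prototypes[i] - prototypes[j])
    return d

def sequence_distance(seq_a, seq_b, table):
    """Distance between two equal-length prototype-index sequences.

    Each per-frame distance is a table lookup, not a recomputation.
    """
    return sum(table[i, j] for i, j in zip(seq_a, seq_b))
```

With K prototypes the table costs O(K^2) once; every subsequent sequence comparison is then linear in sequence length with constant-time lookups.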
The Gas Hills uranium district and some probable controls for ore deposition
Zeller, Howard Davis
1957-01-01
Uranium deposits occur in the upper coarse-grained facies of the Wind River formation of Eocene age in the Gas Hills district of the southern part of the Wind River Basin. Some of the principal deposits lie below the water table in the unoxidized zone and consist of uraninite and coffinite occurring as interstitial fillings in irregular blanket-like bodies. In the near-surface deposits that lie above the water table, the common yellow uranium minerals consist of uranium phosphates, silicates, and hydrous oxides. The black unoxidized uraninite-coffinite ores show enrichment of molybdenum, arsenic, and selenium when compared to the barren sandstone. Probable geologic controls for ore deposits include: 1) permeable sediments that allowed passage of ore-bearing solutions; 2) numerous faults that acted as impermeable barriers impounding the ore-bearing solutions; 3) locally abundant pyrite, carbonaceous material, and natural gas containing hydrogen sulfide that might provide a favorable environment for precipitation of uranium. Field and laboratory evidence indicate that the uranium deposits in the Gas Hills district are very young and related to the post-Miocene to Pleistocene regional tilting to the south associated with the collapse of the Granite Mountains fault block. This may have stopped or reversed ground water movement from a northward (basinward) direction, and alkaline ground water rich in carbonate could have carried the uranium into the favorable environment that induced precipitation.
Abbott, Brian R
Twenty-two jurisdictions in the United States permit the involuntary civil confinement of sexual offenders upon expiration of their criminal sentence and, if committed, these individuals face possible lifetime commitment. One of the legal requirements that psychologists must address in sexually violent predator evaluations is the likelihood that an individual will engage in dangerous sexual behavior; consideration of the probabilities for sexual recidivism contained in actuarial experience tables best addresses this inquiry. Clinicians find it increasingly difficult to affirm the likelihood threshold in the face of decreasing base rates and score-wise probability estimates for sexual recidivism reported in contemporary actuarial experience tables. The Violence Risk Appraisal Guide-Revised (VRAG-R) has been promoted to assess sexually violent predators because it has been presented as a more accurate predictor of sexual recidivism whose results more likely satisfy the legal standard of sexual dangerousness. This article conducts an in-depth analysis of the predictive and psychometric properties of the VRAG-R that are most relevant to its fit when addressing the sexual dangerousness standard prescribed by SVP laws. Recommendations for future research are offered to improve the fit of the VRAG-R to the legal inquiry of sexual dangerousness, and implications for using the current iteration of the VRAG-R in forensic practice are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Landscape of α preformation probability for even-even nuclei in medium mass region
NASA Astrophysics Data System (ADS)
Qian, Yibin; Ren, Zhongzhou
2018-03-01
The behavior of the α-cluster preformation probability in α decay is a rich source of structural information, such as clustering, pairing, and shell evolution in heavy nuclei. Meanwhile, the experimental α-decay data have very recently been compiled in the newest table, NUBASE2016. Through a least-squares fit to the available experimental data on nuclear charge radii plus the neutron skin thickness, we obtain a new set of parameters for the two-parameter Fermi nucleon density distributions in target nuclei. Subsequently, we use these refreshed inputs in the density-dependent cluster model to extract the α preformation factor (P_α) for a large range of medium-mass α emitters with N < 126 from the newest data table. Besides checking the supposed smooth pattern of P_α in the open-shell region, special attention has been paid to exotic α-decaying nuclei around the Z = 50 and N = 82 shell closures. Moreover, the correlation between the α preformation factor and the microscopic correction of the nuclear mass, corresponding to the effects of shell, pairing, and deformation, is investigated in particular, to pursue valuable knowledge of the P_α pattern over the nuclide chart. The variation of the α preformation factor with neutron-proton asymmetry is then examined and discussed.
Heenan, Jeffrey; Ntarlagiannis, Dimitris; Slater, Lee; Beaver, Carol; Rossbach, S.; Revil, A.; Atekwana, E.A.; Bekins, Barbara A.
2017-01-01
We present evidence of a geobattery associated with microbial degradation of a mature crude oil spill. Self-potential measurements were collected using a vertical array of nonpolarizing electrodes, starting at the land surface and passing through the smear zone where seasonal water table fluctuations have resulted in the coating of hydrocarbons on the aquifer solids. These passive electrical potential measurements exhibit a dipolar pattern associated with a current source. The anodic and cathodic reactions of this natural battery occur below and above the smear zone, respectively. The smear zone is characterized by high magnetic susceptibility values associated with the precipitation of semiconductive magnetic iron phase minerals as a by-product of biodegradation, facilitating electron transfer between the anode and the cathode. This geobattery response appears to have a transient nature, changing on a monthly scale, probably resulting from chemical and physical changes in subsurface conditions such as water table fluctuations.
Fluvial valleys in the heavily cratered terrains of Mars: Evidence for paleoclimatic change?
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Baker, V. R.
1993-01-01
Whether the formation of the Martian valley networks provides unequivocal evidence for drastically different climatic conditions remains debatable. Recent theoretical climate modeling precludes the existence of a temperate climate early in Mars' geological history. An alternative hypothesis suggests that Mars had a globally higher heat flow early in its geological history, bringing water tables to within 350 m of the surface. While a globally higher heat flow would initiate ground water circulation at depth, the valley networks probably required water tables to be even closer to the surface. Additionally, it was previously reported that the clustered distribution of the valley networks within terrain types, particularly in the heavily cratered highlands, suggests regional hydrological processes were important. The case for localized hydrothermal systems is summarized and estimates of both erosion volumes and of the implied water volumes for several Martian valley systems are presented.
Ohno-Machado, Lucila; Vinterbo, Staal; Dreiseitl, Stephan
2002-01-01
Protecting individual data in disclosed databases is essential. Data anonymization strategies can produce table ambiguation by suppression of selected cells. Using table ambiguation, different degrees of anonymization can be achieved, depending on the number of individuals that a particular case must become indistinguishable from. This number defines the level of anonymization. Anonymization by cell suppression does not necessarily prevent inferences from being made from the disclosed data. Preventing inferences may be important to preserve confidentiality. We show that anonymized data sets can preserve descriptive characteristics of the data, but might also be used for making inferences on particular individuals, which is a feature that may not be desirable. The degradation of predictive performance is directly proportional to the degree of anonymity. As an example, we report the effect of anonymization on the predictive performance of a model constructed to estimate the probability of disease given clinical findings.
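The anonymization level described in this abstract, the number of individuals a case becomes indistinguishable from after cell suppression, can be checked mechanically. A minimal sketch under assumed conventions (suppressed cells are represented as `None`, and two records match when every disclosed cell agrees); the function names are illustrative, not from the paper:

```python
def matches(a, b):
    """True if records a and b are indistinguishable on disclosed cells.

    A suppressed cell (None) is compatible with any value.
    """
    return all(x is None or y is None or x == y for x, y in zip(a, b))

def anonymity_level(record, table):
    """Number of OTHER records this record is indistinguishable from."""
    return sum(1 for other in table if other is not record and matches(record, other))
```

A record with level 0 remains uniquely identifiable despite suppression, which is exactly the situation cell suppression is meant to prevent; as the abstract notes, even a positive level does not by itself block all inferences.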
A photometric study of the Orion OB 1 association. 1: Observational data
NASA Technical Reports Server (NTRS)
Warren, W. H., Jr.; Hesser, J. E.
1976-01-01
An extensive catalog of observational data is presented for stars in the region of the young stellar association Orion OB 1. In addition to new photoelectric observations obtained on the uvbyβ and UBV systems, photoelectric and spectroscopic data were compiled for the stars observed and for several bright members of the association having available photometric indices. Mean weighted values were computed for the uvbyβ and UBV data and are tabulated in summary tables which include all references for individual values. These tables are expected to be reasonably complete for association members earlier than spectral type A0. From an analysis of currently available proper motion, radial velocity, and photometric data, membership criteria were derived and qualitative membership probabilities for 526 stars were summarized. A set of charts is included for assistance in identification of the program stars in all regions of the association.
NASA Astrophysics Data System (ADS)
Acharya, S.; Mylavarapu, R.; Jawitz, J. W.
2012-12-01
In shallow unconfined aquifers, the water table usually shows a distinct diurnal fluctuation pattern corresponding to the twenty-four hour solar radiation cycle. This diurnal water table fluctuation (DWTF) signal can be used to estimate the groundwater evapotranspiration (ETg) by vegetation, a method known as the White [1932] method. Water table fluctuations in shallow phreatic aquifers are controlled by two distinct storage parameters, the drainable porosity (or specific yield) and the fillable porosity. Yet it is implicitly assumed in most studies that these two parameters are equal, unless the hysteresis effect is considered. The White-based method available in the literature likewise relies on a single drainable porosity parameter to estimate ETg. In this study, we present a modification of the White-based method to estimate ETg from the DWTF using separate drainable (λd) and fillable (λf) porosity parameters. Separate analytical expressions based on successive steady-state moisture profiles are used to estimate λd and λf, instead of the commonly employed hydrostatic-moisture-profile approach. The modified method is then applied to estimate ETg using DWTF data observed in a field in northeast Florida, and the results are compared with ET estimates from the standard Penman-Monteith equation. The modified method yielded significantly better estimates of ETg than the previously available method that used only a single, hydrostatic-moisture-profile-based λd. Furthermore, the modified method can also be used to estimate ETg even during rainfall events, where it again produced significantly better estimates than the single-λd-parameter method.
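A White-type estimate with two storage parameters, as this abstract describes, can be sketched as follows. This is an illustrative simplification, not the authors' formulation: the nighttime water-table recovery rate r (scaled by the fillable porosity, since a rising table fills pores) gives the recharge term, the net daily change s (scaled by the drainable porosity, since a falling table drains pores) gives the storage term, and the exact partitioning used in the paper may differ.

```python
def etg_white_modified(r, s, lam_f, lam_d):
    """Toy two-parameter White-method groundwater ET estimate.

    r     : nighttime water-table recovery rate (m/h, positive)
    s     : net water-table change over 24 h (m, negative for a decline)
    lam_f : fillable porosity (applied to the rising-table recharge term)
    lam_d : drainable porosity (applied to the net storage change)
    Returns ETg in metres of water per day.
    """
    recharge = lam_f * 24.0 * r  # groundwater inflow sustained all day
    storage = lam_d * s          # net storage change over the day
    return recharge - storage    # a net decline (s < 0) adds to ETg
```

Setting lam_f == lam_d recovers the classic single-parameter White form ET = Sy(24r - s).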
Stationary table CT dosimetry and anomalous scanner-reported values of CTDI{sub vol}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Robert L., E-mail: rdixon@wfubmc.edu; Boone, John M.
2014-01-15
Purpose: Anomalous scanner-reported values of CTDI_vol for stationary-phantom/table protocols (with reported CTDI_vol over 300% higher than the actual dose to the phantom) have been observed, well beyond the typical accuracy expected of CTDI_vol as a phantom dose. Recognition of these outliers as "bad data" is important to users of CT dose-index tracking systems (e.g., ACR DIR), and a method for recognition and correction is provided. Methods: Rigorous methods and equations are presented which describe the dose distributions for stationary-table CT. A comparison with formulae for scanner-reported values of CTDI_vol clearly identifies the source of these anomalies. Results: For the stationary table, use of the CTDI_100 formula (applicable to a moving phantom only) overestimates the dose due to extra scatter and also includes an overbeaming correction, both of which are nonexistent when the phantom (or patient) is held stationary. The reported DLP remains robust for the stationary phantom. Conclusions: The CTDI paradigm does not apply in the case of a stationary phantom, and simpler nonintegral equations suffice. A method of correcting the currently reported CTDI_vol using the approach-to-equilibrium formula H(a) and an overbeaming correction factor serves to scale the reported CTDI_vol values to more accurate levels for stationary-table CT, as well as serving as an indicator in the detection of "bad data."
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on the underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for the complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can lead to unreliable conclusions because it under-estimates the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that the average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistically narrow intervals. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.
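The bootstrap step can be illustrated with a generic percentile bootstrap for a statistic of multinomial cell counts. This is a minimal sketch of the general technique, not the paper's estimator, which resamples under the derived joint sampling distribution and uses Fisher-scoring MLEs.

```python
import random

def percentile_bootstrap_ci(counts, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of
    multinomial cell counts (a generic sketch, not the paper's method).

    counts : observed cell counts, e.g. [n11, n12, n21, n22]
    stat   : function mapping a list of counts to a scalar
    """
    rng = random.Random(seed)
    n = sum(counts)
    # expand counts to one cell label per observation for resampling
    labels = [i for i, c in enumerate(counts) for _ in range(c)]
    reps = []
    for _ in range(n_boot):
        boot = [0] * len(counts)
        for _ in range(n):
            boot[rng.choice(labels)] += 1
        reps.append(stat(boot))
    reps.sort()
    return (reps[int(n_boot * alpha / 2)],
            reps[int(n_boot * (1 - alpha / 2)) - 1])

# Example: 95% CI for the first cell probability from counts [30, 70]
lo, hi = percentile_bootstrap_ci([30, 70], lambda c: c[0] / sum(c))
```

The paper's point is that the width of such intervals depends on which sampling distribution generates the resamples; replacing the independent-multinomial resampling above with draws from the valid joint distribution widens the intervals.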
Lamy, Jean-Baptiste; Ugon, Adrien; Berthelot, Hélène
2016-01-01
Potential adverse effects (AEs) of drugs are described in their summary of product characteristics (SPCs), a textual document. Automatic extraction of AEs from SPCs is useful for detecting AEs and for building drug databases. However, this task is difficult because each AE is associated with a frequency that must be extracted and the presentation of AEs in SPCs is heterogeneous, consisting of plain text and tables in many different formats. We propose a taxonomy for the presentation of AEs in SPCs. We set up natural language processing (NLP) and table parsing methods for extracting AEs from texts and tables of any format, and evaluate them on 10 SPCs. Automatic extraction performed better on tables than on texts. Tables should be recommended for the presentation of the AEs section of the SPCs.
Zeng, Y
1987-01-01
Trends in marital status among women in China for the period 1950-1970 and for 1981 are analyzed using the multiple decrement life table method. The results confirm those obtained with traditional methods of data analysis. It is found that over the past 30 years, Chinese women have experienced a high rate of marriage and a low divorce rate. The significant increase in age at marriage and the lowering of the death rate have affected marital status at all ages. The development of a marital status life table permits the author to estimate current numbers of women in the four marital statuses of unmarried, currently married, widowed, and divorced by age and their future likelihood of changing marital status.
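The multiple decrement life table tracks a cohort exposed simultaneously to several competing transitions (here, marriage and death out of the never-married state). A minimal sketch with hypothetical rates, not the paper's Chinese data:

```python
def multiple_decrement_table(radix, rates):
    """Build a simple multiple-decrement life table.

    radix : initial cohort size (l0)
    rates : per-age list of dicts, cause -> probability of leaving the
            starting state by that cause within the age interval
    """
    rows, survivors = [], float(radix)
    for age, q in enumerate(rates):
        decrements = {cause: survivors * prob for cause, prob in q.items()}
        rows.append({"age": age, "survivors": survivors, **decrements})
        survivors -= sum(decrements.values())  # remain in the starting state
    return rows

# e.g. a cohort of never-married women subject to marriage and death
table = multiple_decrement_table(
    1000, [{"marriage": 0.10, "death": 0.01},
           {"marriage": 0.20, "death": 0.02}])
```

Each row splits the age interval's exits by cause, so summing a cause's column over all ages gives the lifetime number of transitions by that cause — the quantity behind estimates of future likelihoods of changing marital status.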
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedersen, K; Irwin, J; Sansourekidou, P
Purpose: To investigate the impact of the treatment table on skin dose for prone breast patients for whom the breast contacts the table, and to develop a method to decrease skin dose. Methods: We used a 12 cm stack of 15 cm × 15 cm solid water slabs to imitate the breast. Calibrated EBT3 radiochromic film was affixed to the bottom of the phantom. Treatments for 32 patients were analyzed to determine typical prone breast beam parameters. Based on the analysis, a field size and a range of gantry angles were chosen for the test beams. Three experimental setups were used. The first represented the patient setup currently used in our clinics, with the phantom directly on the table. The second was the skin-sparing setup, with a 1.5 cm Styrofoam slab between the phantom and the table. The third used a 7.5 cm Styrofoam slab to examine the extent of the skin-sparing potential. The calibration curve was applied to each film to determine dose. The percent difference in dose between the current and skin-sparing setups was calculated for each gantry angle and gantry angle pair. Results: Beams entering through the table showed a skin dose decrease ranging from 13%–30% with the addition of 7.5 cm of Styrofoam, while beams exiting through the table showed no significant difference. The addition of 1.5 cm of Styrofoam resulted in differences ranging from 0.5%–13%. Conclusion: The results demonstrate that skin in contact with the table receives increased dose from beams entering through the table. By creating separation between the breast and the table with Styrofoam, the skin dose can be lowered, but 1.5 cm did not fully mitigate the effect. Further investigation will be performed to identify a clinically practical thickness that maximizes this mitigation.
Postprocessing for character recognition using pattern features and linguistic information
NASA Astrophysics Data System (ADS)
Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi
1993-04-01
We propose a new method of post-processing for character recognition using pattern features and linguistic information. This method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters, and is characterized by having two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and the existence of characters with similar patterns; it is therefore not practical for a character recognition system to recognize all characters in detail. First, this post-processing method generates a candidate character table by recognizing the simplest features of characters. Then, it selects words corresponding to the characters in the candidate character table by referring to a word and grammar dictionary, before selecting suitable words. If the correct character is included in the candidate character table, this process can correct an error; if the character is not included, it cannot. In that case, the method uses linguistic information (the word and grammar dictionary) to infer characters missing from the candidate character table, and then verifies each inferred character by character recognition using complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used on the editorials of a Japanese newspaper (Asahi Shinbun).
Radiometric age map of Aleutian Islands
Wilson, Frederic H.; Turner, D.L.
1975-01-01
This map includes published, thesis, and open-file radiometric data available to us as of June, 1975. Some dates are not plotted because of inadequate location data in the original references. The map is divided into five sections, based on 1:1,000,000 scale enlargements of the National Atlas maps of Alaska. Within each section (e.g., southeastern Alaska), radiometric dates are plotted and keyed to 1:250,000 scale quadrangles. Accompanying each map section is table 1, listing map numbers and the sample identification numbers used in DGGS Special Report 10: “Radiometric Dates from Alaska—A 1975 Compilation”. The reader is referred to Special Report 10 for more complete information on location, rock type, dating method, and literature references for each age entry. A listing of dates in Special Report 10 which require correction or deletion is included as table 2. Corrected and additional entries are listed in table 3. The listings in tables 2 and 3 follow the format of Special Report 10. Table 4 is a glossary of abbreviations used for quadrangle name, rock type, mineral dated, and type of dating method used.
Radiometric age map of southcentral Alaska
Wilson, Frederic H.; Turner, D.L.
1975-01-01
This map includes published, thesis, and open-file radiometric data available to us as of June, 1975. Some dates are not plotted because of inadequate location data in the original references. The map is divided into five sections, based on 1:1,000,000 scale enlargements of the National Atlas maps of Alaska. Within each section (e.g., southeastern Alaska), radiometric dates are plotted and keyed to 1:250,000 scale quadrangles. Accompanying each map section is table 1, listing map numbers and the sample identification numbers used in DGGS Special Report 10: “Radiometric Dates from Alaska—A 1975 Compilation”. The reader is referred to Special Report 10 for more complete information on location, rock type, dating method, and literature references for each age entry. A listing of dates in Special Report 10 which require correction or deletion is included as table 2. Corrected and additional entries are listed in table 3. The listings in tables 2 and 3 follow the format of Special Report 10. Table 4 is a glossary of abbreviations used for quadrangle name, rock type, mineral dated, and type of dating method used.
Radiometric age map of southwest Alaska
Wilson, Frederic H.; Turner, D.L.
1975-01-01
This map includes published, thesis, and open-file radiometric data available to us as of June, 1975. Some dates are not plotted because of inadequate location data in the original references. The map is divided into five sections, based on 1:1,000,000 scale enlargements of the National Atlas maps of Alaska. Within each section (e.g., southeastern Alaska), radiometric dates are plotted and keyed to 1:250,000 scale quadrangles. Accompanying each map section is table 1, listing map numbers and the sample identification numbers used in DGGS Special Report 10: “Radiometric Dates from Alaska—A 1975 Compilation”. The reader is referred to Special Report 10 for more complete information on location, rock type, dating method, and literature references for each age entry. A listing of dates in Special Report 10 which require correction or deletion is included as table 2. Corrected and additional entries are listed in table 3. The listings in tables 2 and 3 follow the format of Special Report 10. Table 4 is a glossary of abbreviations used for quadrangle name, rock type, mineral dated, and type of dating method used.
Radiometric age map of southeast Alaska
Wilson, Frederic H.; Turner, D.L.
1975-01-01
This map includes published, thesis, and open-file radiometric data available to us as of June, 1975. Some dates are not plotted because of inadequate location data in the original references. The map is divided into five sections, based on 1:1,000,000 scale enlargements of the National Atlas maps of Alaska. Within each section (e.g., southeastern Alaska), radiometric dates are plotted and keyed to 1:250,000 scale quadrangles. Accompanying each map section is table 1, listing map numbers and the sample identification numbers used in DGGS Special Report 10: “Radiometric Dates from Alaska—A 1975 Compilation”. The reader is referred to Special Report 10 for more complete information on location, rock type, dating method, and literature references for each age entry. A listing of dates in Special Report 10 which require correction or deletion is included as table 2. Corrected and additional entries are listed in table 3. The listings in tables 2 and 3 follow the format of Special Report 10. Table 4 is a glossary of abbreviations used for quadrangle name, rock type, mineral dated, and type of dating method used.
Radiometric age map of northern Alaska
Wilson, Frederic H.; Turner, D.L.
1975-01-01
This map includes published, thesis, and open-file radiometric data available to us as of June, 1975. Some dates are not plotted because of inadequate location data in the original references. The map is divided into five sections, based on 1:1,000,000 scale enlargements of the National Atlas maps of Alaska. Within each section (e.g., southeastern Alaska), radiometric dates are plotted and keyed to 1:250,000 scale quadrangles. Accompanying each map section is table 1, listing map numbers and the sample identification numbers used in DGGS Special Report 10: “Radiometric Dates from Alaska—A 1975 Compilation”. The reader is referred to Special Report 10 for more complete information on location, rock type, dating method, and literature references for each age entry. A listing of dates in Special Report 10 which require correction or deletion is included as table 2. Corrected and additional entries are listed in table 3. The listings in tables 2 and 3 follow the format of Special Report 10. Table 4 is a glossary of abbreviations used for quadrangle name, rock type, mineral dated, and type of dating method used.
NASA Technical Reports Server (NTRS)
Nguyen, Huy H.; Martin, Michael A.
2004-01-01
The two most common approaches used to formulate thermodynamic properties of pure substances are fundamental (or characteristic) equations of state (Helmholtz and Gibbs functions) and a piecemeal approach described in Adebiyi and Russell (1992). This paper neither presents a different method of formulating thermodynamic properties of pure substances nor validates the aforementioned approaches. Rather, its purpose is to present a method for generating property tables from existing property packages and a method to facilitate the accurate interpretation of fluid thermodynamic property data from those tables. There are two parts to this paper. The first part shows how efficient and usable property tables were generated, with the minimum number of data points, using an aerospace industry standard property package. The second part describes an innovative interpolation technique that has been developed to properly obtain thermodynamic properties near the saturated liquid and saturated vapor lines.
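For context, plain table lookup and two-phase mixing can be sketched as below. These are the generic baselines, not the paper's actual interpolation scheme near the saturation lines, which is more elaborate; function names and table values here are hypothetical.

```python
import bisect

def interp_table(xs, ys, x):
    """Linearly interpolate y(x) from a monotone property table.

    xs : strictly increasing independent values (e.g. temperature)
    ys : corresponding property values (e.g. enthalpy)
    """
    i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def interp_two_phase(prop_f, prop_g, quality):
    """Inside the vapor dome, properties mix linearly in quality x:
    prop = prop_f + x * (prop_g - prop_f), with f/g the saturated
    liquid/vapor values."""
    return prop_f + quality * (prop_g - prop_f)
```

Naive single-phase interpolation across the saturation boundary blends liquid and vapor table entries and produces nonphysical values, which is the failure mode a saturation-aware technique must avoid.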
A Shot Number Based Approach to Performance Analysis in Table Tennis
Yoshida, Kazuto; Yamada, Koshi
2017-01-01
The current study proposes a novel approach that improves the conventional performance analysis in table tennis by introducing the concept of the frequency, or number of shots, at each shot number. The improvements over the conventional method are as follows: better accuracy in the evaluation of players' skills and tactics, additional insights into scoring and returning skills, and ease of understanding the results with a single criterion. A performance analysis of matches played at the 2012 Summer Olympics in London was conducted using the proposed method. The results showed some effects of the shot number and gender differences in table tennis. Furthermore, comparisons were made between Chinese players and players from other countries, which shed light on the skills and tactics of the Chinese players. The present findings demonstrate that the proposed method provides useful information and has some advantages over the conventional method. PMID:28210334
NASA Astrophysics Data System (ADS)
Baawain, Mahad S.; Al-Futaisi, Ahmed M.; Ebrahimi, A.; Omidvarborna, Hamid
2018-04-01
A Time Domain Electromagnetic (TDEM) survey as well as drilling investigations were conducted to identify possible contamination of a dumping site in an unsaturated zone located in Barka, Oman. The method was applied to evaluate the conductivity of contaminated plumes in a hot, arid/semiarid region, where high temperatures commonly range between 35 and 50 °C. The drilling investigation was carried out over the survey area to verify the geophysical results. The low-resistivity zone (<80 Ωm) encountered near the subsurface indicated plume migration caused by liquid waste disposal activities. The combination of the TDEM survey results with the lithology of piezometers showed that higher resistivity (>90 Ωm) was correlated with compacted or cemented gravels and cobbles, particularly medium dense to very dense gravels and cobbles. Additionally, the TDEM profiles suggested that the plume migration followed a preferential flow path. The resistivity range of 40-80 Ωm was considered to indicate contaminated areas; however, the drilling results showed a similar resistivity domain at depths >70 m below the water table for some profiles (BL1, BL2, BL3, BL4 and BL5). The combined results of the drilled wells, piezometers, and TDEM apparent resistivity maps showed a coincidence of the migrated leachate plume and the water table. The zone of probable contamination was predicted to lie at a depth of around 65 m, with horizontal offsets of 0-280 m, 80-240 m, and 40-85 m in the sounding traverses BL4, BL6 and BL7, respectively.
Survival rates among Seventh Day Adventists compared with the general population in Poland.
Jedrychowski, W; Tobiasz-Adamczyk, B; Olma, A; Gradzikiewicz, P
1985-01-01
The purpose of this work was to test the hypothesis that the survival rate is higher among Seventh Day Adventists (SDA) than in the general population of Poland, because of the strictly respected customs adhered to by members of this church community, such as abstinence from smoking and alcohol. The data on life expectancy in the SDA community covered a total of 236 members of this denomination in Kraków (86 males and 150 females). The survival probability rates were estimated by the life table method, separately for men and women, and were subsequently compared with the corresponding parameters of the Polish Life Tables. Over the 10-year study period there were 11 deaths among males and 24 among females. Mean age at death was 71.9 years among men and 75.1 years among women. The survival curves traced over the age groups of both sexes of SDA members were fairly similar, but they were markedly higher than in the general population of Poland. In the general population the survival rates for people over 40 years old were higher in females than in males, whereas no corresponding sex differences were observed among SDA members. The greater life-expectancy benefit is thus gained by SDA men in comparison with men in the general population, which is attributable to their abstinence from very harmful habits that are otherwise more widespread among men.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-4 Table F-4 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-4 Table F-4 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-6 Table F-6 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Estimated Mass Concentration... Equivalent Methods for PM2.5 Pt. 53, Subpt. F, Table F-6 Table F-6 to Subpart F of Part 53—Estimated Mass... (µm) Test Sampler Fractional Sampling Effectiveness Interval Mass Concentration (µg/m3) Estimated Mass...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Pollutants for Stationary Combustion Turbines Pt. 63, Subpt. YYYY, Table 3 Table 3 to Subpart YYYY of Part 63... points AND Method 1 or 1A of 40 CFR part 60, appendix A § 63.7(d)(1)(i) if using an air pollution control device, the sampling site must be located at the outlet of the air pollution control device. c. determine...
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
Non-invasive water-table imaging with joint DC-resistivity/microgravity/hydrologic-model inversion
NASA Astrophysics Data System (ADS)
Kennedy, J.; Macy, J. P.
2017-12-01
The depth of the water table, and fluctuations thereof, is a primary concern in hydrology. In riparian areas, the water table controls when and where vegetation grows. Fluctuations in the water table depth indicate changes in aquifer storage and variation in ET, and may also be responsible for the transport and degradation of contaminants. In the latter case, installation of monitoring wells is problematic because of the potential to create preferential flow pathways. We present a novel method for non-invasive water table monitoring using combined DC resistivity and repeat microgravity data. Resistivity profiles provide spatial resolution, but a quantifiable relation between resistivity changes and aquifer-storage changes depends on a petrophysical relation (typically, Archie's Law), with additional parameters and therefore uncertainty. Conversely, repeat microgravity data provide a direct, quantifiable measurement of aquifer-storage change but lack depth resolution. We show how these two geophysical measurements, together with an unsaturated-zone flow model (Hydrogeosphere), effectively constrain the water table position and help identify groundwater-flow model parameters. A demonstration of the method is made using field data collected during the historic 2014 pulse flow in the Colorado River Delta, which shows that geophysical data can effectively constrain a coupled surface-water/groundwater model used to simulate the potential for riparian vegetation germination and recruitment.
League tables and school effectiveness: a mathematical model.
Hoyle, Rebecca B; Robinson, James C
2003-01-01
'School performance tables', an alphabetical list of secondary schools along with aggregates of their pupils' performances in national tests, have been published in the UK since 1992. Inevitably, the media have responded by publishing ranked 'league tables'. Despite concern over the potentially divisive effect of such tables, the current government has continued to publish this information in the same form. The effect of this information on standards and on the social make-up of the community has been keenly debated. Since there is no control group available that would allow us to investigate this issue directly, we present here a simple mathematical model. Our results indicate that, while random fluctuations from year to year can cause large distortions in the league-table positions, some schools still establish themselves as 'desirable'. To our surprise, we found that 'value-added' tables were no more accurate than tables based on raw exam scores, while a different method of drawing up the tables, in which exam results are averaged over a period of time, appears to give a much more reliable measure of school performance. PMID:12590748
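The qualitative claim, that year-to-year noise distorts single-year rankings while multi-year averages are more stable, can be illustrated with a toy simulation. This is a hypothetical sketch (made-up quality and noise values), not the paper's actual model equations.

```python
import random

def league_table(true_quality, noise_sd, rng):
    """Rank schools (best first) by one year's noisy exam scores:
    observed score = true quality + Gaussian noise."""
    scores = [q + rng.gauss(0.0, noise_sd) for q in true_quality]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def averaged_league_table(true_quality, noise_sd, years, rng):
    """Rank schools by exam scores summed over several years; averaging
    damps the noise in the mean score by a factor of sqrt(years)."""
    totals = [0.0] * len(true_quality)
    for _ in range(years):
        for i, q in enumerate(true_quality):
            totals[i] += q + rng.gauss(0.0, noise_sd)
    return sorted(range(len(totals)), key=lambda i: -totals[i])
```

Running `league_table` repeatedly with noise comparable to the quality gaps shows large swings in position for middling schools, while `averaged_league_table` recovers the true ordering far more often, mirroring the paper's finding that time-averaged tables are a more reliable measure.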
Method For Growth of Crystal Surfaces and Growth of Heteroepitaxial Single Crystal Films Thereon
NASA Technical Reports Server (NTRS)
Powell, J. Anthony (Inventor); Larkin, David J. (Inventor); Neudeck, Philip G. (Inventor); Matus, Lawrence G. (Inventor)
2000-01-01
A method of growing atomically-flat surfaces and high quality low-defect crystal films of semiconductor materials and fabricating improved devices thereon is discussed. The method is also suitable for growing films heteroepitaxially on substrates that are different than the film. The method is particularly suited for growth of elemental semiconductors (such as Si), compounds of Groups III and V elements of the Periodic Table (such as GaN), and compounds and alloys of Group IV elements of the Periodic Table (such as SiC).
The impact of heterogeneity in individual frailty on the dynamics of mortality.
Vaupel, J W; Manton, K G; Stallard, E
1979-08-01
Life table methods are developed for populations whose members differ in their endowment for longevity. Unlike standard methods, which ignore such heterogeneity, these methods use different calculations to construct cohort, period, and individual life tables. The results imply that standard methods overestimate current life expectancy and potential gains in life expectancy from health and safety interventions, while underestimating rates of individual aging, past progress in reducing mortality, and mortality differentials between pairs of populations. Calculations based on Swedish mortality data suggest that these errors may be important, especially in old age.
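The core distinction the authors draw, that the cohort (population) hazard differs from any individual's hazard when frailty varies, can be sketched with the standard gamma-frailty identity. This is a textbook formulation, not necessarily the paper's exact parameterization: if individual hazards are z·μ(t) with frailty z gamma-distributed with mean 1 and variance σ², the cohort survival is S(t) = (1 + σ²·M(t))^(−1/σ²), where M(t) is the cumulative baseline hazard.

```python
import math

def cohort_survival(base_hazard, sigma2, t, dt=0.001):
    """Population survival to time t under gamma-distributed frailty.

    base_hazard : function giving the baseline hazard mu(t)
    sigma2      : variance of the frailty z (mean fixed at 1)
    """
    steps = int(round(t / dt))
    # cumulative baseline hazard M(t) by simple left-endpoint quadrature
    M = sum(base_hazard(i * dt) * dt for i in range(steps))
    if sigma2 == 0.0:
        return math.exp(-M)  # homogeneous cohort: standard life table result
    return (1.0 + sigma2 * M) ** (-1.0 / sigma2)
```

Because frailer individuals die first, the heterogeneous cohort's survival exceeds the homogeneous cohort's at the same baseline hazard, which is exactly why standard methods that ignore heterogeneity misread the surviving population's mortality.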
Yager’s ranking method for solving the trapezoidal fuzzy number linear programming
NASA Astrophysics Data System (ADS)
Karyati; Wutsqa, D. U.; Insani, N.
2018-03-01
In previous research, the authors studied the fuzzy simplex method for trapezoidal fuzzy number linear programming based on Maleki's ranking function. We established results concerning the optimality conditions of the fuzzy simplex method, the fuzzy Big-M method, the fuzzy two-phase method, and sensitivity analysis. In this research, we study the fuzzy simplex method based on another ranking function, Yager's ranking function, and investigate the corresponding optimality conditions. The results show that Yager's ranking function does not behave like Maleki's: with Yager's function, the simplex method cannot work as well as with Maleki's function. Under Yager's function, the subtraction of two equal fuzzy numbers is not equal to zero. This makes the optimal fuzzy simplex tableau undetectable; as a result, the fuzzy simplex iteration stalls and does not reach the optimum solution.
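The failure mode described, that A ⊖ A is not the crisp zero under standard fuzzy arithmetic, is easy to demonstrate. The sketch below uses the average-of-endpoints index often attributed to Yager for a trapezoidal number (a, b, c, d); the paper's exact ranking function may differ.

```python
def yager_rank(tfn):
    """Ranking index of a trapezoidal fuzzy number (a, b, c, d):
    the mean of its four defining points (one common Yager-style index)."""
    return sum(tfn) / 4.0

def tfn_sub(x, y):
    """Standard fuzzy subtraction of trapezoidal numbers:
    (a1 - d2, b1 - c2, c1 - b2, d1 - a2)."""
    return (x[0] - y[3], x[1] - y[2], x[2] - y[1], x[3] - y[0])

A = (1.0, 2.0, 3.0, 4.0)
diff = tfn_sub(A, A)  # not the crisp zero (0, 0, 0, 0)
```

Here `diff` is (-3, -1, 1, 3): its ranking index is 0, yet it is a genuinely fuzzy quantity. An optimality test that expects exact zeros in the reduced costs can therefore fail to recognize the optimal tableau, which is the stalling behavior reported in the abstract.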
A periodic table of coiled-coil protein structures.
Moutevelis, Efrosini; Woolfson, Derek N
2009-01-23
Coiled coils are protein structure domains with two or more alpha-helices packed together via interlacing of side chains known as knob-into-hole packing. We analysed and classified a large set of coiled-coil structures using a combination of automated and manual methods. This led to a systematic classification that we termed a "periodic table of coiled coils," which we have made available at http://coiledcoils.chm.bris.ac.uk/ccplus/search/periodic_table. In this table, coiled-coil assemblies are arranged in columns with increasing numbers of alpha-helices and in rows of increased complexity. The table provides a framework for understanding possibilities in and limits on coiled-coil structures and a basis for future prediction, engineering and design studies.
ERIC Educational Resources Information Center
Zou, Junhua; Liu, Qingtang; Yang, Zongkai
2012-01-01
Based on Competence Motivation Theory (CMT), a Moodle course for schoolchildren's table tennis learning was developed (The URL is http://www.bssepp.com, and this course allows guest access). The effects of the course on students' knowledge, perceived competence and interest were evaluated through quantitative methods. The sample of the study…
New graphic methods for determining the depth and thickness of strata and the projection of dip
Palmer, Harold S.
1919-01-01
Geologists, both in the field and in the office, frequently encounter trigonometric problems the solution of which, though simple enough, is somewhat laborious by the use of trigonometric and logarithmic tables. Charts, tables, and diagrams of various types for facilitating the computations have been published, and a new method may seem to be a superfluous addition to the literature.
Cambered Jet-Flapped Airfoil Theory with Tables and Computer Programs for Application.
1977-09-01
influence function which is a parametric function of the jet-momentum coefficient. In general, the integrals involved must be evaluated by numerical methods. Tables of the necessary influence functions are given in the report.
Graphical CONOPS Prototype to Demonstrate Emerging Methods, Processes, and Tools at ARDEC
2013-07-17
Concept Engineering Framework (ICEF): an extensive literature review was conducted to discover metrics that exist for evaluating concept engineering. Listed tables include a mapping of language to ICEF to SysML, artifact metrics (Table 5), and collaboration metrics (Table 6).
NASA Astrophysics Data System (ADS)
Montanari, C. C.; Miraglia, J. E.
2018-01-01
In this contribution we present ab initio results for ionization total cross sections, probabilities at zero impact parameter, and impact parameter moments of order +1 and -1 of Ne, Ar, Kr, and Xe by proton impact in an extended energy range from 100 keV up to 10 MeV. The calculations were performed using the continuum distorted wave eikonal initial state (CDW-EIS) approximation for energies up to 1 MeV, and the first Born approximation for higher energies. The convergence of the CDW-EIS results to the first Born approximation above 1 MeV is clear in the present results. Our inner-shell ionization cross sections are compared with the available experimental data and with the ECPSSR results. We also include the values of the ionization probabilities at the origin and their impact parameter dependence. These values have been employed in multiple-ionization calculations, showing a very good description of the experimental data. Tables of the ionization probabilities are presented, disaggregated by initial bound state, considering all the shells for Ne and Ar, the M-N shells of Kr, and the N-O shells of Xe.
Schuhbäck, A; Kolwelter, J; Achenbach, S
2016-08-01
Apart from the Diamond-Forrester classification, which is widely used particularly in the USA for the pretest probability of coronary artery disease, other scores also exist, such as an updated version of the classification table by Genders et al., the Morise score and the Duke clinical risk score. These scores estimate the probability of coronary artery disease, defined as the presence of at least one high-grade stenosis, based on symptom characteristics, age, gender and other parameters. All of the scores were derived from patient cohorts in which invasive coronary angiography had been performed for clinical reasons. It has subsequently been shown that these scores, especially those developed several decades ago, substantially overestimate the pretest probability of coronary artery disease. When these risk scores are applied to patients for whom a non-invasive work-up of suspected coronary artery disease is planned, for example by coronary computed tomography (CT) angiography, the expected prevalence of significant coronary stenosis will be overestimated. This, in turn, influences the test characteristics and the significance of the non-invasive examination (positive and negative predictive values) and needs to be taken into account when interpreting test results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sublet, J.-Ch., E-mail: jean-christophe.sublet@ukaea.uk; Eastwood, J.W.; Morgan, J.G.
Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries.
The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.
FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.
2017-01-01
Estimated Depth to Ground Water and Configuration of the Water Table in the Portland, Oregon Area
Snyder, Daniel T.
2008-01-01
Reliable information on the configuration of the water table in the Portland metropolitan area is needed to address concerns about various water-resource issues, especially the potential effects of existing or planned stormwater underground injection control (UIC) systems. To help address these concerns, this report presents estimated depth-to-water and water-table elevation maps for the Portland area, along with estimates of the relative uncertainty of the maps and of seasonal water-table fluctuations. The analysis of the water-table configuration relied on water-level data from shallow wells and from surface-water features that are representative of the water table. The largest source of available well data is water-level measurements in reports filed by well constructors at the time of new well installation, but these data frequently were not representative of static water-level conditions. Depth-to-water measurements reported in well-construction records generally were shallower than measurements made by the U.S. Geological Survey (USGS) in the same or nearby wells, although many were substantially deeper. The differences between depths to water reported in well records and those measured by the USGS in the same or nearby wells ranged from -119 to 156 feet, with a mean absolute difference of 36 feet. One possible cause for these differences is that water levels in many wells reported in well records were not at equilibrium at the time of measurement. As a result, the analysis of the water-table configuration relied on water levels measured during the current study or used in previous USGS investigations in the Portland area.
Because of the scarcity of well data in some areas, the locations of selected surface-water features, including major rivers, streams, lakes, wetlands, and springs where the water table is at land surface, were used to augment the analysis. Ground-water and surface-water data were combined for use in interpolating the water-table configuration. Interpolating the two representations typically used to define water-table position (depth to the water table below land surface and elevation of the water table above a datum) can produce substantially different results; the two results may represent the end members of a spectrum of possible interpolations largely determined by the quantity of recharge and the hydraulic properties of the aquifer. The depth-to-water and water-table elevation datasets for the current study were therefore interpolated independently by kriging, with parameters determined from semivariograms developed individually for each dataset. The resulting interpolations were then combined into a single, averaged representation of the water-table configuration. Kriging analysis also was used to develop a map of the relative uncertainty in the estimated water-table position. The accuracy of the depth-to-water and water-table elevation maps depends on various factors and assumptions pertaining to the data, the method of interpolation, and the hydrogeologic conditions of the surficial aquifers in the study area. Although the water-table configuration maps generally are representative of conditions in the study area, the actual position of the water table may differ from the estimated position at site-specific locations, and short-term, seasonal, and long-term variations in the differences can also be expected. The relative-uncertainty map addresses some, but not all, possible errors associated with the analysis of the water-table configuration and does not depict all sources of uncertainty.
Depth to water greater than 300 feet in the Portland area is limited to parts of the Tualatin Mountains, the foothills of the Cascade Range, and muc
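The "interpolate both representations, then average" idea in this abstract can be sketched numerically. The study used kriging; for a self-contained example, simple inverse-distance weighting (IDW) stands in for the interpolator, and the well locations and values below are hypothetical.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of values at query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical wells: location (x, y), land-surface elevation, depth to water (feet).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
land_surface = np.array([100.0, 120.0, 110.0, 130.0])
depth_to_water = np.array([10.0, 25.0, 15.0, 30.0])
wt_elevation = land_surface - depth_to_water  # water-table elevation above datum

query = np.array([[0.5, 0.5]])
land_q = idw(xy, land_surface, query)

# Interpolate each representation independently, express both as elevations,
# then average them into a single water-table estimate.
est_from_dtw = land_q - idw(xy, depth_to_water, query)
est_from_wte = idw(xy, wt_elevation, query)
combined = 0.5 * (est_from_dtw + est_from_wte)
print(combined)
```

Here the query point is equidistant from all four wells, so both interpolations reduce to simple means and agree; with real, unevenly distributed data the two estimates diverge, which is why the report averages them.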
Code of Federal Regulations, 2014 CFR
2014-07-01
.... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C 2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5. Table E-1 to Subpart E of Part 53... AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS: Procedures for Testing Physical (Design) and Performance...
Code of Federal Regulations, 2011 CFR
2011-07-01
.... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C 2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5. Table E-1 to Subpart E of Part 53... AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS: Procedures for Testing Physical (Design) and Performance...
Code of Federal Regulations, 2012 CFR
2012-07-01
.... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C 2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5. Table E-1 to Subpart E of Part 53... AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS: Procedures for Testing Physical (Design) and Performance...
Code of Federal Regulations, 2013 CFR
2013-07-01
.... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C 2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5. Table E-1 to Subpart E of Part 53... AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS: Procedures for Testing Physical (Design) and Performance...
High Temperature, Long Service Life Fuel Cell Bladder Materials
2004-03-01
Table 19. Inner Liner Rubber, D471 Results – Fluid Aging in JP8+100 @ 225°F. Table 20. Inner Liner Rubber, Tensile Properties – Fluid Aging in JP8+100 @ 225°F. Table 21. Inner Liner Rubber, Tear Properties – Fluid Aging in JP8+100... samples in accordance with ASTM D 471: Test Method for Rubber Property, Effects of Liquids. Fluid aging experiments were performed in friction
Parsing GML data based on integrative GML syntactic and semantic schemas database
NASA Astrophysics Data System (ADS)
Miao, Lizhi; Zhang, Shuliang; Lu, Guonian; Gao, Xiaoli; Jiao, Donglai; Gan, Jiayan
2007-06-01
This paper proposes a new method for parsing the various application schemas of Geography Markup Language (GML), capturing the syntax and semantics of their elements and types so that diverse users can interpret the same GML instance data uniformly. The proposed method generates an Integrative GML Syntactic and Semantic Schemas Database (IGSSSDB) from the GML 3.1 core schemas and the corresponding application schema. GML data are then parsed against the IGSSSDB, which holds the syntactic and semantic information, nesting information, and mapping rules of the GML core and application schemas. Three kinds of relational tables are designed to store schema information when constructing the IGSSSDB: info tables for the schemas included and the namespaces imported in application schemas, tables for information related to the schemas, and catalog tables of the core schemas. Within these tables, homologous regular expressions are used to describe the models of elements and complex types in the schemas, keeping the models complete and readable. On top of the IGSSSDB, a set of APIs was designed and developed to parse GML data and to process its syntactic and semantic information across diverse fields and users. A test study in the latter part of the paper shows that the proposed method is feasible and appropriate for parsing GML data, and it lays a good foundation for future GML data studies such as storage, indexing, and querying.
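The "info table" idea, rows of element/type metadata extracted from a schema document, can be illustrated in miniature. This sketch is not the paper's IGSSSDB implementation: the tiny application schema and the flat table layout below are invented for illustration, using only the Python standard library.

```python
import xml.etree.ElementTree as ET

XSD = "{http://www.w3.org/2001/XMLSchema}"

# Hypothetical minimal GML application schema (invented for this example).
schema_doc = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:gml="http://www.opengis.net/gml"
           targetNamespace="http://example.org/roads">
  <xs:element name="Road" type="RoadType" substitutionGroup="gml:_Feature"/>
  <xs:complexType name="RoadType">
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="centerline" type="gml:CurvePropertyType"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>"""

root = ET.fromstring(schema_doc)

# Relational-style "info table": one row per declared element, recording its
# name and declared type, in the spirit of the IGSSSDB syntactic tables.
info_table = []
for elem in root.iter(XSD + "element"):
    info_table.append({"element": elem.get("name"), "type": elem.get("type")})

for row in info_table:
    print(row)
```

A real implementation would also record nesting relationships and namespace imports, and would persist the rows in a database rather than a list.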
Proper motions and membership probabilities of stars in the region of globular cluster NGC 6366
NASA Astrophysics Data System (ADS)
Sariya, Devesh P.; Yadav, R. K. S.
2015-12-01
Context. NGC 6366 is a metal-rich globular cluster that is relatively unstudied. It is a kinematically interesting cluster, reported as belonging to the slowly rotating halo system, which is unusual given its metallicity and spatial location in the Galaxy. Aims: The purpose of this research is to determine the relative proper motions and membership probabilities of the stars in the region of globular cluster NGC 6366. A good proper-motion and membership-probability catalogue of NGC 6366 is needed to target cluster members reliably during spectroscopic surveys without including field stars. Methods: To derive relative proper motions, archival data from the Wide Field Imager mounted on the ESO 2.2 m telescope were reduced using high-precision astrometric software. The images used are in the B, V, and I photometric bands with an epoch gap of ~3.2 yr. The calibrated BVI magnitudes were determined using recent data for secondary standard stars. Results: We determined relative proper motions and cluster membership probabilities for 2530 stars in the field of globular cluster NGC 6366. The median proper-motion rms error for stars brighter than V ~ 18 mag is ~2 mas yr-1, gradually increasing to ~5 mas yr-1 for stars of magnitude V ~ 20 mag. Based on the membership catalogue, we checked the membership status of the X-ray sources and variable stars of NGC 6366 mentioned in the literature. We also provide the astronomical community with an electronic catalogue that includes B, V, and I magnitudes; relative proper motions; and membership probabilities of the stars in the region of NGC 6366. Based on observations with the MPG/ESO 2.2 m and ESO/VLT telescopes, located at La Silla and Paranal Observatory, Chile, under DDT programs 164.O-0561(F), 71.D-0220(A) and the archive material. Full Table 4 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/584/A59
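Membership probabilities of this kind are conventionally computed from a two-population (cluster plus field) model of the proper-motion distribution. The sketch below is not the paper's calculation; all distribution parameters are invented, and it simply shows the Bayesian combination of two bivariate Gaussians that such methods typically use.

```python
import math

def gauss2d(mux, muy, sx, sy, x, y):
    """Uncorrelated bivariate Gaussian density."""
    z = ((x - mux) / sx) ** 2 + ((y - muy) / sy) ** 2
    return math.exp(-0.5 * z) / (2.0 * math.pi * sx * sy)

def membership_probability(pmx, pmy, frac_cluster=0.6):
    """P(cluster member | proper motion), with invented model parameters."""
    # Cluster stars: tight distribution around the cluster's mean motion.
    phi_c = gauss2d(0.0, 0.0, 1.0, 1.0, pmx, pmy)   # mas/yr
    # Field stars: broad distribution offset from the cluster motion.
    phi_f = gauss2d(2.0, -3.0, 6.0, 6.0, pmx, pmy)  # mas/yr
    num = frac_cluster * phi_c
    return num / (num + (1.0 - frac_cluster) * phi_f)

print(membership_probability(0.2, -0.1))  # near the cluster mean motion
print(membership_probability(8.0, -9.0))  # well into the field distribution
```

In practice the cluster and field parameters are fitted to the observed vector-point diagram rather than assumed, and per-star proper-motion errors widen both Gaussians.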
Guo, Miao; Mishra, Abhinav; Buchanan, Robert L; Dubey, Jitender P; Hill, Dolores E; Gamble, H Ray; Pradhan, Abani K
2016-07-01
Toxoplasma gondii is a prevalent protozoan parasite worldwide. Human toxoplasmosis is responsible for considerable morbidity and mortality in the United States, and meat products have been identified as an important source of T. gondii infections in humans. The goal of this study was to develop a farm-to-table quantitative microbial risk assessment model to predict the public health burden in the United States associated with consumption of U.S. domestically produced lamb. T. gondii prevalence in market lambs was pooled from the 2011 National Animal Health Monitoring System survey, and the concentration of the infectious life stage (bradyzoites) was calculated in the developed model. A log-linear regression model was used to describe the reduction of T. gondii during home cooking, and an exponential dose-response model was used to predict the probability of infection. The mean probability of infection per serving of lamb was estimated to be 1.5 cases per 100,000 servings, corresponding to ∼6,300 new infections per year in the United States. Based on the sensitivity analysis, we identified cooking as the most effective means of reducing human health risk. This study provides a quantitative microbial risk assessment framework for T. gondii infection through consumption of lamb and quantifies the infection risk and public health burden associated with lamb consumption.
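The model structure described, a log-linear thermal reduction of bradyzoites during cooking feeding an exponential dose-response model, can be sketched as a small Monte Carlo simulation. All parameter values below (dose-response parameter, log-reduction rate, contamination and temperature distributions) are invented for illustration and are not the study's fitted values.

```python
import math
import random

random.seed(1)

R_PARAM = 0.005          # assumed exponential dose-response parameter
LOG_RED_PER_10C = 1.5    # assumed log10 reduction per 10 deg C above threshold

def surviving_dose(initial_bradyzoites, cook_temp_c, threshold_c=50.0):
    """Log-linear thermal inactivation above a temperature threshold."""
    excess = max(0.0, cook_temp_c - threshold_c)
    log_reduction = LOG_RED_PER_10C * excess / 10.0
    return initial_bradyzoites * 10.0 ** (-log_reduction)

def p_infection(dose):
    """Exponential dose-response model: P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-R_PARAM * dose)

# Monte Carlo over variable contamination levels and cooking temperatures.
risks = []
for _ in range(10_000):
    n0 = random.lognormvariate(math.log(100.0), 1.0)  # bradyzoites per serving
    temp = random.uniform(55.0, 75.0)                 # cooking temperature, deg C
    risks.append(p_infection(surviving_dose(n0, temp)))

mean_risk = sum(risks) / len(risks)
print(f"mean infection risk per serving: {mean_risk:.2e}")
```

Because the log reduction scales with temperature, the simulated risk is far more sensitive to the cooking-temperature distribution than to the initial contamination, mirroring the study's finding that cooking dominates the sensitivity analysis.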