The purpose of this manuscript is to describe the practical strategies developed for the implementation of the Minnesota Children's Pesticide Exposure Study (MNCPES), which is one of the first probability-based samples of multi-pathway and multi-pesticide exposures in children....
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of differing precision and their sources of uncertainty. Individual CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF linking the CPT positions to the prediction point were built from borehole experiments; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings modeled as normal distributions and as maximum-entropy probability density curves were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information-assimilation method for other geotechnical parameters.
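A minimal numerical sketch of the Bayesian updating step described above, with Gaussian stand-ins for the borehole-based spatial prior and the maximum-entropy CPT likelihood (the distributions, the grid, and all parameter values are assumptions for illustration, not the study's fitted quantities):

```python
import numpy as np

# Sketch: posterior density of Es at a prediction point by combining a
# spatial prior (from boreholes) with a CPT-derived likelihood on a grid.
es_grid = np.linspace(2.0, 20.0, 1000)   # candidate Es values (MPa), assumed range
dx = es_grid[1] - es_grid[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

prior = gaussian(es_grid, mu=9.0, sigma=2.5)        # spatial prior at the point
likelihood = gaussian(es_grid, mu=7.5, sigma=1.0)   # CPT-derived density

posterior = prior * likelihood
posterior /= posterior.sum() * dx                   # normalize numerically

mean_es = (es_grid * posterior).sum() * dx
print(f"posterior mean Es = {mean_es:.2f} MPa")
```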
Detecting truly clonal alterations from multi-region profiling of tumours
Werner, Benjamin; Traulsen, Arne; Sottoriva, Andrea; Dingli, David
2017-01-01
Modern cancer therapies aim at targeting tumour-specific alterations, such as mutations or neo-antigens, and maximal treatment efficacy requires that targeted alterations are present in all tumour cells. Currently, treatment decisions are based on one or a few samples per tumour, creating uncertainty about whether alterations found in those samples are actually present in all tumour cells. The probability of classifying clonal versus sub-clonal alterations from multi-region profiling of tumours depends on the earliest phylogenetic branching event during tumour growth. By analysing 181 samples from 10 renal carcinomas and 11 colorectal cancers we demonstrate that the information gain from additional sampling falls onto a simple universal curve. We found that in colorectal cancers, 30% of alterations identified as clonal with one biopsy proved sub-clonal when 8 samples were considered. The probability of overestimating clonal alterations fell below 1% in 7/11 patients with 8 samples per tumour. In renal cell carcinoma, 8 samples reduced the list of clonal alterations by 40% with respect to a single biopsy. The probability of overestimating clonal alterations remained as high as 92% in 7/10 renal cancer patients. Furthermore, treatment was associated with more unbalanced tumour phylogenetic trees, suggesting the need for denser sampling of tumours at relapse. PMID:28344344
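As a rough intuition for why additional biopsies shrink the list of apparently clonal alterations, here is a toy Monte Carlo (not the authors' phylogenetic model): a sub-clonal alteration carried by a fraction f of the tumour is misclassified as clonal only if every one of the s sampled regions happens to contain it.

```python
import numpy as np

# Toy model under strong independence assumptions: each biopsy contains the
# sub-clonal alteration with probability f, independently of the others.
rng = np.random.default_rng(0)

def p_misclassified_as_clonal(f, n_samples, n_trials=100_000):
    hits = rng.random((n_trials, n_samples)) < f   # biopsy contains the subclone?
    return np.mean(hits.all(axis=1))               # looks clonal in every sample

for s in (1, 2, 4, 8):
    print(s, "samples:", p_misclassified_as_clonal(f=0.6, n_samples=s))
# The misclassification probability decays roughly like f**s, mirroring the
# diminishing information gain from each additional sample.
```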
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
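The weakest-link idea can be sketched as follows; the cycles-to-linkup surrogate below is a hypothetical placeholder, whereas the dissertation grows the full collinear crack array with interacting stress intensity factors:

```python
import numpy as np

# Weakest-link Monte Carlo sketch: for each realization of initial crack
# sizes, failure life is governed by the adjacent pair that links up first.
rng = np.random.default_rng(1)

def cycles_to_linkup(a1, a2, spacing=10.0):
    # Hypothetical surrogate growth law: a smaller remaining ligament
    # between two cracks means fewer cycles to link-up.
    ligament = spacing - (a1 + a2)
    return max(ligament, 1e-6) ** 3.0

def failure_probability(n_cracks=20, n_mc=50_000, n_crit=200.0):
    failures = 0
    for _ in range(n_mc):
        a = rng.lognormal(mean=0.0, sigma=0.4, size=n_cracks)  # initial sizes
        # weakest link = adjacent pair with the fewest cycles to link up
        n_fail = min(cycles_to_linkup(a[i], a[i + 1]) for i in range(n_cracks - 1))
        failures += n_fail < n_crit
    return failures / n_mc

print(failure_probability())
```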
Multi-Scale/Multi-Functional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A multi-level (multi-scale/multi-functional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.
The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children a...
NASA Technical Reports Server (NTRS)
Baker, G. R.; Fethe, T. P.
1975-01-01
Research in the application of remotely sensed data from LANDSAT or other airborne platforms to the efficient management of a large timber-based forest industry was divided into three phases: (1) establishment of a photo/ground sample correlation, (2) investigation of techniques for multi-spectral digital analysis, and (3) development of a semi-automated multi-level sampling system. To properly verify results, three distinct test areas were selected: (1) Jacksonville Mill Region, Lower Coastal Plain, Flatwoods, (2) Pensacola Mill Region, Middle Coastal Plain, and (3) Mississippi Mill Region, Middle Coastal Plain. The following conclusions were reached: (1) the prospects of establishing an information base suitable for management requirements through a photo/ground double-sampling procedure, thereby alleviating the ground sampling effort, are encouraging, (2) known classification techniques must be investigated to ascertain the level of precision possible in separating the many densities involved, and (3) the multi-level approach must be related to an information system that is executable and feasible.
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and the global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes guided by a global structure of the entropy spectrum coupled with a small scale local diffusion. The intervoxel diffusion is sampled by multi b-shell multi q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in a scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
NASA Astrophysics Data System (ADS)
Sadegh, M.; Moftakhari, H.; AghaKouchak, A.
2017-12-01
Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST) that comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of scenarios of return period and design levels and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
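As an illustration of the kind of joint-probability computation such a toolbox performs, here is a minimal bivariate "AND" exceedance sketch using a Gaussian copula stand-in (MhAST itself fits 17 marginal distributions and 26 copula families; the correlation and quantiles below are invented):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.6                      # assumed dependence between the two drivers
u1, u2 = 0.99, 0.99            # marginal non-exceedance probabilities

# Joint non-exceedance under a Gaussian copula: C(u1,u2) = Phi_rho(z1,z2)
z = norm.ppf([u1, u2])
joint_cdf = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf(z)

# "AND" scenario: both drivers exceed their quantiles simultaneously
p_and = 1 - u1 - u2 + joint_cdf
print(f"joint exceedance = {p_and:.5f}, return period = {1 / p_and:.0f} yr")
```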
Wen, Xiaozhong; Chen, Weiqing; Gans, Kim M; Colby, Suzanne M; Lu, Ciyong; Liang, Caihua; Ling, Wenhua
2010-01-01
Background The prevalence of adolescent smoking has been increasing rapidly in China. Theory-based smoking prevention programmes in schools may be an effective approach to preventing smoking among Chinese adolescents. Methods A school-level cluster randomized controlled trial was conducted among 7th and 8th grade students (N = 2343) in four junior high schools in southern China during 2004-06. The theory-based, multi-level intervention was compared with the standard health curriculum. Outcome measures comprised changes in students' smoking-related knowledge, attitudes and behaviour. Results The mean knowledge scores from baseline to the 1- and 2-year follow-ups increased more in the intervention group than in the control group, whereas there was little change in attitude scores. At the 1-year follow-up (the total sample), the interventions reduced the probability of baseline experimental smokers' escalating to regular smoking [7.9 vs 18.3%; adjusted odds ratio (OR) 0.34, 95% confidence interval (CI) 0.12-0.97, P = 0.043], but did not reduce the probability of baseline non-smokers' initiating smoking (7.9 vs 10.6%; adjusted OR 0.86, 95% CI 0.54-1.38, P = 0.538). At the 2-year follow-up (only 7th grade students), similar proportions of baseline non-smokers initiated smoking in the intervention group and the control group (13.5 vs 13.1%), while a possibly lower proportion of baseline experimental smokers escalated to regular smoking in the intervention group than the control group (22.6 vs 40.0%; adjusted OR 0.43, 95% CI 0.12-1.57, P = 0.199). Conclusions This multi-level intervention programme had a moderate effect on inhibiting the escalation from experimental to regular smoking among Chinese adolescents, but had little effect on the initiation of smoking. The programme improved adolescents' smoking-related knowledge, but did not change their attitudes towards smoking. PMID:20236984
ERIC Educational Resources Information Center
Singer, Judith D.; Willett, John B.
The National Center for Education Statistics (NCES) is exploring the possibility of conducting a large-scale multi-year study of teachers' careers. The proposed new study is intended to follow a national probability sample of teachers over an extended period of time. A number of methodological issues need to be addressed before the study can be…
Forecasting Solar Flares Using Magnetogram-based Predictors and Machine Learning
NASA Astrophysics Data System (ADS)
Florios, Kostas; Kontogiannis, Ioannis; Park, Sung-Hong; Guerra, Jordan A.; Benvenuto, Federico; Bloomfield, D. Shaun; Georgoulis, Manolis K.
2018-02-01
We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product that facilitates cut-out magnetograms of solar active regions (AR) in the Sun in near-realtime (NRT), taken over a five-year interval (2012 - 2016). Our approach utilizes a set of thirteen predictors, which are not included in the SHARP metadata, extracted from line-of-sight and vector photospheric magnetograms. We exploit several machine learning (ML) and conventional statistics techniques to predict flares of peak magnitude >M1 and >C1 within a 24 h forecast window. The ML methods used are multi-layer perceptrons (MLP), support vector machines (SVM), and random forests (RF). We conclude that random forests could be the prediction technique of choice for our sample, with the second-best method being multi-layer perceptrons, subject to an entropy objective function. A Monte Carlo simulation showed that the best-performing method gives accuracy ACC=0.93(0.00), true skill statistic TSS=0.74(0.02), and Heidke skill score HSS=0.49(0.01) for >M1 flare prediction with probability threshold 15% and ACC=0.84(0.00), TSS=0.60(0.01), and HSS=0.59(0.01) for >C1 flare prediction with probability threshold 35%.
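The verification metrics quoted (ACC, TSS, HSS) follow directly from a 2x2 contingency table of forecasts against observed flares; a small sketch, with hypothetical counts:

```python
# Forecast-verification metrics from a 2x2 contingency table:
# tp = hits, fn = misses, fp = false alarms, tn = correct negatives.
def skill_scores(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    acc = (tp + tn) / n
    tss = tp / (tp + fn) - fp / (fp + tn)        # true skill statistic (POD - POFD)
    # Heidke skill score: improvement of accuracy over random chance
    e = ((tp + fn) * (tp + fp) + (tn + fn) * (tn + fp)) / n
    hss = (tp + tn - e) / (n - e)
    return acc, tss, hss

# hypothetical counts for illustration only
print(skill_scores(tp=80, fn=20, fp=150, tn=4750))
```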
Probability of detection of clinical seizures using heart rate changes.
Osorio, Ivan; Manly, B F J
2015-08-01
Heart rate-based seizure detection is a viable complement or alternative to ECoG/EEG. This study investigates the role of various biological factors on the probability of clinical seizure detection using heart rate. Regression models were applied to 266 clinical seizures recorded from 72 subjects to investigate if factors such as age, gender, years with epilepsy, etiology, seizure site origin, seizure class, and data collection centers, among others, shape the probability of EKG-based seizure detection. Clinical seizure detection probability based on heart rate changes, is significantly (p<0.001) shaped by patients' age and gender, seizure class, and years with epilepsy. The probability of detecting clinical seizures (>0.8 in the majority of subjects) using heart rate is highest for complex partial seizures, increases with a patient's years with epilepsy, is lower for females than for males and is unrelated to the side of hemisphere origin. Clinical seizure detection probability using heart rate is multi-factorially dependent and sufficiently high (>0.8) in most cases to be clinically useful. Knowledge of the role that these factors play in shaping said probability will enhance its applicability and usefulness. Heart rate is a reliable and practical signal for extra-cerebral detection of clinical seizures originating from or spreading to central autonomic network structures.
A generic multi-hazard and multi-risk framework and its application illustrated in a virtual city
NASA Astrophysics Data System (ADS)
Mignan, Arnaud; Euchner, Fabian; Wiemer, Stefan
2013-04-01
We present a generic framework to implement hazard correlations in multi-risk assessment strategies. We consider hazard interactions (process I), time-dependent vulnerability (process II) and time-dependent exposure (process III). Our approach is based on the Monte Carlo method to simulate a complex system, which is defined from assets exposed to a hazardous region. We generate 1-year time series, sampling from a stochastic set of events. Each time series corresponds to one risk scenario and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. Each sampled event is associated with a time of occurrence, a damage footprint and a loss footprint. The occurrence of an event depends on its rate, which is conditional on the occurrence of past events (process I, concept of correlation matrix). Damage depends on the hazard intensity and on the vulnerability of the asset, which is conditional on previous damage on that asset (process II). Losses are the product of damage and exposure value, this value being the original exposure minus previous losses (process III, no reconstruction considered). The Monte Carlo method allows for a straightforward implementation of uncertainties and for implementation of numerous interactions, which is otherwise challenging in an analytical multi-risk approach. We apply our framework to a synthetic data set, defined by a virtual city within a virtual region. This approach gives the opportunity to perform multi-risk analyses in a controlled environment while not requiring real data, which may be difficult to access or simply unavailable to the public. Based on the heuristic approach, we define a 100 by 100 km region where earthquakes, volcanic eruptions, fluvial floods, hurricanes and coastal floods can occur. All hazards are harmonized to a common format. We define a 20 by 20 km city, composed of 50,000 identical buildings with a fixed economic value. Vulnerability curves are defined in terms of mean damage ratio as a function of hazard intensity. All data are based on simple equations found in the literature and on other simplifications. We show the impact of earthquake-earthquake interaction and hurricane-storm surge coupling, as well as of time-dependent vulnerability and exposure, on aggregated loss curves. One main result is the emergence of low probability-high consequences (extreme) events when correlations are implemented. While the concept of virtual city can suggest the theoretical benefits of multi-risk assessment for decision support, identifying its real-world practicality will require the study of real test sites.
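A compressed sketch of the simulation loop described above, with invented hazards, rates and couplings; processes I-III appear as the rate update, the damage-conditional vulnerability, and the shrinking exposure:

```python
import numpy as np

rng = np.random.default_rng(42)

hazards = ["earthquake", "hurricane", "coastal flood"]   # illustrative subset
base_rate = np.array([0.10, 0.30, 0.20])                 # events per year (invented)
interaction = np.array([
    [2.0, 1.0, 1.0],   # earthquake raises subsequent earthquake rate
    [1.0, 1.0, 3.0],   # hurricane raises coastal-flood (storm surge) rate
    [1.0, 1.0, 1.0],
])

def one_year(exposure=1e9, vulnerability=0.05):
    rate, losses, damage_state = base_rate.copy(), 0.0, 0.0
    for _ in range(12):                                  # monthly steps
        occurred = rng.random(3) < rate / 12
        for i in np.flatnonzero(occurred):
            dmg = vulnerability * (1 + damage_state)     # weakened assets lose more (II)
            losses += dmg * (exposure - losses)          # no reconstruction (III)
            damage_state += dmg
            rate = rate * interaction[i]                 # hazard correlation (I)
    return losses

annual_losses = [one_year() for _ in range(10_000)]
print(np.percentile(annual_losses, [50, 99]))            # typical vs extreme years
```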
Red-shouldered hawk occupancy surveys in central Minnesota, USA
Henneman, C.; McLeod, M.A.; Andersen, D.E.
2007-01-01
Forest-dwelling raptors are often difficult to detect because many species occur at low density or are secretive. Broadcasting conspecific vocalizations can increase the probability of detecting forest-dwelling raptors and has been shown to be an effective method for locating raptors and assessing their relative abundance. Recent advances in statistical techniques based on presence-absence data use probabilistic arguments to derive probability of detection when it is <1 and to provide a model and likelihood-based method for estimating proportion of sites occupied. We used these maximum-likelihood models with data from red-shouldered hawk (Buteo lineatus) call-broadcast surveys conducted in central Minnesota, USA, in 1994-1995 and 2004-2005. Our objectives were to obtain estimates of occupancy and detection probability 1) over multiple sampling seasons (yr), 2) incorporating within-season time-specific detection probabilities, 3) with call type and breeding stage included as covariates in models of probability of detection, and 4) with different sampling strategies. We visited individual survey locations 2-9 times per year, and estimates of both probability of detection (range = 0.28-0.54) and site occupancy (range = 0.81-0.97) varied among years. Detection probability was affected by inclusion of a within-season time-specific covariate, call type, and breeding stage. In 2004 and 2005 we used survey results to assess the effect that number of sample locations, double sampling, and discontinued sampling had on parameter estimates. We found that estimates of probability of detection and proportion of sites occupied were similar across different sampling strategies, and we suggest ways to reduce sampling effort in a monitoring program.
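The occupancy estimates cited rest on a likelihood that mixes "occupied but undetected" with "absent" for all-zero detection histories; a minimal single-season sketch (detection histories and starting values are invented, and covariates such as call type and breeding stage are omitted):

```python
import numpy as np
from scipy.optimize import minimize

# Single-season occupancy likelihood: psi = occupancy, p = per-visit detection.
# Rows = survey sites, columns = repeat visits (toy detection histories).
hist = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 1]])

def neg_log_lik(theta):
    psi, p = 1 / (1 + np.exp(-theta))          # logit scale -> probabilities
    d = hist.sum(axis=1)                       # detections per site
    k = hist.shape[1]                          # visits per site
    lik_detected = psi * p**d * (1 - p)**(k - d)
    lik_all_zero = psi * (1 - p)**k + (1 - psi)  # missed at every visit, or absent
    lik = np.where(d > 0, lik_detected, lik_all_zero)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print("psi, p =", 1 / (1 + np.exp(-fit.x)))
```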
Texas School Survey of Substance Abuse: Grades 7-12. 1992.
ERIC Educational Resources Information Center
Liu, Liang Y.; Fredlund, Eric V.
The 1992 Texas School Survey results for secondary students are based on data collected from a sample of 73,073 students in grades 7 through 12. Students were randomly selected from school districts throughout the state using a multi-stage probability design. The procedure ensured that students living in metropolitan and rural areas of Texas are…
NASA Astrophysics Data System (ADS)
Giovanis, D. G.; Shields, M. D.
2018-07-01
This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
A case for multi-model and multi-approach based event attribution: The 2015 European drought
NASA Astrophysics Data System (ADS)
Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Seneviratne, Sonia Isabelle
2017-04-01
Science on the role of anthropogenic influence on extreme weather events such as heat waves or droughts has evolved rapidly over the past years. The approach of "event attribution" compares the occurrence probability of an event in the present, factual world with the probability of the same event in a hypothetical, counterfactual world without human-induced climate change. Every such analysis necessarily faces multiple methodological choices including, but not limited to: the event definition, climate model configuration, and the design of the counterfactual world. Here, we explore the role of such choices for an attribution analysis of the 2015 European summer drought (Hauser et al., in preparation). While some GCMs suggest that anthropogenic forcing made the 2015 drought more likely, others suggest no impact, or even a decrease in the event probability. These results additionally differ for single GCMs, depending on the reference used for the counterfactual world. Observational results do not suggest a historical tendency towards more drying, but the record may be too short to provide robust assessments because of the large interannual variability of drought occurrence. These results highlight the need for a multi-model and multi-approach framework in event attribution research. This is especially important for events with low signal to noise ratio and high model dependency such as regional droughts. Hauser, M., L. Gudmundsson, R. Orth, A. Jézéquel, K. Haustein, S.I. Seneviratne, in preparation. A case for multi-model and multi-approach based event attribution: The 2015 European drought.
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
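A minimal sketch of the multi-fidelity importance sampling step, with placeholder functions standing in for the jet simulations: the cheap model locates the failure region and proposes a biasing density, and the expensive model is evaluated only on draws from it, reweighted to keep the estimator unbiased.

```python
import numpy as np

rng = np.random.default_rng(7)

def f_hi(x):   # expensive high-fidelity model (placeholder): jet width
    return 1.0 + x + 0.1 * np.sin(5 * x)

def f_lo(x):   # cheap low-fidelity surrogate (placeholder)
    return 1.0 + x

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

threshold = -1.5                                   # failure: width below threshold
prior_draws = rng.normal(0, 1, 200_000)            # cheap exploration of inputs
fail_lo = prior_draws[f_lo(prior_draws) < threshold]

# biasing density q: normal fitted to surrogate-predicted failures
mu_q, sd_q = fail_lo.mean(), fail_lo.std()
x = rng.normal(mu_q, sd_q, 2_000)                  # few high-fidelity evaluations

w = normal_pdf(x, 0, 1) / normal_pdf(x, mu_q, sd_q)   # importance weights
p_fail = np.mean(w * (f_hi(x) < threshold))            # unbiased estimate
print(p_fail)
```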
Dealing with uncertainty in the probability of overtopping of a flood mitigation dam
NASA Astrophysics Data System (ADS)
Michailidi, Eleni Maria; Bacchi, Baldassare
2017-05-01
In recent years, copula multivariate functions were used to model, probabilistically, the most important variables of flood events: discharge peak, flood volume and duration. However, in most cases the sampling uncertainty, from which small-sized samples suffer, is neglected. In this paper, considering a real reservoir controlled by a dam as a case study, we apply a structure-based approach to estimate the probability of reaching specific reservoir levels, taking into account the key components of an event (flood peak, volume, hydrograph shape) and of the reservoir (rating curve, volume-water depth relation). Additionally, we improve information about the peaks from historical data and reports through a Bayesian framework, allowing the incorporation of supplementary knowledge from different sources and its associated error. As shown here, the extra information can result in a very different inferred parameter set, and consequently this is reflected as a strong variability of the reservoir level associated with a given return period. Most importantly, the sampling uncertainty is accounted for in both cases (single-site and multi-site with historical information scenarios), and Monte Carlo confidence intervals for the maximum water level are calculated. It is shown that the water levels for different return periods overlap in many cases, making any risk assessment that omits confidence intervals misleading.
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
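The predictive method can be sketched by simulation: draw the treatment difference from the prior, simulate the trial, and classify the resulting interval estimate against the point of clinical equivalence. All numerical settings below are illustrative assumptions, not the paper's elicited values:

```python
import numpy as np

rng = np.random.default_rng(3)

def predicted_decisions(n, prior_mu=0.3, prior_sd=0.3, equiv=0.0,
                        sigma=1.0, n_sim=20_000):
    reserve = use_a = use_b = 0
    for _ in range(n_sim):
        delta = rng.normal(prior_mu, prior_sd)            # prior clinical opinion
        diff = rng.normal(delta, sigma * np.sqrt(2 / n))  # observed mean difference
        half = 1.96 * sigma * np.sqrt(2 / n)              # 95% interval half-width
        if diff - half > equiv:
            use_b += 1                                    # interval excludes equivalence
        elif diff + half < equiv:
            use_a += 1
        else:
            reserve += 1                                  # final judgement reserved
    return use_a / n_sim, use_b / n_sim, reserve / n_sim

# choose n so the predicted probability of reserving judgement is controlled
for n in (25, 50, 100, 200):
    print(n, predicted_decisions(n))
```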
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the operation of the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Therefore, continuous sampling control of product completeness, based on statistical methods, remains an important element in managing the assembly quality of multi-element mass-produced products on automatic rotor lines. A distinctive feature of continuous sampling control of product completeness during assembly is that the inspection is destructive: component parts cannot be returned to the process stream after control, which reduces the actual productivity of the assembly equipment. Therefore, applying statistical procedures for continuous sampling control of product completeness during assembly on automatic rotor lines requires sampling plans that ensure a minimum control sample size. Comparison of the limit values of the average output defect level for the continuous sampling plan (CSP) and the automated continuous sampling plan (ACSP) shows that lower limit values of the average output defect level can be achieved using ACSP-1. The average sample size under the ACSP-1 plan is also smaller than under the CSP-1 plan. Thus, applying statistical methods to the assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, will make it possible to automate sampling control procedures and to maintain the required quality level of assembled products while minimizing sample size.
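For context, the classical CSP-1 plan referenced above alternates between 100% inspection and fractional sampling; a small simulation sketch with invented parameters (the paper's ACSP-1 variant is not reproduced here):

```python
import numpy as np

# Dodge's CSP-1: inspect every unit until i consecutive conforming units are
# found, then inspect a random fraction f; resume 100% inspection on a defect.
rng = np.random.default_rng(5)

def csp1_inspected_fraction(p_defect=0.01, i=30, f=0.1, n_units=200_000):
    inspected, clearing, run = 0, True, 0
    for _ in range(n_units):
        sampled = clearing or (rng.random() < f)
        if not sampled:
            continue
        inspected += 1
        defective = rng.random() < p_defect
        if clearing:
            run = 0 if defective else run + 1
            if run >= i:
                clearing = False          # switch to fractional sampling
        elif defective:
            clearing, run = True, 0       # back to 100% inspection
    return inspected / n_units

print(csp1_inspected_fraction())          # average fraction of units inspected
```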
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
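The coverage-probability comparison can be mimicked with a toy simulation in which each laboratory contributes its own offset; a single-lab study then under-covers the true effect because between-lab heterogeneity never enters its standard error. All settings are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

def coverage(n_labs, n_per_lab, true_effect=1.0, lab_sd=0.5,
             noise_sd=1.0, n_sim=5_000):
    hits = 0
    for _ in range(n_sim):
        lab_offsets = rng.normal(0, lab_sd, n_labs)      # between-lab heterogeneity
        y = rng.normal(true_effect + np.repeat(lab_offsets, n_per_lab), noise_sd)
        se = y.std(ddof=1) / np.sqrt(y.size)
        hits += abs(y.mean() - true_effect) < 1.96 * se  # 95% CI covers truth?
    return hits / n_sim

for labs in (1, 2, 4):                                   # same total sample size
    print(labs, "lab(s):", coverage(labs, n_per_lab=24 // labs))
```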
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
FUNSTAT and statistical image representations
NASA Technical Reports Server (NTRS)
Parzen, E.
1983-01-01
General ideas of functional statistical inference for the analysis of one and two samples, univariate and bivariate, are outlined. The ONESAM program is applied to analyze the univariate probability distributions of multi-spectral image data.
Manzini, G; Ettrich, T J; Kremer, M; Kornmann, M; Henne-Bruns, D; Eikema, D A; Schlattmann, P; de Wreede, L C
2018-02-13
Standard survival analysis fails to give insight into what happens to a patient after a first outcome event (like first relapse of a disease). Multi-state models are a useful tool for analyzing survival data when different treatments and results (intermediate events) can occur. The aim of this study was to implement a multi-state model on data of patients with rectal cancer to illustrate the advantages of multi-state analysis in comparison to standard survival analysis. We re-analyzed data from the RCT FOGT-2 study by using a multi-state model. Based on the results we defined a high and a low risk reference patient. Using dynamic prediction, we estimated how the survival probability changes as more information about the clinical history of the patient becomes available. A patient with stage UICC IIIc (vs UICC II) has a higher risk of developing distant metastasis (DM) or both DM and local recurrence (LR) if he/she discontinues chemotherapy within 6 months or between 6 and 12 months, as well as after the completion of 12 months CTx, with HR 3.55 (p = 0.026), 5.33 (p = 0.001) and 3.37 (p < 0.001), respectively. He/she also has a higher risk of dying after the development of DM (HR 1.72, p = 0.023). Anterior resection vs. abdominoperineal amputation corresponds to a 63% reduction in the risk of developing DM or both DM and LR (HR 0.37, p = 0.003) after discontinuation of chemotherapy between 6 and 12 months. After development of LR, a woman has a 4.62 times higher risk of dying (p = 0.006). A high risk reference patient has an estimated 43% 5-year survival probability at the start of CTx, whereas for a low risk patient this is 79%. After the development of DM 1 year later, the high risk patient has an estimated 5-year survival probability of 11% and the low risk patient one of 21%. Multi-state models help to gain additional insight into the complex events after the start of treatment. Dynamic prediction shows how survival probabilities change with the progression of the clinical history.
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
O'Brien, Kathryn; Edwards, Adrian; Hood, Kerenza; Butler, Christopher C
2013-02-01
Urinary tract infection (UTI) in children may be associated with long-term complications that could be prevented by prompt treatment. To determine the prevalence of UTI in acutely ill children ≤ 5 years presenting in general practice and to explore patterns of presenting symptoms and urine sampling strategies. Prospective observational study with systematic urine sampling, in general practices in Wales, UK. In total, 1003 children were recruited from 13 general practices between March 2008 and July 2010. The prevalence of UTI was determined and multivariable analysis performed to determine the probability of UTI. Out of 597 (60.0%) children who provided urine samples within 2 days, the prevalence of UTI was 5.9% (95% confidence interval [CI] = 4.3% to 8.0%) overall, 7.3% in those < 3 years and 3.2% in 3-5 year olds. Neither a history of fever nor the absence of an alternative source of infection was associated with UTI (P = 0.64; P = 0.69, respectively). The probability of UTI in children aged ≥3 years without increased urinary frequency or dysuria was 2%. The probability of UTI was ≥5% in all other groups. Urine sampling based purely on GP suspicion would have missed 80% of UTIs, while a sampling strategy based on current guidelines would have missed 50%. Approximately 6% of acutely unwell children presenting to UK general practice met the criteria for a laboratory diagnosis of UTI. This higher than previously recognised prior probability of UTI warrants raised awareness of the condition and suggests clinicians should lower their threshold for urine sampling in young children. The absence of fever or presence of an alternative source of infection, as emphasised in current guidelines, may not rule out UTI in young children with adequate certainty.
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma Distribution modelling structure, but with a new parameterization technique. We used two parameters of a first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of a Gamma distribution (mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that the use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multi-variate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sampled these two Gamma parameters from the multi-variate normal distribution for each month of each year and used them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified by an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past period (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we have used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones, including coastal, monsoonal, and arid characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will further discuss the relative merits of both models for hydrological simulation in the presentation.
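A condensed sketch of the hierarchical MC-Gamma generator described above (the transition probabilities, the monthly parameter mean, and the covariance are invented stand-ins for values fitted to gauge data):

```python
import numpy as np

rng = np.random.default_rng(2016)

p_ww, p_dd = 0.55, 0.80          # wet-to-wet, dry-to-dry transition probabilities
param_mean = np.array([8.0, 6.0])                 # monthly mean, sd of wet-day rain
param_cov = np.array([[4.0, 2.0], [2.0, 3.0]])    # their year-to-year covariance

def simulate_month(n_days=30):
    # Hierarchical step: draw this month-year's Gamma parameters, then rain.
    mu, sd = np.maximum(rng.multivariate_normal(param_mean, param_cov), 0.1)
    shape, scale = (mu / sd) ** 2, sd**2 / mu     # Gamma with that mean and sd
    wet, rain = rng.random() < 0.5, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else 1 - p_dd)   # first-order MC
        rain.append(rng.gamma(shape, scale) if wet else 0.0)
    return np.array(rain)

totals = [simulate_month().sum() for _ in range(1000)]
print(np.mean(totals), np.std(totals))   # variance inflated by the hierarchical step
```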
Multi-year predictability of climate, drought, and wildfire in southwestern North America.
Chikamoto, Yoshimitsu; Timmermann, Axel; Widlansky, Matthew J; Balmaseda, Magdalena A; Stott, Lowell
2017-07-26
Past severe droughts over North America have led to massive water shortages and increases in wildfire frequency. Triggering sources for multi-year droughts in this region include randomly occurring atmospheric blocking patterns, ocean impacts on atmospheric circulation, and climate's response to anthropogenic radiative forcings. A combination of these sources translates into a difficulty to predict the onset and length of such droughts on multi-year timescales. Here we present results from a new multi-year dynamical prediction system that exhibits a high degree of skill in forecasting wildfire probabilities and drought for 10-23 and 10-45 months lead time, which extends far beyond the current seasonal prediction activities for southwestern North America. Using a state-of-the-art earth system model along with 3-dimensional ocean data assimilation and by prescribing the external radiative forcings, this system simulates the observed low-frequency variability of precipitation, soil water, and wildfire probabilities in close agreement with observational records and reanalysis data. The underlying source of multi-year predictability can be traced back to variations of the Atlantic/Pacific sea surface temperature gradient, external radiative forcings, and the low-pass filtering characteristics of soils.
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
Reduction of Racial Disparities in Prostate Cancer
2006-12-01
...hyperplasia, interstitial cystitis/chronic pelvic pain, prostatitis, hypogonadism/androgen deficiency, erectile dysfunction, female sexual... socioeconomically diverse, community-based sample of adults aged 30–79 years in Boston, Massachusetts. This report gives estimates from the 2301 men in... survey is designed to estimate the prevalence of symptoms of urological disorders in a multi-ethnic, community-based sample of adults aged 30–79 years
Detection of Sea Ice and Open Water from RADARSAT-2 Images for Data Assimilation
NASA Astrophysics Data System (ADS)
Komarov, A.; Buehner, M.
2016-12-01
Automated detection of sea ice and open water from SAR data is very important for further assimilation into coupled ocean-sea ice-atmosphere numerical models, such as the Regional Ice-Ocean Prediction System being implemented at Environment and Climate Change Canada. Conventional classification approaches based on various learning techniques are limited by the fact that they typically do not indicate the level of confidence for ice and water retrievals. Meanwhile, only ice/water retrievals with a very high level of confidence are allowed to be assimilated into the sea ice model, to avoid propagating and magnifying errors in the numerical prediction system. In this study we developed a new technique for ice and water detection from dual-polarization RADARSAT-2 HH-HV images which provides the probability of ice/water at a given location. We collected many hundreds of thousands of SAR signatures over various sea ice types (i.e. new, grey, first-year, and multi-year ice) and open water from all available RADARSAT-2 images and the corresponding Canadian Ice Service Image Analysis products over the period from November 2010 to May 2016. Our analysis of the dataset revealed that ice/water separation can be effectively performed in the space of SAR-based variables independent of the incidence angle and noise floor (such as texture measures) and auxiliary Global Environmental Multiscale Model parameters (such as surface wind speed). The choice of parameters will be specifically discussed in the presentation. An empirical ice-probability model, as a function of the selected predictors, was built in the form of a logistic regression, based on the training dataset from 2012 to 2016. The developed ice-probability model showed very good performance on the independent testing subset (year 2011). With an ice/water probability threshold of 0.95, reflecting a very high level of confidence, 79% of the testing ice and water samples were classified, with an accuracy of 99%. These results are particularly important in light of the upcoming RADARSAT Constellation mission, which will drastically increase the amount of SAR data over the Arctic region.
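A minimal sketch of how such a logistic ice-probability model would be applied at retrieval time, with hypothetical coefficients and two stand-in predictors (a SAR texture measure and model wind speed); only very confident retrievals pass the 0.95 threshold used in the study:

```python
import numpy as np

coef = np.array([0.8, -1.2])      # hypothetical fitted weights
intercept = 0.5                   # hypothetical fitted intercept

def ice_probability(texture, wind_speed):
    # Logistic regression: p(ice) = 1 / (1 + exp(-(b0 + b . x)))
    z = intercept + coef @ np.array([texture, wind_speed])
    return 1 / (1 + np.exp(-z))

p = ice_probability(texture=2.1, wind_speed=0.4)
if p >= 0.95:
    print(f"assimilate as ice (p={p:.3f})")
elif p <= 0.05:
    print(f"assimilate as open water (p={p:.3f})")
else:
    print(f"confidence too low, withhold from assimilation (p={p:.3f})")
```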
Teaching Probability to Pre-Service Teachers with Argumentation Based Science Learning Approach
ERIC Educational Resources Information Center
Can, Ömer Sinan; Isleyen, Tevfik
2016-01-01
The aim of this study is to explore the effects of the argumentation based science learning (ABSL) approach on the teaching probability to pre-service teachers. The sample of the study included 41 students studying at the Department of Elementary School Mathematics Education in a public university during the 2014-2015 academic years. The study is…
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Interference Information Based Power Control for Cognitive Radio with Multi-Hop Cooperative Sensing
NASA Astrophysics Data System (ADS)
Yu, Youngjin; Murata, Hidekazu; Yamamoto, Koji; Yoshida, Susumu
Reliable detection of other radio systems is crucial for systems that share the same frequency band. In wireless communication channels, there is uncertainty in the received signal level due to multipath fading and shadowing. Cooperative sensing techniques in which radio stations share their sensing information can improve the detection probability of other systems. In this paper, a new cooperative sensing scheme that reduces the false detection probability while maintaining the outage probability of other systems is investigated. In the proposed system, sensing information is collected using multi-hop transmission from all sensing stations that detect other systems, and transmission decisions are based on the received sensing information. The proposed system also controls the transmit power based on the received CINRs from the sensing stations. Simulation results reveal that the proposed system can reduce the outage probability of other systems, or improve its link success probability.
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density: here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
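The linear spline model named in the results can be illustrated with a short least-squares sketch; the knot location, coefficients, and data below are synthetic, not the study's.

```python
# Sketch of a linear spline (hinge) fit of word-learning accuracy on
# phonotactic probability, with a single illustrative knot.
import numpy as np

rng = np.random.default_rng(3)
prob = rng.uniform(0.0, 1.0, 200)                    # phonotactic probability
knot = 0.25
accuracy = (0.4 + 0.8 * np.minimum(prob, knot)
            - 0.6 * np.maximum(prob - knot, 0.0)
            + 0.05 * rng.standard_normal(200))

# Basis: intercept, slope below the knot, change in slope above the knot.
X = np.column_stack([np.ones_like(prob), prob, np.maximum(prob - knot, 0.0)])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
print(beta)   # [intercept, slope below knot, slope change above knot]
```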
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
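For orientation, the classical baseline the authors contrast against can be shown in a few lines: transition probabilities of a small linear birth-death CTMC computed by matrix exponentiation, the step that becomes infeasible as the state space grows. The rates and state-space cap are illustrative.

```python
# Classical baseline: transition probabilities of a small CTMC via matrix
# exponentiation, P(t) = expm(Q t). This is exactly the computation that
# becomes infeasible for large or countably infinite state spaces.
import numpy as np
from scipy.linalg import expm

birth, death, n_states = 0.5, 0.3, 50
Q = np.zeros((n_states, n_states))
for i in range(n_states):
    if i + 1 < n_states:
        Q[i, i + 1] = birth * i          # linear birth rate
    if i - 1 >= 0:
        Q[i, i - 1] = death * i          # linear death rate
    Q[i, i] = -Q[i].sum()                # rows of a rate matrix sum to zero

P = expm(Q * 1.0)                        # transition matrix at t = 1
print(P[5, :8].round(4))                 # P(X_1 = j | X_0 = 5)
```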
O’Brien, Kathryn; Edwards, Adrian; Hood, Kerenza; Butler, Christopher C
2013-01-01
Background: Urinary tract infection (UTI) in children may be associated with long-term complications that could be prevented by prompt treatment. Aim: To determine the prevalence of UTI in acutely ill children ≤ 5 years presenting in general practice and to explore patterns of presenting symptoms and urine sampling strategies. Design and setting: Prospective observational study with systematic urine sampling, in general practices in Wales, UK. Method: In total, 1003 children were recruited from 13 general practices between March 2008 and July 2010. The prevalence of UTI was determined and multivariable analysis performed to determine the probability of UTI. Results: Among the 597 (60.0%) children who provided urine samples within 2 days, the prevalence of UTI was 5.9% (95% confidence interval [CI] = 4.3% to 8.0%) overall, 7.3% in those < 3 years and 3.2% in 3–5 year olds. Neither a history of fever nor the absence of an alternative source of infection was associated with UTI (P = 0.64; P = 0.69, respectively). The probability of UTI in children aged ≥ 3 years without increased urinary frequency or dysuria was 2%; the probability of UTI was ≥ 5% in all other groups. Urine sampling based purely on GP suspicion would have missed 80% of UTIs, while a sampling strategy based on current guidelines would have missed 50%. Conclusion: Approximately 6% of acutely unwell children presenting to UK general practice met the criteria for a laboratory diagnosis of UTI. This higher than previously recognised prior probability of UTI warrants raised awareness of the condition and suggests clinicians should lower their threshold for urine sampling in young children. The absence of fever or presence of an alternative source of infection, as emphasised in current guidelines, may not rule out UTI in young children with adequate certainty. PMID:23561695
Are ranger patrols effective in reducing poaching-related threats within protected areas?
Moore, Jennifer F.; Mulindahabi, Felix; Masozera, Michel K.; Nichols, James; Hines, James; Turikunkiko, Ezechiel; Oli, Madan K.
2018-01-01
Poaching is one of the greatest threats to wildlife conservation world-wide. However, the spatial and temporal patterns of poaching activities within protected areas, and the effectiveness of ranger patrols and ranger posts in mitigating these threats, are relatively unknown. We used 10 years (2006–2015) of ranger-based monitoring data and dynamic multi-season occupancy models to quantify poaching-related threats, to examine factors influencing the spatio-temporal dynamics of these threats and to test the efficiency of management actions to combat poaching in Nyungwe National Park (NNP), Rwanda. The probability of occurrence of poaching-related threats was highest at lower elevations (1,801–2,200 m), especially in areas that were close to roads and tourist trails; conversely, occurrence probability was lowest at high elevation sites (2,601–3,000 m), and near the park boundary and ranger posts. The number of ranger patrols substantially increased the probability that poaching-related threats disappear at a site if threats were originally present (i.e. the probability of extinction of threats). Without ranger visits, the annual probability of extinction of poaching-related threats was an estimated 7%; this probability would increase to 20% and 57% with 20 and 50 ranger visits per year, respectively. Our results suggest that poaching-related threats can be effectively reduced in NNP by adding ranger posts in areas where they do not currently exist, and by increasing the number of patrols to sites where the probability of poaching activities is high. Synthesis and applications: Our application of dynamic occupancy models to predict the probability of presence of poaching-related threats is novel, and explicitly considers imperfect detection of illegal activities. Based on the modelled relationships, we identify areas that are most vulnerable to poaching, and offer insights regarding how ranger patrols can be optimally deployed to reduce poaching-related threats and other illegal activities, while taking into account potential sampling biases. We show that poaching can be effectively reduced by increasing ranger patrols to areas under high risk of poaching activities, and by adding ranger posts near these sites. These findings are broadly applicable to national parks and protected areas experiencing a high degree of poaching and other illegal activities.
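As a quick plausibility check on those numbers, here is a sketch assuming (hypothetically) a logit-linear effect of annual ranger visits on the threat-extinction probability, pinned to the two reported points; it roughly, but not exactly, reproduces the reported value at 50 visits, so the published model is close to but not exactly logit-linear in visit count.

```python
# Back-of-envelope check of a hypothetical logit-linear visit effect,
# anchored to the reported points (0 visits -> 7%, 20 visits -> 20%).
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

a = logit(0.07)
b = (logit(0.20) - logit(0.07)) / 20.0
for visits in (0, 20, 50):
    print(visits, round(float(expit(a + b * visits)), 2))
# 50 visits gives ~0.60 under this assumption versus the reported 0.57.
```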
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from liver fibrosis, has been proposed. In this paper, the effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. With non-speckle signals removed, the correct tissue characteristics of fibrotic tissue could be estimated.
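The multi-Rayleigh model can be sketched as a finite mixture of Rayleigh densities; the weights and scale parameters below are illustrative, not fitted to ultrasound data.

```python
# Sketch of a two-component multi-Rayleigh amplitude model: the echo-envelope
# pdf is a mixture of Rayleigh densities with different scales (e.g. two
# tissue types); weights and scales here are illustrative.
import numpy as np
from scipy.stats import rayleigh

x = np.linspace(0.0, 5.0, 500)
weights, scales = (0.7, 0.3), (0.6, 1.4)
pdf = sum(w * rayleigh.pdf(x, scale=s) for w, s in zip(weights, scales))

# A large modeling error |empirical - model| at a sample would flag a likely
# non-speckle echo, to be removed before estimating tissue characteristics.
print(pdf[:5])
```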
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
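The sampling step can be sketched with SciPy's quasi-Monte Carlo module; the three "cross-sections" below are toy stand-ins for the hundreds of multi-group values actually perturbed, and each row of the sample would drive one lattice-code run.

```python
# Sketch of the sampling step: 500 Latin Hypercube draws of normally
# distributed cross-sections (toy 3-parameter example).
import numpy as np
from scipy.stats import norm, qmc

n_runs, n_params = 500, 3
mean = np.array([1.00, 0.85, 0.02])        # illustrative nominal values
rel_sd = np.array([0.02, 0.03, 0.05])      # illustrative relative 1-sigma

sampler = qmc.LatinHypercube(d=n_params, seed=4)
u = sampler.random(n_runs)                 # stratified uniforms in [0,1)^d
samples = norm.ppf(u, loc=mean, scale=rel_sd * mean)
print(samples.shape, samples.mean(axis=0).round(3))
```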
Multi-Level Anomaly Detection on Time-Varying Graph Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A; Collins, John P; Ferragut, Erik M
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
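A minimal sketch of the aggregation idea (not of the BTER generalization itself): log-probabilities assigned to nodes are summed into subgraph- and graph-level scores, and unusually low scores are flagged at each level. All probabilities below are made up.

```python
# Sketch of multi-scale scoring: per-node log-probabilities are aggregated to
# subgraph and graph levels, and scores far below expectation are flagged.
import numpy as np

rng = np.random.default_rng(5)
node_logp = np.log(rng.uniform(0.2, 1.0, size=(10, 30)))  # 10 graphs x 30 nodes
communities = rng.integers(0, 3, size=30)                 # 3 fixed subgraphs

graph_score = node_logp.sum(axis=1)
subgraph_score = np.stack([node_logp[:, communities == c].sum(axis=1)
                           for c in range(3)], axis=1)

# Flag graph-level scores far below the historical mean; the same rule can be
# applied per subgraph or per node to narrow focus.
z = (graph_score - graph_score.mean()) / graph_score.std()
print("anomalous graphs:", np.flatnonzero(z < -2.0))
```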
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens.
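The image-formation rule can be sketched directly: each photon contributes its position-uncertainty kernel, here an isotropic Gaussian with an assumed width, and the image is the sum of those kernels rather than a pixel histogram.

```python
# Sketch of image formation from photon events: each detected photon deposits
# a small Gaussian (its position-uncertainty pdf) on a grid; photon count and
# kernel width are illustrative.
import numpy as np

rng = np.random.default_rng(6)
photons = rng.normal(loc=[32.0, 32.0], scale=2.0, size=(200, 2))  # ~200 photons
sigma = 1.5                               # per-photon localization uncertainty

yy, xx = np.mgrid[0:64, 0:64]
image = np.zeros((64, 64))
for px, py in photons:
    image += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2.0 * sigma ** 2))

# With a symmetric Gaussian kernel of known width, the maximum-likelihood
# particle position is simply the mean of the photon positions.
print(photons.mean(axis=0))
```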
eDNAoccupancy: An R package for multi-scale occupancy modeling of environmental DNA data
Dorazio, Robert; Erickson, Richard A.
2017-01-01
In this article we describe eDNAoccupancy, an R package for fitting Bayesian, multi-scale occupancy models. These models are appropriate for occupancy surveys that include three, nested levels of sampling: primary sample units within a study area, secondary sample units collected from each primary unit, and replicates of each secondary sample unit. This design is commonly used in occupancy surveys of environmental DNA (eDNA). eDNAoccupancy allows users to specify and fit multi-scale occupancy models with or without covariates, to estimate posterior summaries of occurrence and detection probabilities, and to compare different models using Bayesian model-selection criteria. We illustrate these features by analyzing two published data sets: eDNA surveys of a fungal pathogen of amphibians and eDNA surveys of an endangered fish species.
Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders
Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael
2015-01-01
Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
This work develops and presents an algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition. Optimal pattern formation is based on decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates for the principal-component decomposition coefficients are obtained by a numerical parameter-optimization algorithm. Finally, we present experimental results showing the improvement in speech recognition achieved by the proposed optimization algorithm.
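A minimal sketch of the reduction step using scikit-learn's PCA; the patterns, the number of retained components, and the downstream recognition objective are placeholders.

```python
# Sketch of the dimensionality-reduction step: decompose training patterns
# into principal components, then optimize only the leading coefficients.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
patterns = rng.standard_normal((100, 40))   # toy feature vectors

pca = PCA(n_components=5).fit(patterns)
coeffs = pca.transform(patterns)            # 40-D problem becomes 5-D
reconstructed = pca.inverse_transform(coeffs)

# Any scalar recognition objective J(coeffs) can now be optimized over 5
# parameters instead of 40 (e.g. with scipy.optimize.minimize).
print(coeffs.shape, reconstructed.shape)
```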
Storm-based Cloud-to-Ground Lightning Probabilities and Warnings
NASA Astrophysics Data System (ADS)
Calhoun, K. M.; Meyer, T.; Kingfield, D.
2017-12-01
A new cloud-to-ground (CG) lightning probability algorithm has been developed using machine-learning methods. With storm-based inputs of Earth Networks' in-cloud lightning, Vaisala's CG lightning, multi-radar/multi-sensor (MRMS) radar-derived products including the Maximum Expected Size of Hail (MESH) and Vertically Integrated Liquid (VIL), and near-storm environmental data including lapse rate and CAPE, a random forest algorithm was trained to produce probabilities of CG lightning up to one hour in advance. As part of the Prototype Probabilistic Hazard Information experiment in the Hazardous Weather Testbed in 2016 and 2017, National Weather Service forecasters were asked to use this CG lightning probability guidance to create rapidly updating probability grids and warnings for the threat of CG lightning for 0-60 minutes. The output from forecasters was shared with end-users, including emergency managers and broadcast meteorologists, as part of an integrated warning team.
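A minimal sketch of the training step with scikit-learn; the five storm-based features mirror those named above, but all values, the target rule, and the forest size are synthetic stand-ins.

```python
# Sketch of a random-forest CG lightning probability model on hypothetical
# storm-based features (MESH, VIL, lapse rate, CAPE, recent flash rate).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
n = 10_000
X = np.column_stack([
    rng.gamma(2.0, 10.0, n),      # MESH, mm
    rng.gamma(2.0, 15.0, n),      # VIL, kg/m^2
    rng.normal(6.5, 1.0, n),      # lapse rate, K/km
    rng.gamma(2.0, 500.0, n),     # CAPE, J/kg
    rng.poisson(3.0, n),          # in-cloud flashes in last 10 min
])
y = (X[:, 0] + 0.01 * X[:, 3] + 5 * X[:, 4] + rng.normal(0, 10, n)) > 40

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
p_cg = clf.predict_proba(X[:5])[:, 1]     # 0-60 min CG probability guidance
print(p_cg.round(2))
```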
Houston, J Brian; Spialek, Matthew L; Stevens, Jordan; First, Jennifer; Mieseler, Vicky L; Pfefferbaum, Betty
2015-10-26
Introduction. On May 22, 2011 the deadliest tornado in the United States since 1947 struck Joplin, Missouri, killing 161 people, injuring approximately 1,150 individuals, and causing approximately $2.8 billion in economic losses. Methods. This study examined the mental health effects of this event through a random digit dialing sample (N = 380) of Joplin adults at approximately 6 months post-disaster (Survey 1) and a purposive convenience sample (N = 438) of Joplin adults at approximately 2.5 years post-disaster (Survey 2). For both surveys we assessed tornado experience, posttraumatic stress, depression, mental health service utilization, and sociodemographics. For Survey 2 we also assessed social support and parent report of child strengths and difficulties. Results. Probable PTSD prevalence was 12.63% at Survey 1 and 26.74% at Survey 2, while current depression prevalence was 20.82% at Survey 1 and 13.33% at Survey 2. Less education and more tornado experience were generally related to greater likelihood of experiencing probable PTSD and current depression in both surveys. Men and younger participants were more likely to report current depression at Survey 1. Low levels of social support (assessed only at Survey 2) were related to more probable PTSD and current depression. For both surveys, we observed low rates of mental health service utilization, and these rates were also low for participants reporting probable PTSD and current depression. At Survey 2 we assessed parent report of child (ages 4 to 17) strengths and difficulties and found that child difficulties were more frequent for younger children (ages 4 to 10) than older children (ages 11 to 17), and that parents reporting probable PTSD reported a greater frequency of children with borderline or abnormal difficulties. Discussion. Overall, our results indicate that long-term (multi-year) community disaster mental health monitoring, assessment, referral, outreach, and services are needed following a major disaster like the 2011 Joplin tornado.
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied by a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
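The POS computation reduces to a probability over the joint criterion distribution; a minimal Monte Carlo sketch, with made-up criteria, targets, and correlation, is:

```python
# Monte Carlo version of the Probability of Success: draw correlated
# criterion values and count the fraction falling inside the goal region.
import numpy as np

rng = np.random.default_rng(9)
mean = np.array([250.0, 0.82])                 # two illustrative criteria
cov = np.array([[400.0, 1.2],
                [1.2, 0.01]])                  # criteria are correlated
samples = rng.multivariate_normal(mean, cov, size=100_000)

goals = (samples[:, 0] >= 240.0) & (samples[:, 1] >= 0.80)
pos = goals.mean()                             # single figure of merit
print(f"Probability of Success: {pos:.3f}")
```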
Dodd, C. Kenneth; Barichivich, William J.; Johnson, Steve A.; Gunzburger Aresco, Margaret; Staiger, Jennifer S.
2018-01-01
recorded 23 amphibian species, 19 frogs and 4 salamanders. Species richness was lower than in other areas of the coastal Big Bend region to the north, perhaps due to a combination of proximity to the limits of species’ ranges, sampling techniques, times of year when sampling occurred, and variation in detection probabilities among years and regions. Amphibians occupied a wide variety of habitats and appeared tolerant of the generally acidic conditions of many of the wetlands. Small streams and the Suwannee River were less acidic and had greater conductivities and mineral concentrations than isolated ponds; concentrations of heavy metals varied and mercury was not detected. Although additional species may yet be found in LSNWR, this survey provides a historic baseline for assessing future status and trends of amphibian populations as areas adjacent to the refuge are disturbed and as restoration and multi-use management continue within its boundaries.
Public attitudes toward stuttering in Turkey: probability versus convenience sampling.
Ozdemir, R Sertan; St Louis, Kenneth O; Topbaş, Seyhun
2011-12-01
A Turkish translation of the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. A convenience sample of adults in Eskişehir, Turkey was compared with two replicates of a school-based, probability cluster sampling scheme. The two replicates of the probability sampling scheme yielded similar demographic samples, both of which were different from the convenience sample. Components of subscores on the POSHA-S were significantly different in more than half of the comparisons between convenience and probability samples, indicating important differences in public attitudes. If POSHA-S users intend to generalize to specific geographic areas, results of this study indicate that probability sampling is a better research strategy than convenience sampling. The reader will be able to: (1) discuss the difference between convenience sampling and probability sampling; (2) describe a school-based probability sampling scheme; and (3) describe differences in POSHA-S results from convenience sampling versus probability sampling.
Probabilistic Open Set Recognition
NASA Astrophysics Data System (ADS)
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases.
We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
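In the spirit of the Weibull calibration described above (the actual W-SVM and PI-SVM fitting procedures differ in detail), here is a sketch of fitting a Weibull distribution to boundary-adjacent positive scores and reading a class-inclusion probability from its CDF; all scores are synthetic.

```python
# Sketch of EVT-style score calibration: fit a Weibull to the tail of
# positive-class decision scores nearest the boundary, then map any test
# score to a probability of class inclusion via the fitted CDF.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(10)
pos_scores = rng.gamma(4.0, 0.5, 500)        # toy SVM margins of positives
tail = np.sort(pos_scores)[:100]             # extrema nearest the boundary

c, loc, scale = weibull_min.fit(tail, floc=0.0)

def p_inclusion(score):
    # Probability rises toward 1 as the score moves into the positive region.
    return weibull_min.cdf(score, c, loc=loc, scale=scale)

print(p_inclusion(np.array([0.2, 1.0, 3.0])).round(3))
```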
Ghinai, Isaac; Cook, Jackie; Hla, Teddy Tun Win; Htet, Hein Myat Thu; Hall, Tom; Lubis, Inke Nd; Ghinai, Rosanna; Hesketh, Therese; Naung, Ye; Lwin, Mya Mya; Latt, Tint Swe; Heymann, David L; Sutherland, Colin J; Drakeley, Chris; Field, Nigel
2017-01-05
The spread of artemisinin-resistant Plasmodium falciparum is a global health concern. Myanmar stands at the frontier of artemisinin-resistant P. falciparum. Myanmar also has the highest reported malaria burden in Southeast Asia; it is integral in the World Health Organization's plan to eliminate malaria in Southeast Asia, yet few epidemiological data exist for the general population in Myanmar. This cross-sectional, probability household survey was conducted in Phyu township, Bago Region (central Myanmar), during the wet season of 2013. Interviewers collected clinical and behavioural data, recorded tympanic temperature and obtained dried blood spots for malaria PCR and serology. Plasmodium falciparum positive samples were tested for genetic mutations in the K13 region that may confer artemisinin resistance. Estimated type-specific malaria PCR prevalence and seroprevalence were calculated, with regression analysis to identify risk factors for seropositivity to P. falciparum. Data were weighted to account for unequal selection probabilities. 1638 participants were sampled (500 households). Weighted PCR prevalence was low (n = 41, 2.5%) and most cases were afebrile (93%). Plasmodium falciparum was the most common species (n = 19, 1.1%) and five (26%) P. falciparum samples harboured K13 mutations. Plasmodium knowlesi was detected in 1.0% (n = 16) and Plasmodium vivax was detected in 0.4% (n = 7). Seroprevalence was 9.4% for P. falciparum and 3.1% for P. vivax. Seroconversion to P. falciparum was 0.003/year in the whole population, but 16-fold higher in men over 23 years old (LR test p = 0.016). This is the first population-based seroprevalence study from central Myanmar. A low overall prevalence was found. However, these data suggest endemic transmission continues, probably associated with behavioural risk factors amongst working-age men. Genetic mutations associated with P. falciparum artemisinin resistance, the presence of P. knowlesi and discrete demographic risk groups present opportunities and challenges for malaria control. Responses targeted to working-age men, capable of detecting sub-clinical infections, and considering all species will facilitate malaria elimination in this setting.
NASA Astrophysics Data System (ADS)
Li, Jia; Wang, Qiang; Yan, Wenjie; Shen, Yi
2015-12-01
Cooperative spectrum sensing exploits spatial diversity to improve the detection of occupied channels in cognitive radio networks (CRNs). Cooperative compressive spectrum sensing (CCSS), which exploits the sparsity of channel occupancy, further improves efficiency by reducing the number of reports without degrading detection performance. In this paper, we first propose multi-candidate orthogonal matrix matching pursuit (MOMMP) algorithms to efficiently and effectively detect occupied channels at the fusion center (FC), where multi-candidate identification and orthogonal projection are used to reduce the number of required iterations and improve the probability of exact identification, respectively. Second, two common but different approaches, based on a threshold and on the Gaussian distribution, are introduced to realize the multi-candidate identification. Moreover, to improve detection accuracy and energy efficiency, we propose the matrix construction based on shrinkage and gradient descent (MCSGD) algorithm to provide a deterministic filter coefficient matrix of low t-average coherence. Finally, several numerical simulations validate that our proposals provide satisfactory performance with a higher probability of detection, a lower probability of false alarm and less detection time.
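For orientation, the recovery problem at the FC can be sketched with plain orthogonal matching pursuit standing in for the authors' multi-candidate MOMMP variant; the dimensions and random sensing matrix are illustrative.

```python
# Sketch of compressive occupancy recovery: a sparse channel-occupancy vector
# is recovered from few compressed reports; plain OMP stands in for MOMMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(11)
n_channels, n_reports, n_occupied = 64, 20, 3

x = np.zeros(n_channels)                         # true occupancy (sparse)
x[rng.choice(n_channels, n_occupied, replace=False)] = rng.uniform(1.0, 2.0, n_occupied)
Phi = rng.standard_normal((n_reports, n_channels)) / np.sqrt(n_reports)
y = Phi @ x                                      # compressed reports

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_occupied).fit(Phi, y)
print(np.flatnonzero(x), np.flatnonzero(omp.coef_))   # true vs recovered support
```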
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications: Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
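The two-scale structure can be made concrete with a small likelihood sketch; the large-scale probability psi, the small-scale probability theta, and the method-specific detection probabilities below are arbitrary values, not the paper's estimates.

```python
# Sketch of the two-scale detection model: a unit is used with probability
# psi, the species is locally present at a sampled station with probability
# theta, and each method m then detects it with its own probability p[m].

psi, theta = 0.6, 0.5
p = {"trap": 0.3, "track_plate": 0.4}      # method-specific detection probs

def prob_history(detections):
    """P(detection history at one station), e.g. {'trap': 0, 'track_plate': 1}."""
    p_given_present = 1.0
    for m, d in detections.items():
        p_given_present *= p[m] if d else (1.0 - p[m])
    all_zero = not any(detections.values())
    # Condition on local presence, then on large-scale use of the unit.
    local = theta * p_given_present + (1.0 - theta) * all_zero
    return psi * local + (1.0 - psi) * all_zero

print(prob_history({"trap": 0, "track_plate": 1}))   # one method detected
print(prob_history({"trap": 0, "track_plate": 0}))   # nothing detected
```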
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
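The building block of the approach can be sketched by estimating a vertical transition probability matrix from a single borehole facies log; the three-facies log below is synthetic.

```python
# Sketch of the forward model's building block: estimate a vertical transition
# probability matrix among three hydrofacies from a borehole facies log.
import numpy as np

rng = np.random.default_rng(12)
log = rng.integers(0, 3, size=400)              # toy facies column, codes 0..2

counts = np.zeros((3, 3))
for a, b in zip(log[:-1], log[1:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic transitions

# Volumetric proportions; mean facies run length is ~ 1/(1 - T[i, i]).
props = np.bincount(log, minlength=3) / log.size
print(T.round(2), props.round(2))
```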
Use of Mental Health Services in Transition Age Youth with Bipolar Disorder
Hower, Heather; Case, Brady G.; Hoeppner, Bettina; Yen, Shirley; Goldstein, Tina; Goldstein, Benjamin; Birmaher, Boris; Weinstock, Lauren; Topor, David; Hunt, Jeffrey; Strober, Michael; Ryan, Neal; Axelson, David; Gill, Mary Kay; Keller, Martin B.
2013-01-01
Objectives: There is concern that treatment of serious mental illness in the United States declines precipitously following legal emancipation at age 18 years and transition from specialty youth clinical settings. We examined age transition effects on treatment utilization in a sample of youth with bipolar disorder. Methods: Youth with bipolar disorder (N = 413) 7–18 years of age were assessed approximately twice per year (mean interval 8.2 months) for at least 4 years. Annual use of any individual, group, and family therapy, psychopharmacology visits, and hospitalization at each year of age, and monthly use from ages 17 through 19 years, were examined. The effect of age transition to 18 years on monthly visit probability was tested in the subsample with observed transitions (n = 204). Putative sociodemographic moderators and the influence of clinical course were assessed. Results: Visit probabilities for the most common modalities (psychopharmacology, individual psychotherapy, and home-based care) generally fell from childhood to young adulthood. For example, the annual probability of at least one psychopharmacology visit was 97% at age 8, 75% at age 17, 60% at age 19, and 46% by age 22. Treatment probabilities fell in transition-age youth from age 17 through 19, but a specific transition effect at age 18 was not found. Declines did not vary based on sociodemographic characteristics and were not explained by changing severity of the bipolar illness or functioning. Conclusions: Mental health treatment declined with age in this sample of youth with bipolar disorder, but reductions were not concentrated during or after the transition to age 18 years. Declines were unrelated to symptom severity or impairment. PMID:24241500
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
McGrady, Michael J.; Hines, James; Rollie, Chris; Smith, George D.; Morton, Elise R.; Moore, Jennifer F.; Mearns, Richard M.; Newton, Ian; Murillo-Garcia, Oscar E.; Oli, Madan K.
2017-01-01
Organochlorine pesticides disrupted reproduction and killed many raptorial birds, and contributed to population declines during the 1940s to 1970s. We sought to discern whether and to what extent territory occupancy and breeding success changed from the pesticide era to recent years in a resident population of Peregrine Falcons Falco peregrinus in southern Scotland using long-term (1964–2015) field data and multi-state, multi-season occupancy models. Peregrine territories that were occupied with successful reproduction in one year were much more likely to be occupied and experience reproductive success in the following year, compared with those that were unoccupied or occupied by unsuccessful breeders in the previous year. Probability of territory occupancy differed between territories in the eastern and western parts of the study area, and varied over time. The probability of occupancy of territories that were unoccupied and those that were occupied with successful reproduction during the previous breeding season generally increased over time, whereas the probability of occupancy of territories that were occupied after failed reproduction decreased. The probability of reproductive success (conditional on occupancy) in territories that were occupied during the previous breeding season increased over time. Specifically, for territories that had been successful in the previous year, the probability of occupancy as well as reproductive success increased steadily over time; these probabilities were substantially higher in recent years than earlier, when the population was still exposed to direct or residual effects of organochlorine pesticides. These results are consistent with the hypothesis that progressive reduction, followed by a complete ban, in the use of organochlorine pesticides improved reproductive success of Peregrines in southern Scotland. Differences in the temporal pattern of probability of reproductive success between south-eastern and south-western Scotland suggest that the effect of organochlorine pesticides on Peregrine reproductive success and/or the recovery from pesticide effects varied geographically and was possibly affected by other factors such as persecution.
Shirani, Kiana; Ataei, Behrouz; Roshandel, Fardad
2016-01-01
Background: One of the most common causes of hospital-acquired secondary infections in hospitalized patients is Pseudomonas aeruginosa. The aim of this study is to evaluate the expression of IMP and VIM in Pseudomonas aeruginosa strains (carbapenem resistant and producing the MBL enzyme) in patients with secondary immunodeficiency. Materials and Methods: In a cross-sectional study, 96 patients with secondary immunodeficiency hospitalized in the Al-Zahra hospital were selected. Carbapenem-resistant strains were isolated and the modified Hodge test was performed to confirm the presence of the metallo-carbapenemase enzyme. Under standard conditions, samples were sent to the central laboratory for multiplex PCR investigation of nosocomial infection. Results: Of 96 samples, 28.1% were IMP positive, 5.2% VIM positive, and 3.1% both VIM and IMP positive. The prevalence of multidrug resistance in the IMP- and/or VIM-negative samples was 29%, while all 5 VIM-positive samples were multidrug resistant. The prevalence of multidrug resistance was 96.3% in IMP-positive samples and 100% in samples positive for both IMP and VIM. According to Fisher’s test, the prevalence of multidrug resistance differed significantly by gene expression (P < 0.001). Conclusion: Based on the results of this study, it can be concluded that a significant percentage of patients with secondary immunodeficiency who suffer nosocomial infections with multidrug resistance, especially Pseudomonas aeruginosa, are probably positive for MBL-producing genes. Therefore the cause of infection should be considered in the hospital care system to identify their features, the presence of genes involved in the development of multidrug resistance, and antibiotic therapy. PMID:27563634
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
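As a rough sketch of the reliability metric at the core of this design loop, the snippet below estimates the probability of violating a requirement by plain Monte Carlo sampling; the paper's hybrid deterministic-sampling/asymptotic estimator is more efficient, and the second-order plant, gain parametrization, and damping requirement here are illustrative assumptions.

```python
import numpy as np

def violation_probability(gains, n_samples=20_000, seed=0):
    """Monte Carlo estimate of P(design requirement violated) under
    probabilistic parametric uncertainty (a plain-sampling stand-in
    for the hybrid estimator described above)."""
    rng = np.random.default_rng(seed)
    k1, k2 = gains
    wn = rng.normal(2.0, 0.2, n_samples)      # uncertain natural frequency
    zeta = rng.normal(0.3, 0.05, n_samples)   # uncertain damping ratio
    # closed loop: s^2 + (2*zeta*wn + k2)*s + (wn^2 + k1)
    zeta_cl = (2 * zeta * wn + k2) / (2 * np.sqrt(wn**2 + k1))
    return np.mean(zeta_cl < 0.7)             # requirement: damping >= 0.7

print(violation_probability((1.0, 1.5)))
```

A reliability-based design would wrap an estimate of this kind in an optimizer over the controller gains.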
A country-wide probability sample of public attitudes toward stuttering in Portugal.
Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T
2017-06-01
Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist. Clear reasons for these differences remain obscure. No published research exists on public attitudes toward stuttering in Portugal, nor has any study explored stuttering attitudes with a representative sample of an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Witteveen, Jeroen A. S.; Bijl, Hester
2009-10-01
The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
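A minimal sketch of the wavelet pre-processing step, assuming PyWavelets: a sampled two-frequency signal is decomposed, and each level is reconstructed separately to approximate the single-frequency components that are then interpolated at constant phase. The wavelet family ("db8") and level count are assumptions for illustration.

```python
import numpy as np
import pywt

t = np.linspace(0.0, 2.0, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# multilevel discrete wavelet decomposition of the sampled response
coeffs = pywt.wavedec(signal, "db8", level=4)

# rebuild one component per level by zeroing all other coefficient arrays;
# each component approximates one frequency band of the response
components = []
for k in range(len(coeffs)):
    kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db8"))
```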
Ecological Condition of Streams in Eastern and Southern NevadaEPA R-EMAP Muddy-Virgin River Project
The report presents data collected during a one-year study period beginning in May 2000. Sampling sites were selected using a probability-based design (as opposed to subjectively selected sites) based on the USEPA River Reach File version 3 (RF3). About 37 sites were sampled. ...
Rosero, Eric B; Peshock, Ronald M; Khera, Amit; Clagett, Patrick; Lo, Hao; Timaran, Carlos H
2011-04-01
Reference values and age-related changes of the wall thickness of the abdominal aorta have not been described in the general population. We characterized age-, race-, and gender-specific distributions, and yearly rates of change of mean aortic wall thickness (MAWT), and associations between MAWT and cardiovascular risk factors in a multi-ethnic population-based probability sample. Magnetic resonance imaging measurements of MAWT were performed on 2466 free-living white, black, and Hispanic adult subjects. MAWT race/ethnicity- and gender-specific percentile values across age were estimated using regression analyses. MAWT was greater in men than in women and increased linearly with age in all the groups and across all the percentiles. Hispanic women had the thinnest and black men the thickest aortas. Black men had the highest and white women the lowest age-related MAWT increase. Age, gender, ethnicity, smoking status, systolic blood pressure, low-density lipoprotein-cholesterol levels, high-density lipoprotein-cholesterol levels, and fasting glucose levels were independent predictors of MAWT. Age, gender, and racial/ethnic differences in MAWT distributions exist in the general population. Such differences should be considered in future investigations assessing aortic atherosclerosis and the effects of anti-atherosclerotic therapies. Published by Mosby, Inc.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-05-01
From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
Skill of Ensemble Seasonal Probability Forecasts
NASA Astrophysics Data System (ADS)
Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk
2010-05-01
In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)). The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead-time of 14 months. The nature of this skill is discussed, and possibilities for application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve Root-Mean-Square-Error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
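The kernel-dressing-and-blending step can be sketched as below: each ensemble member is dressed with a Gaussian kernel, and the resulting density is blended with a climatological distribution. The kernel width, blend weight, and all numerical values are hypothetical; in practice they would be fitted, e.g., by minimizing a proper skill score.

```python
import numpy as np
from scipy.stats import norm

def dressed_blend_pdf(x, ensemble, sigma, clim_mean, clim_std, alpha):
    """Gaussian kernel dressing of an ensemble, blended with climatology."""
    kernels = norm.pdf(x[:, None], loc=ensemble[None, :], scale=sigma)
    p_ens = kernels.mean(axis=1)                  # dressed ensemble density
    p_clim = norm.pdf(x, loc=clim_mean, scale=clim_std)
    return alpha * p_ens + (1.0 - alpha) * p_clim

x = np.linspace(24.0, 30.0, 601)           # e.g. Nino 3.4 temperature (deg C)
ens = np.array([26.8, 27.1, 27.3, 27.9])   # hypothetical member forecasts
pdf = dressed_blend_pdf(x, ens, sigma=0.4, clim_mean=26.5, clim_std=0.9,
                        alpha=0.7)
```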
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, it has become a trend to improve the performance of dim target detection by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from the digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from different spectral images. A top-hat filter is used to detect dim targets from the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on clutter background images.
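A rough sketch of the salient-graph construction and fusion with OpenCV, covering only the multi-direction Gabor branch (the multi-scale contrast filters are omitted); the kernel parameters and the synthetic band images are illustrative assumptions.

```python
import cv2
import numpy as np

def salient_graph(img):
    """Fuse multi-direction Gabor filter responses into one saliency map."""
    sal = np.zeros(img.shape, dtype=np.float32)
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        k = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                               lambd=8.0, gamma=0.5)
        resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, k)
        sal = np.maximum(sal, np.abs(resp))               # max over directions
    return sal

# stand-ins for two co-registered spectral band images
bands = [np.random.default_rng(s).integers(0, 255, (128, 128), dtype=np.uint8)
         for s in (0, 1)]

# maximum-salience fusion across bands, then top-hat filtering for dim targets
fused = np.maximum.reduce([salient_graph(b) for b in bands])
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
targets = cv2.morphologyEx(fused, cv2.MORPH_TOPHAT, se)
```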
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. CVaR treats the uncertainties of contamination injection as a probability distribution function and captures low-probability extreme events, i.e., the extreme losses that occur in the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) while also minimizing the two other main criteria of optimal sensor placement: probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion to the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that covers approximately all regions of the WDS. Optimal values of the CVaR of affected population and detection time, as well as the probability of undetected events, for the best solution are 17,055 persons, 31 min, and 0.045%, respectively. The results for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme losses in a WDS.
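The CVaR criterion itself reduces to a simple tail average over simulated losses. A minimal sketch, with a hypothetical loss distribution standing in for the simulated affected-population losses:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)        # Value at Risk threshold
    return losses[losses >= var].mean()

# hypothetical affected-population losses from simulated injection events
rng = np.random.default_rng(1)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)
print(f"VaR95 = {np.quantile(losses, 0.95):,.0f}  CVaR95 = {cvar(losses):,.0f}")
```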
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty of attaining reliable estimates, we advise caution should such a mark-recapture effort be initiated. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some marking techniques violate the assumptions of mark-recapture methodology when used on bats.
Protein Inference from the Integration of Tandem MS Data and Interactome Networks.
Zhong, Jiancheng; Wang, Jianxing; Ding, Xiaojun; Zhang, Zhen; Li, Min; Wu, Fang-Xiang; Pan, Yi
2017-01-01
Since proteins are digested into a mixture of peptides in the preprocessing step of tandem mass spectrometry (MS), it is difficult to determine which specific protein a shared peptide belongs to. In recent studies, besides tandem MS data and peptide identification information, some other information is exploited to infer proteins. Different from the methods which first use only tandem MS data to infer proteins and then use network information to refine them, this study proposes a protein inference method named TMSIN, which uses interactome networks directly. As two interacting proteins should co-exist, it is reasonable to assume that if one of the interacting proteins is confidently inferred in a sample, its interacting partners should have a high probability in the same sample, too. Therefore, we can use the neighborhood information of a protein in an interactome network to adjust the probability that a shared peptide belongs to the protein. In TMSIN, a multi-weighted graph is constructed by incorporating the bipartite graph with interactome network information, where the bipartite graph is built with the peptide identification information. Based on multi-weighted graphs, TMSIN adopts an iterative workflow to infer proteins. At each iterative step, the probability that a shared peptide belongs to a specific protein is calculated using Bayes' law, based on the neighbor-protein support scores of each protein mapped by the shared peptides. We carried out experiments on yeast data and human data to evaluate the performance of TMSIN in terms of ROC, q-value, and accuracy. The experimental results show that the AUC scores yielded by TMSIN are 0.742 and 0.874 on the yeast dataset and human dataset, respectively, and TMSIN yields the maximum number of true positives when the q-value is less than or equal to 0.05. The overlap analysis shows that TMSIN is an effective complementary approach for protein inference.
Combining band recovery data and Pollock's robust design to model temporary and permanent emigration
Lindberg, M.S.; Kendall, W.L.; Hines, J.E.; Anderson, M.G.
2001-01-01
Capture-recapture models are widely used to estimate demographic parameters of marked populations. Recently, this statistical theory has been extended to modeling dispersal of open populations. Multistate models can be used to estimate movement probabilities among subdivided populations if multiple sites are sampled. Frequently, however, sampling is limited to a single site. Models described by Burnham (1993, in Marked Individuals in the Study of Bird Populations, 199-213), which combined open population capture-recapture and band-recovery models, can be used to estimate permanent emigration when sampling is limited to a single population. Similarly, Kendall, Nichols, and Hines (1997, Ecology 78, 563-578) developed models to estimate temporary emigration under Pollock's (1982, Journal of Wildlife Management 46, 757-760) robust design. We describe a likelihood-based approach to simultaneously estimate temporary and permanent emigration when sampling is limited to a single population. We use a sampling design that combines the robust design and recoveries of individuals obtained immediately following each sampling period. We present a general form for our model where temporary emigration is a first-order Markov process, and we discuss more restrictive models. We illustrate these models with an analysis of data on marked Canvasback ducks. Our analysis indicates that the probability of permanent emigration for adult female Canvasbacks was 0.193 (SE = 0.082) and that birds that were present at the study area in year i - 1 had a higher probability of presence in year i than birds that were not present in year i - 1.
van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T
2012-10-01
Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.
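Treating lethal strikes as a homogeneous Poisson process (an assumption consistent with the return-period framing above, not stated by the authors), the reported return periods convert directly into probabilities over a planning horizon:

```python
import math

def p_at_least_one(return_period_years, horizon_years):
    """P(at least one strike within the horizon) for a Poisson process."""
    return 1.0 - math.exp(-horizon_years / return_period_years)

for label, tau in [("pre-ATBA (lower bound)", 0.775),
                   ("pre-ATBA (upper bound)", 2.07),
                   ("post-ATBA", 41.0)]:
    print(f"{label}: P(>=1 lethal strike in 10 yr) = "
          f"{p_at_least_one(tau, 10.0):.3f}")
```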
She, Yunlang; Zhao, Lilan; Dai, Chenyang; Ren, Yijiu; Jiang, Gening; Xie, Huikang; Zhu, Huiyuan; Sun, Xiwen; Yang, Ping; Chen, Yongbing; Shi, Shunbin; Shi, Weirong; Yu, Bing; Xie, Dong; Chen, Chang
2017-11-01
To develop and validate a nomogram to estimate the pretest probability of malignancy in Chinese patients with solid solitary pulmonary nodule (SPN). A primary cohort of 1798 patients with pathologically confirmed solid SPNs after surgery was retrospectively studied at five institutions from January 2014 to December 2015. A nomogram based on independent prediction factors of malignant solid SPN was developed. Predictive performance also was evaluated using the calibration curve and the area under the receiver operating characteristic curve (AUC). The mean age of the cohort was 58.9 ± 10.7 years. In univariate and multivariate analysis, age; history of cancer; the log base 10 transformations of serum carcinoembryonic antigen value; nodule diameter; the presence of spiculation, pleural indentation, and calcification remained the predictive factors of malignancy. A nomogram was developed, and the AUC value (0.85; 95%CI, 0.83-0.88) was significantly higher than other three models. The calibration cure showed optimal agreement between the malignant probability as predicted by nomogram and the actual probability. We developed and validated a nomogram that can estimate the pretest probability of malignant solid SPNs, which can assist clinical physicians to select and interpret the results of subsequent diagnostic tests. © 2017 Wiley Periodicals, Inc.
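A nomogram is a graphical rendering of a fitted regression model. A minimal sketch of the underlying step, assuming a logistic model over the seven reported predictors and scoring discrimination with AUC; the data here are entirely synthetic placeholders, not the study cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# columns: age, cancer history, log10(CEA), diameter, spiculation,
# pleural indentation, calcification -- synthetic placeholder data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))
logit = X @ np.array([0.8, 0.4, 0.9, 0.7, 0.5, 0.4, -0.3]) - 0.2
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]     # pretest probability of malignancy
print(f"AUC = {roc_auc_score(y, prob):.2f}")
```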
Eash, David A.
2015-01-01
An examination was conducted to understand why the 1987 single-variable RREs seem to provide better accuracy and less bias than either the 2013 multi- or single-variable RREs. A comparison of 1-percent annual exceedance-probability regression lines for hydrologic regions 1-4 from the 1987 single-variable RREs and for flood regions 1-3 from the 2013 single-variable RREs indicates that the 1987 single-variable regional-regression lines generally have steeper slopes and lower discharges when compared to 2013 single-variable regional-regression lines for corresponding areas of Iowa. The combination of the definition of hydrologic regions, the lower discharges, and the steeper slopes of regression lines associated with the 1987 single-variable RREs seems to provide better accuracy and less bias when compared to the 2013 multi- or single-variable RREs, particularly for drainage areas less than 2 mi2 and also for some drainage areas between 2 and 20 mi2. The 2013 multi- and single-variable RREs are considered to provide better accuracy and less bias for larger drainage areas. Results of this study indicate that additional research is needed to address the curvilinear relation between drainage area and AEPDs for areas of Iowa.
Selections from 2017: Computers Help Us Map Our Home
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. Machine-Learned Identification of RR Lyrae Stars from Sparse, Multi-Band Data: The PS1 Sample (published April 2017). Main takeaway: A sample of RR Lyrae variable stars was built from the Pan-STARRS1 (PS1) survey by a team led by Branimir Sesar (Max Planck Institute for Astronomy, Germany). The sample of 45,000 stars represents the widest (three-fourths of the sky) and deepest (reaching 120 kpc) sample of RR Lyrae stars to date. Why it's interesting: It's challenging to understand the overall shape and behavior of our galaxy because we're stuck on the inside of it. RR Lyrae stars are a useful tool for this purpose: they can be used as tracers to map out the Milky Way's halo. The authors' large sample of RR Lyrae stars from PS1, combined with proper-motion measurements from Gaia and radial-velocity measurements from multi-object spectroscopic surveys, could become the premier source for studying the structure, kinematics, and gravitational potential of our galaxy's outskirts. How they were found: The black dots show the distribution of the 45,000 probable RR Lyrae stars in the authors' sample. [Sesar et al. 2017] The 45,000 stars in this sample were selected not by humans, but by computer. The authors used machine-learning algorithms to examine the light curves in the Pan-STARRS1 sample and identify the characteristic brightness variations of RR Lyrae stars lying in the galactic halo. These techniques resulted in a very pure and complete sample, and the authors suggest that this approach may translate well to other sparse, multi-band data sets such as that from the upcoming Large Synoptic Survey Telescope (LSST) galactic plane sub-survey. Citation: Branimir Sesar et al 2017 AJ 153 204. doi:10.3847/1538-3881/aa661b
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the overall uncertainty, we quantify uncertainty with probability distributions and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution that evaluates the synthetic uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, and the gradual integration of multi-source uncertainty simulates this propagation, with Bayesian inference accomplishing the uncertainty updating. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior satisfies the constraints supplied by the given information with minimum prejudice. The resulting posterior distribution represents the combined impact of all uncertain factors on the spatial structure of the geological model. The framework thus provides a solution for evaluating the combined impact of multi-source uncertainties on a geological model and a way to study the uncertainty propagation mechanism in geological modeling.
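A minimal sketch of the sequential Bayesian integration on a discretized parameter, with a Gaussian maximum-entropy prior (the maximum-entropy distribution under mean and variance constraints) and two independent likelihoods standing in for borehole and geophysical information; all numbers are hypothetical.

```python
import numpy as np

# discretized depth (m) of a geological interface at one location
z = np.linspace(90.0, 110.0, 201)
dz = z[1] - z[0]

# maximum-entropy prior under mean/variance constraints is Gaussian
prior = np.exp(-0.5 * ((z - 100.0) / 3.0) ** 2)

# likelihoods from two independent uncertain sources
lik_borehole = np.exp(-0.5 * ((z - 101.5) / 1.0) ** 2)
lik_seismic = np.exp(-0.5 * ((z - 99.0) / 2.0) ** 2)

# gradual integration of multi-source uncertainty into the posterior
posterior = prior * lik_borehole * lik_seismic
posterior /= posterior.sum() * dz
print(f"posterior mean depth = {(z * posterior).sum() * dz:.2f} m")
```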
Methods and Model Dependency of Extreme Event Attribution: The 2015 European Drought
NASA Astrophysics Data System (ADS)
Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Vautard, Robert; van Oldenborgh, Geert J.; Wilcox, Laura; Seneviratne, Sonia I.
2017-10-01
Science on the role of anthropogenic influence on extreme weather events, such as heatwaves or droughts, has evolved rapidly in recent years. The approach of "event attribution" compares the occurrence-probability of an event in the present, factual climate with its probability in a hypothetical, counterfactual climate without human-induced climate change. Several methods can be used for event attribution, based on climate model simulations and observations, and usually researchers assess only a subset of methods and data sources. Here, we explore the role of methodological choices for the attribution of the 2015 meteorological summer drought in Europe. We present contradictory conclusions on the relevance of human influence as a function of the chosen data source and event attribution methodology. Assessments using the maximum number of models and counterfactual climates with pre-industrial greenhouse gas concentrations point to an enhanced drought risk in Europe. However, other evaluations show contradictory evidence. These results highlight the need for a multi-model and multi-method framework in event attribution research, especially for events with a low signal-to-noise ratio and high model dependency such as regional droughts.
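The core comparison in event attribution is often summarized as a probability ratio between factual and counterfactual climates. A minimal sketch with synthetic ensembles (the distributions, threshold, and values are illustrative, not the paper's data):

```python
import numpy as np

def probability_ratio(factual, counterfactual, threshold):
    """PR = P(event | factual) / P(event | counterfactual); for drought,
    the 'event' is precipitation at or below the threshold."""
    p1 = np.mean(np.asarray(factual) <= threshold)
    p0 = np.mean(np.asarray(counterfactual) <= threshold)
    return p1 / p0 if p0 > 0 else float("inf")

rng = np.random.default_rng(42)
fact = rng.normal(180.0, 40.0, 5000)    # summer precipitation, present climate
cfact = rng.normal(200.0, 40.0, 5000)   # pre-industrial counterfactual
print(f"PR = {probability_ratio(fact, cfact, threshold=120.0):.2f}")
```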
Conditional, Time-Dependent Probabilities for Segmented Type-A Faults in the WGCEP UCERF 2
Field, Edward H.; Gupta, Vipin
2008-01-01
This appendix presents elastic-rebound-theory (ERT) motivated time-dependent probabilities, conditioned on the date of last earthquake, for the segmented type-A fault models of the 2007 Working Group on California Earthquake Probabilities (WGCEP). These probabilities are included as one option in the WGCEP's Uniform California Earthquake Rupture Forecast 2 (UCERF 2), with the other options being time-independent Poisson probabilities and an 'Empirical' model based on observed seismicity rate changes. A more general discussion of the pros and cons of all methods for computing time-dependent probabilities, as well as the justification of those chosen for UCERF 2, are given in the main body of this report (and the 'Empirical' model is also discussed in Appendix M). What this appendix addresses is the computation of conditional, time-dependent probabilities when both single- and multi-segment ruptures are included in the model. Computing conditional probabilities is relatively straightforward when a fault is assumed to obey strict segmentation in the sense that no multi-segment ruptures occur (e.g., WGCEP (1988, 1990) or see Field (2007) for a review of all previous WGCEPs; from here we assume basic familiarity with conditional probability calculations). However, and as we'll see below, the calculation is not straightforward when multi-segment ruptures are included, in essence because we are attempting to apply a point-process model to a non point process. The next section gives a review and evaluation of the single- and multi-segment rupture probability-calculation methods used in the most recent statewide forecast for California (WGCEP UCERF 1; Petersen et al., 2007). We then present results for the methodology adopted here for UCERF 2. We finish with a discussion of issues and possible alternative approaches that could be explored and perhaps applied in the future. A fault-by-fault comparison of UCERF 2 probabilities with those of previous studies is given in the main part of this report.
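For a single strictly segmented fault, the conditional probability is the standard renewal-model calculation sketched below; a lognormal recurrence model is used here purely for illustration (UCERF 2 uses a Brownian Passage Time distribution), and the recurrence parameters are hypothetical.

```python
import math
from scipy.stats import lognorm

def conditional_prob(t_since, window, mean_ri=150.0, cov=0.5):
    """P(rupture within `window` yr | `t_since` yr since the last event),
    under a lognormal renewal model."""
    sigma = math.sqrt(math.log(1.0 + cov**2))   # shape from aperiodicity (COV)
    mu = math.log(mean_ri) - 0.5 * sigma**2     # preserves the mean interval
    F = lognorm(s=sigma, scale=math.exp(mu)).cdf
    return (F(t_since + window) - F(t_since)) / (1.0 - F(t_since))

print(f"P(rupture in 30 yr | 100 yr elapsed) = "
      f"{conditional_prob(100.0, 30.0):.3f}")
```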
NASA Astrophysics Data System (ADS)
Wang, Rui; Zhang, Jiquan; Guo, Enliang; Alu, Si; Li, Danjun; Ha, Si; Dong, Zhenhua
2018-02-01
Along with global warming, drought disasters are occurring more frequently and are seriously affecting normal life and food security in China. Drought risk assessments are necessary to provide support for local governments. This study aimed to establish an integrated drought risk model based on the relation curve of drought joint probabilities and drought losses of multi-hazard-affected bodies. First, drought characteristics, including duration and severity, were classified using the 1953-2010 precipitation anomaly in the Taoerhe Basin based on run theory, and their marginal distributions were identified by exponential and Gamma distributions, respectively. Then, drought duration and severity were related to construct a joint probability distribution based on the copula function. We used the EPIC (Environmental Policy Integrated Climate) model to simulate maize yield and historical data to calculate the loss rates of agriculture, industry, and animal husbandry in the study area. Next, we constructed vulnerability curves. Finally, the spatial distributions of drought risk for 10-, 20-, and 50-year return periods were expressed using inverse distance weighting. Our results indicate that the spatial distributions of the three return periods are consistent. The highest drought risk is in Ulanhot, and the duration and severity there were both highest. This means that higher drought risk corresponds to longer drought duration and larger drought severity, thus providing useful information for drought and water resource management. For 10-, 20-, and 50-year return periods, the drought risk values ranged from 0.41 to 0.53, 0.45 to 0.59, and 0.50 to 0.67, respectively. Therefore, when the return period increases, the drought risk increases.
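The copula step can be sketched as below with a Gumbel copula joining exponential duration and gamma severity marginals; the copula family and all parameter values are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np
from scipy.stats import expon, gamma

def gumbel_copula(u, v, theta=2.0):
    """Gumbel copula CDF C(u, v) for theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta
                     + (-np.log(v)) ** theta) ** (1.0 / theta)))

u = expon(scale=4.0).cdf(6.0)           # P(duration <= 6 months), hypothetical
v = gamma(a=2.0, scale=1.5).cdf(5.0)    # P(severity <= 5), hypothetical

# 'AND' case: probability that duration and severity both exceed the levels
joint_exceed = 1.0 - u - v + gumbel_copula(u, v)
print(f"joint exceedance = {joint_exceed:.4f}, "
      f"return period = {1.0 / joint_exceed:.1f} drought events")
```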
Random-Forest Classification of High-Resolution Remote Sensing Images and Ndsm Over Urban Areas
NASA Astrophysics Data System (ADS)
Sun, X. F.; Lin, X. G.
2017-09-01
As an intermediate step between raw remote sensing data and digital urban maps, remote sensing data classification has been a challenging and long-standing research problem in the community of remote sensing. In this work, an effective classification method is proposed for classifying high-resolution remote sensing data over urban areas. Starting from high-resolution multi-spectral images and 3D geometry data, our method proceeds in three main stages: feature extraction, classification, and classified result refinement. First, we extract color, vegetation index and texture features from the multi-spectral image and compute the height, elevation texture and differential morphological profile (DMP) features from the 3D geometry data. Then, in the classification stage, multiple random forest (RF) classifiers are trained separately and combined to form an RF ensemble that estimates each sample's category probabilities. Finally, the probabilities, along with the feature-importance indicator output by the RF ensemble, are used to construct a fully connected conditional random field (FCCRF) graph model, by which the classification results are refined through mean-field-based statistical inference. Experiments on the ISPRS Semantic Labeling Contest dataset show that our proposed 3-stage method achieves 86.9% overall accuracy on the test data.
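A minimal sketch of the ensemble-probability stage, assuming scikit-learn: several random forests are trained and their per-class probabilities averaged, which is what the CRF refinement (omitted here) would then consume. The feature vectors and labels are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-ins: per-pixel feature vectors (color, NDVI, texture,
# height, DMP, ...) and ground-truth labels over 6 classes
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(2000, 12)), rng.integers(0, 6, 2000)
X_test = rng.normal(size=(500, 12))

# train several RF classifiers and average their class probabilities
ensemble = [RandomForestClassifier(n_estimators=100, random_state=s)
            .fit(X_train, y_train) for s in range(3)]
proba = np.mean([clf.predict_proba(X_test) for clf in ensemble], axis=0)
labels = proba.argmax(axis=1)   # per-sample category before CRF refinement
```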
León-Ortega, Mario; Jiménez-Franco, María V; Martínez, José E; Calvo, José F
2017-01-01
Modelling territorial occupancy and reproductive success is a key issue for better understanding the population dynamics of territorial species. This study aimed to investigate these ecological processes in a Eurasian Eagle-owl (Bubo bubo) population in south-eastern Spain during a seven-year period. A multi-season, multi-state modelling approach was followed to estimate the probabilities of occupancy and reproductive success in relation to previous state, time and habitat covariates, and accounting for imperfect detection. The best estimated models showed past breeding success in the territories to be the most important factor determining a high probability of reoccupation and reproductive success in the following year. In addition, alternative occupancy models suggested the positive influence of crops on the probability of territory occupation. By contrast, the best reproductive model revealed strong interannual variations in the rates of breeding success, which may be related to changes in the abundance of the European Rabbit, the main prey of the Eurasian Eagle-owl. Our models also estimated the probabilities of detecting the presence of owls in a given territory and the probability of detecting evidence of successful reproduction. Estimated detection probabilities were high throughout the breeding season, decreasing in time for unsuccessful breeders but increasing for successful breeders. The probability of detecting reproductive success increased with time, being close to one in the last survey. These results suggest that reproduction failure in the early stages of the breeding season is a determinant factor in the probability of detecting occupancy and reproductive success. PMID:28399175
Capture-recapture analysis for estimating manatee reproductive rates
Kendall, W.L.; Langtimm, C.A.; Beck, C.A.; Runge, M.C.
2004-01-01
Modeling the life history of the endangered Florida manatee (Trichechus manatus latirostris) is an important step toward understanding its population dynamics and predicting its response to management actions. We developed a multi-state mark-resighting model for data collected under Pollock's robust design. This model estimates breeding probability conditional on a female's breeding state in the previous year; assumes sighting probability depends on breeding state; and corrects for misclassification of a cow with first-year calf, by estimating conditional sighting probability for the calf. The model is also appropriate for estimating survival and unconditional breeding probabilities when the study area is closed to temporary emigration across years. We applied this model to photo-identification data for the Northwest and Atlantic Coast populations of manatees, for years 1982-2000. With rare exceptions, manatees do not reproduce in two consecutive years. For those without a first-year calf in the previous year, the best-fitting model included constant probabilities of producing a calf for the Northwest (0.43, SE = 0.057) and Atlantic (0.38, SE = 0.045) populations. The approach we present to adjust for misclassification of breeding state could be applicable to a large number of marine mammal populations.
Need States Based on Eating Occasions Experienced by Midlife Women
ERIC Educational Resources Information Center
Vue, Houa; Degeneffe, Dennis; Reicks, Marla
2008-01-01
Objective: To identify a comprehensive set of distinct "need states" based on the eating occasions experienced by midlife women. Design: Series of 7 focus group interviews. Setting: Meeting room on a university campus. Participants: A convenience sample of 34 multi-ethnic women (mean age = 46 years). Phenomenon of Interest: Descriptions of eating…
ERIC Educational Resources Information Center
Munoz, Eric; And Others
The health conditions and health status of Hispanic Americans will assume increased importance as their population increases. The goal of this book of charts is to present data from the Hispanic Health and Nutrition Examination Survey (HHANES) on Puerto Ricans. The Puerto Rican HHANES sampling procedure is a multi-stage probability sample of…
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
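A minimal sketch of the sampling step, assuming one common construction of a structurally random matrix (random sign flips and a permutation, followed by an orthonormal transform and random row selection); the signal here is a synthetic DCT-sparse stand-in for a fused-image column.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(7)
n, m = 256, 96                        # signal length, number of measurements

coeffs = np.zeros(n)
coeffs[[10, 50, 200]] = [1.0, -0.7, 0.4]
signal = idct(coeffs, norm="ortho")   # signal that is sparse under the DCT

# structurally random matrix applied implicitly:
# randomize (signs + permutation), transform, then subsample rows
perm = rng.permutation(n)
signs = rng.choice([-1.0, 1.0], size=n)
mixed = dct(signs * signal[perm], norm="ortho")
y = mixed[rng.choice(n, size=m, replace=False)]   # compressed measurements
```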
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
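The continuous-lag transition probability model underlying such approaches can be sketched as a matrix exponential of a rate matrix built from mean lengths and proportions (following the standard transition-probability geostatistics formulation); the three-facies values below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

mean_len = np.array([8.0, 3.0, 5.0])   # mean facies lengths (m), hypothetical
prop = np.array([0.5, 0.2, 0.3])       # volumetric proportions, hypothetical

R = np.zeros((3, 3))
for i in range(3):
    R[i, i] = -1.0 / mean_len[i]                 # diagonal: -1 / mean length
    others = [j for j in range(3) if j != i]
    w = prop[others] / prop[others].sum()        # off-diagonals by proportion
    R[i, others] = w / mean_len[i]

T = expm(R * 2.0)     # transition probability matrix at lag h = 2 m
print(T.round(3))     # each row sums to 1
```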
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search: The Monte Carlo sampling process in SMCMDP can take a long time to
NASA Astrophysics Data System (ADS)
Xiao Yong, Zhao; Xin, Ji Yong; Shuang Ying, Zuo
2018-03-01
In order to effectively classify the surrounding rock of tunnels, a multi-factor classification method based on GPR and probability theory is proposed. Geological radar was used to identify the geology of the surrounding rock ahead of the tunnel face and to evaluate its quality. Based on previous survey data, the rock uniaxial compressive strength, integrity index, fissure development, and groundwater were selected as classification factors, and probability theory was used to combine them into a multi-factor classification method that assigns the surrounding rock to the class with the greatest probability. Applying this method to the surrounding rock of the Ma'anshan tunnel yields rock classes that are essentially the same as the actual ones, demonstrating that this is a simple, efficient, and practical classification method that can be used in tunnel construction.
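A minimal sketch of the "greatest probability" combination step: per-factor probability vectors over rock classes are combined assuming independence, and the class with the largest combined probability is selected. All probability values are hypothetical.

```python
import numpy as np

classes = ["I", "II", "III", "IV", "V"]

# per-factor class probabilities (rows: UCS, integrity index,
# fissure development, groundwater) -- hypothetical values
factors = np.array([
    [0.05, 0.20, 0.50, 0.20, 0.05],
    [0.02, 0.28, 0.45, 0.20, 0.05],
    [0.10, 0.30, 0.40, 0.15, 0.05],
    [0.05, 0.25, 0.40, 0.25, 0.05],
])

combined = factors.prod(axis=0)      # combine assuming factor independence
combined /= combined.sum()
print(classes[int(np.argmax(combined))], combined.round(3))
```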
Public perceptions toward mental illness in Japan.
Kasahara-Kiritani, Mami; Matoba, Tomoko; Kikuzawa, Saeko; Sakano, Junko; Sugiyama, Katsumi; Yamaki, Chikako; Mochizuki, Mieko; Yamazaki, Yoshihiko
2018-05-16
The purpose was to characterize public perceptions of mental illness in Japan and how they relate to stigma-related attitudes. Data were obtained from a vignette survey conducted as part of the Stigma in Global Context - Mental Health Study, with a nationally representative sample (n = 994). The survey was conducted using a multi-mode approach (face-to-face interviews, drop-off-and-pick-up, and postal collection) from September to December 2006, with a multi-stage probability sample of Japanese residents aged 18-64 years. Respondents were randomly assigned one of four vignette conditions describing psychiatric disorders that met the diagnostic criteria for schizophrenia or major depressive disorder (one vignette for each gender exhibiting each diagnosis). We compared respondents' stigma-related attitudes and perceptions of mental illness between vignettes. Over 80% of Japanese participants believed that depressive disorder or schizophrenia could be cured through treatment. However, respondents still showed relatively strong vigilance toward, and denial of the competency of, people with schizophrenia. Participants expressed the belief that mental illnesses are curable, but stigma toward people with schizophrenia was still relatively strong. Copyright © 2018 Elsevier B.V. All rights reserved.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions and defined in the context of data quality assessment: a global probabilistic deviation and a source probabilistic outlyingness metric. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the source PDFs. The metrics were evaluated and demonstrated correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
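The Jensen-Shannon machinery behind these metrics is available directly in scipy. The sketch below computes pairwise JS distances among source PDFs and a simple centroid-based outlyingness; the paper's actual metrics additionally project a simplex structure, which is omitted here, and the PDFs are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# per-source PDFs over a common discretized support (hypothetical)
sources = np.array([
    [0.10, 0.40, 0.30, 0.20],
    [0.12, 0.38, 0.32, 0.18],
    [0.30, 0.20, 0.25, 0.25],
])

n = len(sources)
D = np.zeros((n, n))                 # pairwise JS distances, bounded in [0, 1]
for i in range(n):
    for j in range(n):
        D[i, j] = jensenshannon(sources[i], sources[j], base=2)

central = sources.mean(axis=0)       # crude stand-in for the latent central PDF
outlyingness = [jensenshannon(p, central, base=2) for p in sources]
print(D.round(3), np.round(outlyingness, 3))
```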
Sampling design for long-term regional trends in marine rocky intertidal communities
Irvine, Gail V.; Shelley, Alice
2013-01-01
Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.
ERIC Educational Resources Information Center
Maher, Nicole; Muir, Tracey
2014-01-01
This paper reports on one aspect of a wider study that investigated a selection of final year pre-service primary teachers' responses to four probability tasks. The tasks focused on foundational ideas of probability including sample space, independence, variation and expectation. Responses suggested that strongly held intuitions appeared to…
Modeling interpopulation dispersal by banner-tailed kangaroo rats
Skvarla, J.L.; Nichols, J.D.; Hines, J.E.; Waser, P.M.
2004-01-01
Many metapopulation models assume rules of population connectivity that are implicitly based on what we know about within-population dispersal, but especially for vertebrates, few data exist to assess whether interpopulation dispersal is just within-population dispersal "scaled up." We extended existing multi-stratum mark-release-recapture models to incorporate the robust design, allowing us to compare patterns of within- and between-population movement in the banner-tailed kangaroo rat (Dipodomys spectabilis). Movement was rare among eight populations separated by only a few hundred meters: seven years of twice-annual sampling captured >1200 individuals but only 26 interpopulation dispersers. We developed a program that implemented models with parameters for capture, survival, and interpopulation movement probability and that evaluated competing hypotheses in a model selection framework. We evaluated variants of the island, stepping-stone, and isolation-by-distance models of interpopulation movement, incorporating effects of age, season, and habitat (short or tall grass). For both sexes, QAICc values clearly favored isolation-by-distance models, or models combining the effects of isolation by distance and habitat. Models with probability of dispersal expressed as linear-logistic functions of distance and as negative exponentials of distance fit the data equally well. Interpopulation movement probabilities were similar among sexes (perhaps slightly biased toward females), greater for juveniles than adults (especially for females), and greater before than during the breeding season (especially for females). These patterns resemble those previously described for within-population dispersal in this species, which we interpret as indicating that the same processes initiate both within- and between-population dispersal.
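The two best-fitting dispersal kernels have simple closed forms; a minimal sketch with hypothetical coefficients (the fitted values are not reproduced in the abstract):

```python
import numpy as np

def logistic_kernel(d, b0, b1):
    """Dispersal probability as a linear-logistic function of distance."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))

def neg_exp_kernel(d, a, c):
    """Dispersal probability as a negative exponential of distance."""
    return a * np.exp(-c * d)

d = np.linspace(0.0, 1000.0, 5)                 # distance between sites (m)
print(logistic_kernel(d, b0=-1.0, b1=-0.004))   # hypothetical coefficients
print(neg_exp_kernel(d, a=0.25, c=0.004))
```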
Multi-state Markov model for disability: A case of Malaysia Social Security (SOCSO)
NASA Astrophysics Data System (ADS)
Samsuddin, Shamshimah; Ismail, Noriszura
2016-06-01
Studies of outcomes for SOCSO contributors, such as disability, are usually restricted to a single outcome. This study instead applies a multi-state Markov model to estimate yearly transition probabilities for SOCSO contributors in Malaysia among the states of work, temporary disability, permanent disability, and death, stratified by age, gender, year, and disability category. The model ignores duration and past disability experience; that is, it does not consider how or when someone arrived in a given state. These outcomes represent different states that depend on the health status of the workers.
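A minimal sketch of estimating the yearly transition probability matrix from contributor records, with hypothetical transition counts; multi-year probabilities then follow from the Chapman-Kolmogorov equation.

```python
import numpy as np

states = ["work", "temporary disability", "permanent disability", "death"]

# hypothetical yearly transition counts among contributors
counts = np.array([
    [9500, 300, 150,  50],   # from work
    [ 200, 500, 250,  50],   # from temporary disability
    [   0,   0, 900, 100],   # from permanent disability
    [   0,   0,   0, 100],   # from death (absorbing)
])

P = counts / counts.sum(axis=1, keepdims=True)   # row-normalized probabilities
print(P.round(3))
print((P @ P).round(3))    # two-year transition probabilities
```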
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold-crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given a fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
Long-term multi-hazard assessment for El Misti volcano (Peru)
NASA Astrophysics Data System (ADS)
Sandri, Laura; Thouret, Jean-Claude; Constantinescu, Robert; Biass, Sébastien; Tonini, Roberto
2014-02-01
We propose a long-term probabilistic multi-hazard assessment for El Misti Volcano, a composite cone located <20 km from Arequipa, Peru's second largest city, a rapidly expanding economic centre classified by UNESCO as a World Heritage Site. We apply the Bayesian Event Tree code for Volcanic Hazard (BET_VH) to produce probabilistic hazard maps for the predominant volcanic phenomena that may affect the c. 900,000 people living around the volcano. The methodology accounts for the natural variability displayed by volcanoes in their eruptive behaviour, such as different types/sizes of eruptions and possible vent locations. For this purpose, we treat probabilistically several model runs for some of the main hazardous phenomena (lahars, pyroclastic density currents (PDCs), tephra fall and ballistic ejecta) and data from past eruptions at El Misti (tephra fall, PDCs and lahars) and at other volcanoes (PDCs). The hazard maps, although neglecting possible interactions among phenomena or cascade effects, have been produced with a homogeneous method and refer to a common time window of 1 year. The probability maps reveal that only the north and east suburbs of Arequipa are exposed to all volcanic threats except for ballistic ejecta, which are limited to the uninhabited but touristic summit cone. The probability of pyroclastic density currents reaching recently expanding urban areas and the city along ravines is around 0.05%/year, similar to the probability obtained for roof-critical tephra loading during the rainy season. Lahars represent by far the most probable threat (around 10%/year) because at least four radial drainage channels can convey them approximately 20 km away from the volcano across the entire city area in heavy rain episodes, even without an eruption. The Río Chili Valley represents the major concern for city safety owing to the probable cascading effect of combined threats: PDCs and rockslides, dammed-lake break-outs and subsequent lahars or floods. Although this study does not intend to replace the current El Misti hazard map, the quantitative results of this probabilistic multi-hazard assessment can be incorporated into a multi-risk analysis to support decision makers in any future improvement of the current hazard evaluation, such as further land-use planning and emergency management.
Intensity of Territorial Marking Predicts Wolf Reproduction: Implications for Wolf Monitoring
García, Emilio J.
2014-01-01
Background The implementation of intensive and complex approaches to monitor large carnivores is resource demanding and restricted to endangered species, small populations, or small distribution ranges. Wolf monitoring over large spatial scales is difficult, but the management of such a contentious species requires regular estimations of abundance to guide decision-makers. The integration of wolf marking behaviour with simple sign counts may offer a cost-effective alternative to monitor the status of wolf populations over large spatial scales. Methodology/Principal Findings We used a multi-sampling approach, based on the collection of visual and scent wolf marks (faeces and ground scratching) and the assessment of wolf reproduction using howling and observation points, to test whether the intensity of marking behaviour around the pup-rearing period (summer-autumn) could reflect wolf reproduction. Between 1994 and 2007 we collected 1,964 wolf marks over a total of 1,877 km surveyed and searched for the pups' presence (1,497 howling and 307 observation points) in 42 sampling sites with a regular presence of wolves (120 sampling sites/year). The number of wolf marks was ca. 3 times higher in sites with a confirmed presence of pups (20.3 vs. 7.2 marks). We found a significant relationship between the number of wolf marks (mean and maximum relative abundance index) and the probability of wolf reproduction. Conclusions/Significance This research establishes a real-time relationship between the intensity of wolf marking behaviour and wolf reproduction. We suggest a conservative cut-off point of 0.60 for the probability of wolf reproduction to monitor wolves at a regional scale, combined with the use of the mean relative abundance index of wolf marks in a given area. We show how the integration of wolf behaviour with simple sampling procedures permits rapid, real-time, and cost-effective assessments of the breeding status of wolf packs, with substantial implications for monitoring wolves at large spatial scales. PMID:24663068
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes are discussed, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel under the various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multistage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
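As a concrete instance of the kind of upper bound mentioned, the sketch below evaluates the standard union (weight-enumerator) bound on block error probability for BPSK over AWGN; the (8,4) extended Hamming weight enumerator is only a stand-in component code, not one of the codes analyzed in the paper:

```python
# Minimal sketch of a union (weight-enumerator) upper bound on block error
# probability over AWGN with BPSK signalling.
import numpy as np
from scipy.stats import norm

def union_bound(weights, rate, ebno_db):
    """P(block error) <= sum_w A_w * Q(sqrt(2 * R * w * Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return sum(a * norm.sf(np.sqrt(2 * rate * w * ebno))
               for w, a in weights.items() if w > 0)

weights = {0: 1, 4: 14, 8: 1}   # A_w for the (8,4) extended Hamming code
for snr in (4, 6, 8, 10):
    print(f"Eb/N0 = {snr:2d} dB  P_e <= {union_bound(weights, 0.5, snr):.3e}")
```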
The fuzzy cube and causal efficacy: representation of concomitant mechanisms in stroke.
Jobe, Thomas H.; Helgason, Cathy M.
1998-04-01
Twentieth century medical science has embraced nineteenth century Boolean probability theory based upon two-valued Aristotelian logic. With the later addition of bit-based, von Neumann structured computational architectures, an epistemology based on randomness has led to a bivalent epidemiological methodology that dominates medical decision making. In contrast, fuzzy logic, based on twentieth century multi-valued logic, and computational structures that are content-addressed and adaptively modified, has advanced a new scientific paradigm for the twenty-first century. Diseases such as stroke involve multiple concomitant causal factors that are difficult to represent using conventional statistical methods. We tested which paradigm best represented this complex, multi-causal clinical phenomenon: stroke. We show that the fuzzy logic paradigm better represented clinical complexity in cerebrovascular disease than current methodology based on probability theory. We believe this finding is generalizable to all of clinical science, since multiple concomitant causal factors are involved in nearly all known pathological processes.
O'Shea, T.J.; Ellison, L.E.; Neubaum, D.J.; Neubaum, M.A.; Reynolds, C.A.; Bowen, R.A.
2010-01-01
We used mark-recapture estimation techniques and radiography to test hypotheses about 3 important aspects of recruitment in big brown bats (Eptesicus fuscus) in Fort Collins, Colorado: adult breeding probabilities, litter size, and 1st-year survival of young. We marked 2,968 females with passive integrated transponder (PIT) tags at multiple sites during 2001-2005 and based our assessments on direct recaptures (breeding probabilities) and passive detection with automated PIT tag readers (1st-year survival). We interpreted our data in relation to hypotheses regarding demographic influences of bat age, roost, and effects of years with unusual environmental conditions: extreme drought (2002) and arrival of a West Nile virus epizootic (2003). Conditional breeding probabilities at 6 roosts sampled in 2002-2005 were estimated as 0.64 (95% confidence interval [95% CI] = 0.53-0.73) in 1-year-old females, but were consistently high (95% CI = 0.94-0.96) and did not vary by roost, year, or prior-year breeding status in older adults. Mean litter size was 1.11 (95% CI = 1.05-1.17), based on examination of 112 pregnant females by radiography. Litter size was not higher in older or larger females and was similar to results of other studies in western North America despite wide variation in latitude. First-year survival was estimated as 0.67 (95% CI = 0.61-0.73) for weaned females at 5 maternity roosts over 5 consecutive years, was lower than adult survival (0.79; 95% CI = 0.77-0.81), and varied by roost. Based on model selection criteria, strong evidence exists for complex roost and year effects on 1st-year survival. First-year survival was lowest in bats born during the drought year. Juvenile females that did not return to roosts as 1-year-olds had lower body condition indices in late summer of their natal year than those known to survive. © 2009 American Society of Mammalogists.
Sedinger, J.S.; Chelgren, N.D.
2007-01-01
We examined the relationship between mass late in the first summer and survival and return to the natal breeding colony for 12 cohorts (1986-1997) of female Black Brant (Branta bernicla nigricans). We used Cormack-Jolly-Seber methods and the program MARK to analyze capture-recapture data. Models included two kinds of residuals from regressions of mass on days after peak of hatch when goslings were measured; one based on the entire sample (12 cohorts) and the other based only on individuals in the same cohort. Some models contained date of peak of hatch (a group covariate related to lateness of nesting in that year) and mean cohort residual mass. Finally, models allowed survival to vary among cohorts. The best model of encounter probability included an effect of residual mass on encounter probability and allowed encounter probability to vary among age classes and across years. All competitive models contained an effect of one of the estimates of residual mass; relatively larger goslings survived their first year at higher rates. Goslings in cohorts from later years in the analysis tended to have lower first-year survival, after controlling for residual mass, which reflected the generally smaller mean masses for these cohorts but was potentially also a result of population-density effects additional to those on growth. Variation among cohorts in mean mass accounted for 56% of variation among cohorts in first-year survival. Encounter probabilities, which were correlated with breeding probability, increased with relative mass, which suggests that larger goslings not only survived at higher rates but also bred at higher rates. Although our findings support the well-established linkage between gosling mass and fitness, they suggest that additional environmental factors also influence first-year survival.
Developing Mathematical Thinking: Changing Teachers' Knowledge and Instruction
ERIC Educational Resources Information Center
Brendefur, Jonathan L.; Thiede, Keith; Strother, Sam; Bunning, Kim; Peck, Duane
2013-01-01
In the present research, we evaluated the effectiveness of a multi-year professional development program in mathematics for elementary teachers. Each year the program focused on a different domain of mathematics. We found the program increased teachers' knowledge of (a) number and operations, (b) measurement and geometry, and (c) probability and…
A high-resolution Holocene Asian Monsoon record from a Tibetan lake-Peiku Co
NASA Astrophysics Data System (ADS)
Du, M.; Ricketts, R. D.; Colman, S.; Werne, J. P.
2010-12-01
Recent studies on Tibetan lakes have demonstrated the great potential of lake sediments as archives of climate variations in this region. We present a high-resolution multi-proxy record from a closed-basin Tibetan lake, Peiku Co (4595 m a.s.l., 28°55'N, 85°35'E). A 5.5-meter-long UwiTec core (PC07-1B) provides a record extending back to ~22,000 cal years B.P., based on 14C AMS dating. Multi-proxy analyses, including high-resolution magnetic susceptibility, bulk density, elemental composition (ITRAX X-ray Fluorescence Core Scanner), and carbonate content, have been carried out for comparison with other paleoenvironmental records from the Tibetan Plateau. Furthermore, microbial lipids have been measured to test the applicability of GDGT-based temperature reconstructions (TEX86 and MBT/CBT). The record from Peiku Co captures the climate transition out of the last glacial period. A significant transition to warmer and wetter conditions is indicated around 14,500 cal years B.P., possibly attributable to the strengthening of the summer monsoon, which is consistent with the monsoon records from Lake Qinghai. The switch to colder conditions between 12,500 and 11,500 cal years B.P. could be correlated with the Younger Dryas. The early and mid-Holocene is marked by an increase in monsoon precipitation, yet the overall trend is interrupted by two short periods of decreasing precipitation around 7000 and 5000 cal years B.P., as seen in other published records across the Asian monsoon areas. The GDGT indices are employed for temperature reconstruction. The samples from Peiku Co varied widely in BIT index, with values ranging from 0.23 to 0.88 and an average of 0.65. The high BIT values suggest this lake received significant terrestrial organic matter input, which probably responds to rainfall variations. The MBT/CBT-based temperature from the core top is -3.2 °C, slightly higher than the measured MAAT (-4 °C) on the Tibetan Plateau, but statistically the same within error of the current calibration. The core-top sample yields a CBT-derived pH of 8.7, which broadly agrees with soil pH values measured on the Tibetan Plateau. Additional 210Pb and 14C dates and compound-specific isotope analyses will also be used to provide further information on the vegetation history and hydrological conditions in this area.
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
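The variance-reduction idea is easiest to see in a static analogue. The sketch below is not the Girsanov construction itself: it estimates a rare-event probability for a standard normal variable by shifting the sampling density to the design point, mirroring how the FORM-guided control recentres sampling on the failure-prone region:

```python
# Minimal static analogue of FORM-guided importance sampling: estimate
# P(X > beta) for X ~ N(0,1) by sampling from a density shifted to the
# design point; beta and n are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
beta, n = 4.0, 10_000                      # reliability index, sample size
x = rng.normal(loc=beta, size=n)           # instrumental density centred at beta
w = norm.pdf(x) / norm.pdf(x, loc=beta)    # likelihood-ratio (IS) weights
est = np.mean((x > beta) * w)

print("IS estimate :", est)
print("exact value :", norm.sf(beta))
print("c.o.v.      :", np.std((x > beta) * w) / (est * np.sqrt(n)))
```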
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
NASA Astrophysics Data System (ADS)
Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio
2017-08-01
This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating its improvement over the single-temporal approach. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, a probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge on future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead time, and when it will most likely occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecast by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a six-year time series. MCP-MT improves on the original models' forecasts: the peak overestimation and the delayed rising-limb forecast, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making, because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.
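A drastically simplified processor illustrates the predictive-density idea. The sketch below collapses MCP-MT to a single bivariate Gaussian on synthetic forecast-observation pairs (no Normal Quantile Transform, no truncated normals, single model and lead time) and then computes the exceedance probability of a hypothetical hydrometric threshold:

```python
# Minimal sketch: Gaussian conditional (predictive) distribution of the
# true stage given a deterministic forecast; all data are synthetic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
obs = rng.normal(3.0, 1.0, 500)          # observed stages (m), synthetic
fcst = obs + rng.normal(0.3, 0.5, 500)   # biased, noisy model forecasts

mu_o, mu_f = obs.mean(), fcst.mean()
s_o, s_f = obs.std(), fcst.std()
rho = np.corrcoef(obs, fcst)[0, 1]

def predictive(f):
    """Conditional distribution of the true stage given a forecast f."""
    mean = mu_o + rho * s_o / s_f * (f - mu_f)
    sd = s_o * np.sqrt(1 - rho ** 2)
    return mean, sd

mean, sd = predictive(4.5)       # today's raw forecast (m), hypothetical
threshold = 4.0                  # hydrometric alert threshold (m)
print("P(stage > threshold within lead time):", norm.sf(threshold, mean, sd))
```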
Hung, Kristin J; Awtrey, Christopher S; Tsai, Alexander C
2014-04-01
To estimate the association between urinary incontinence (UI) and probable depression, work disability, and workforce exit. The analytic sample consisted of 4,511 women enrolled in the population-based Health and Retirement Study cohort. The analysis baseline was 1996, the year that questions about UI were added to the survey instrument, at which time study participants were 54-65 years of age. Women were followed up with biennial interviews until 2010-2011. Outcomes of interest were onset of probable depression, work disability, and workforce exit. Urinary incontinence was specified in different ways based on questions about the experience and frequency of urine loss. We fit Cox proportional hazards regression models to the data, adjusting the estimates for baseline sociodemographic and health status variables previously found to confound the association between UI and the outcomes of interest. At baseline, 727 participants (survey-weighted prevalence, 16.6%; 95% confidence interval [CI] 15.4-18.0) reported any UI, of whom 212 (survey-weighted prevalence, 29.2%; 95% CI 25.4-33.3) reported urine loss on more than 15 days in the past month; and 1,052 participants were categorized as having probable depression (survey-weighted prevalence, 21.6%; 95% CI 19.8-23.6). Urinary incontinence was associated with increased risks for probable depression (adjusted hazard ratio, 1.43; 95% CI 1.27-1.62) and work disability (adjusted hazard ratio, 1.21; 95% CI 1.01-1.45), but not workforce exit (adjusted hazard ratio, 1.06; 95% CI 0.93-1.21). In a population-based cohort of women between ages 54 and 65 years, UI was associated with increased risks for probable depression and work disability. Improved diagnosis and management of UI may yield significant economic and psychosocial benefits.
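For readers who want to reproduce this type of analysis, the sketch below fits a Cox proportional hazards model with the lifelines library on synthetic data; the variables, effect sizes, and censoring scheme are hypothetical stand-ins for the study's UI flag and confounders:

```python
# Minimal sketch of a Cox proportional hazards fit on synthetic data;
# 'ui' mimics a baseline incontinence flag, 'age' a confounder.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1000
ui = rng.binomial(1, 0.17, n)
age = rng.uniform(54, 65, n)
# Event times with a higher hazard when ui == 1 (hazard ratio ~ 1.4).
t = rng.exponential(1.0 / (0.05 * np.exp(0.35 * ui)), n)
df = pd.DataFrame({
    "time": np.minimum(t, 15.0),            # administrative censoring at 15 y
    "event": (t <= 15.0).astype(int),
    "ui": ui,
    "age": age,
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```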
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with a fuzzy-interval admissible probability of violating constraints within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only would the landfill be expanded, but the CF and MRF would be expanded as well. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; secondly, it can address the temporal variations of the functional intervals; thirdly, it can facilitate dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Simonson, W.; Ruiz-Benito, P.; Valladares, F.; Coomes, D.
2015-09-01
Woodlands represent highly significant carbon sinks globally, though they could lose this function under future climatic change. Effective large-scale monitoring of these woodlands has a critical role to play in mitigating for, and adapting to, climate change. Mediterranean woodlands have low carbon densities but represent important global carbon stocks due to their extensiveness, and they are particularly vulnerable because the region is predicted to become much hotter and drier over the coming century. Airborne lidar is already recognized as an excellent approach for high-fidelity carbon mapping, but few studies have used multi-temporal lidar surveys to measure carbon fluxes in forests and none have worked with Mediterranean woodlands. We use a multi-temporal (five-year interval) airborne lidar dataset for a region of central Spain to estimate above-ground biomass (AGB) and carbon dynamics in typical mixed broadleaved/coniferous Mediterranean woodlands. Field calibration of the lidar data enabled the generation of grid-based maps of AGB for 2006 and 2011, and the resulting AGB change was estimated. There was close agreement between the lidar-based AGB growth estimate (1.22 Mg ha-1 year-1) and those derived from two independent sources: the Spanish National Forest Inventory, and a tree-ring-based analysis (1.19 and 1.13 Mg ha-1 year-1, respectively). We parameterised a simple simulator of forest dynamics using the lidar carbon flux measurements, and used it to explore four scenarios of fire occurrence. Under undisturbed conditions (no fire occurrence), an accelerating accumulation of biomass and carbon is evident over the next 100 years, with an average carbon sequestration rate of 1.95 Mg C ha-1 year-1. This rate is reduced by almost a third when the fire probability is increased to 0.01, as has been predicted under climate change. Our work shows the power of multi-temporal lidar surveying to map woodland carbon fluxes and provide parameters for carbon dynamics models. Space deployment of lidar instruments in the near future could open the way for rolling out wide-scale forest carbon stock monitoring to inform management and governance responses to future environmental change.
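The fire-scenario exploration can be caricatured in a few lines. The sketch below grows a biomass stock at the lidar-calibrated rate from the abstract and resets it with a given annual fire probability; the initial stock, the stand-replacing fire assumption, and the run counts are hypothetical, so it reproduces the flavour of the simulator rather than the paper's parameterisation:

```python
# Minimal Monte Carlo sketch: biomass grows at a fixed rate and is reset
# to zero by fire with annual probability p_fire (stand-replacing fire
# assumed for simplicity).
import numpy as np

rng = np.random.default_rng(11)
growth = 1.22          # Mg ha-1 yr-1 AGB growth (lidar estimate)
years, n_runs = 100, 10_000

def mean_final_agb(p_fire):
    final = np.zeros(n_runs)
    for i in range(n_runs):
        agb = 50.0                     # hypothetical initial stock, Mg ha-1
        for _ in range(years):
            agb = 0.0 if rng.random() < p_fire else agb + growth
        final[i] = agb
    return final.mean()

for p in (0.0, 0.01):
    print(f"p_fire = {p:.2f}: mean AGB after {years} y = "
          f"{mean_final_agb(p):.1f} Mg/ha")
```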
Improved Quantitative Analysis of Ion Mobility Spectrometry by Chemometric Multivariate Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fraga, Carlos G.; Kerr, Dayle; Atkinson, David A.
2009-09-01
Traditional peak-area calibration and the multivariate calibration methods of principal component regression (PCR) and partial least squares (PLS), including unfolded PLS (U-PLS) and multi-way PLS (N-PLS), were evaluated for the quantification of 2,4,6-trinitrotoluene (TNT) and cyclo-1,3,5-trimethylene-2,4,6-trinitramine (RDX) in Composition B samples analyzed by temperature step desorption ion mobility spectrometry (TSD-IMS). The true TNT and RDX concentrations of eight Composition B samples were determined by high performance liquid chromatography with UV absorbance detection. Most of the Composition B samples were found to have distinct TNT and RDX concentrations. Applying PCR and PLS to the exact same IMS spectra used for the peak-area study improved quantitative accuracy and precision approximately 3- to 5-fold and 2- to 4-fold, respectively. This in turn improved the probability of correctly identifying Composition B samples based upon the estimated RDX and TNT concentrations from 11% with peak area to 44% and 89% with PLS. This improvement increases the potential of obtaining forensic information from IMS analyzers by providing some ability to differentiate or match Composition B samples based on their TNT and RDX concentrations.
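A minimal sketch of the comparison, using synthetic overlapping-peak "spectra" in place of real TSD-IMS data, contrasts a univariate peak-area fit with cross-validated PLS via scikit-learn; window positions, peak shapes, and noise levels are arbitrary:

```python
# Minimal sketch: multivariate PLS calibration vs a univariate peak-area
# fit on synthetic spectra with an overlapping interferent.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n, p = 40, 200
conc = rng.uniform(20, 60, n)                         # % analyte, hypothetical
peak = np.exp(-0.5 * ((np.arange(p) - 80) / 6) ** 2)  # analyte peak shape
interf = np.exp(-0.5 * ((np.arange(p) - 95) / 8) ** 2)
X = (np.outer(conc, peak) + np.outer(rng.uniform(5, 25, n), interf)
     + rng.normal(0, 0.5, (n, p)))                    # overlapping peaks + noise

pls_pred = cross_val_predict(PLSRegression(n_components=3), X, conc, cv=5).ravel()
area = X[:, 70:90].sum(1)                             # naive peak-area feature
area_pred = np.poly1d(np.polyfit(area, conc, 1))(area)  # in-sample linear fit

for name, pred in (("peak area", area_pred), ("PLS", pls_pred)):
    print(f"{name:9s} RMSE = {np.sqrt(np.mean((pred - conc) ** 2)):.2f}")
```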
Beyond the swab: ecosystem sampling to understand the persistence of an amphibian pathogen.
Mosher, Brittany A; Huyvaert, Kathryn P; Bailey, Larissa L
2018-06-02
Understanding the ecosystem-level persistence of pathogens is essential for predicting and measuring host-pathogen dynamics. However, this process is often masked, in part due to a reliance on host-based pathogen detection methods. The amphibian pathogens Batrachochytrium dendrobatidis (Bd) and B. salamandrivorans (Bsal) are pathogens of global conservation concern. Despite having free-living life stages, little is known about the distribution and persistence of these pathogens outside of their amphibian hosts. We combine historic amphibian monitoring data with contemporary host- and environment-based pathogen detection data to obtain estimates of Bd occurrence independent of amphibian host distributions. We also evaluate differences in filter- and swab-based detection probability and assess inferential differences arising from using different decision criteria used to classify samples as positive or negative. Water filtration-based detection probabilities were lower than those from swabs but were > 10%, and swab-based detection probabilities varied seasonally, declining in the early fall. The decision criterion used to classify samples as positive or negative was important; using a more liberal criterion yielded higher estimates of Bd occurrence than when a conservative criterion was used. Different covariates were important when using the liberal or conservative criterion in modeling Bd detection. We found evidence of long-term Bd persistence for several years after an amphibian host species of conservation concern, the boreal toad (Anaxyrus boreas boreas), was last detected. Our work provides evidence of long-term Bd persistence in the ecosystem, and underscores the importance of environmental samples for understanding and mitigating disease-related threats to amphibian biodiversity.
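Occupancy estimation of this kind rests on a simple zero-inflated binomial likelihood. The sketch below, on simulated detection histories with hypothetical psi and p, writes out the single-season (MacKenzie-type) likelihood and maximizes it numerically; the study's actual analysis was a richer multi-method hierarchical model:

```python
# Minimal sketch of a single-season occupancy model: estimate occurrence
# (psi) and detection probability (p) from repeat-survey detection data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n_sites, n_visits = 100, 6
psi_true, p_true = 0.6, 0.3
z = rng.binomial(1, psi_true, n_sites)                 # true occupancy states
y = rng.binomial(1, p_true * z[:, None], (n_sites, n_visits))
d = y.sum(1)                                           # detections per site

def nll(theta):
    psi, p = expit(theta)                              # map to (0, 1)
    lik = psi * p ** d * (1 - p) ** (n_visits - d)
    lik = lik + (1 - psi) * (d == 0)                   # never-detected sites
    return -np.log(lik).sum()

fit = minimize(nll, x0=[0.0, 0.0])
print("psi-hat, p-hat:", expit(fit.x))
```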
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
Extended Importance Sampling for Reliability Analysis under Evidence Theory
NASA Astrophysics Data System (ADS)
Yuan, X. K.; Chen, B.; Zhang, B. Q.
2018-05-01
In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information as an alternative to traditional probability theory. In this contribution, a simulation-based approach, called 'extended importance sampling', is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an 'equivalent' reliability problem under probability theory is obtained. Then the samples of these variables are generated by importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of probability) can be estimated. It is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
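The belief/plausibility bookkeeping can be shown on a one-dimensional toy problem. In the sketch below, the focal intervals, masses, and limit-state function are all hypothetical; each interval is scanned numerically rather than optimized, which suffices for this monotone g:

```python
# Minimal sketch of evidence-theory bounds: given focal intervals with
# basic probability assignments (BPAs), bound the failure probability by
# belief (lower) and plausibility (upper).
import numpy as np

# Failure when g(x) = 3.0 - x < 0, i.e. x > 3 (hypothetical limit state).
g = lambda x: 3.0 - x
focal = [((1.0, 2.0), 0.3), ((2.5, 3.5), 0.5), ((3.2, 4.0), 0.2)]

bel = pl = 0.0
for (lo, hi), mass in focal:
    gs = g(np.linspace(lo, hi, 201))      # scan g over the interval
    if gs.max() < 0:                      # interval wholly in failure set
        bel += mass
    if gs.min() < 0:                      # interval intersects failure set
        pl += mass

print(f"belief (lower bound)      : {bel:.2f}")
print(f"plausibility (upper bound): {pl:.2f}")
```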
[Effects of companion animals on owner's subjective well-being and social networks in Japan].
Kaneko, Megumi
2006-04-01
A multi-method approach was used to examine whether and how companion animals (CA) affect the subjective well-being and social networks of Japanese people. In Study 1, a mail survey with a probability sample of 1250 Japanese adults over 40 years old showed that (1) female owners' attachment to CA negatively correlated with subjective well-being, and (2) although younger (under 65) CA owners had more close friends than non-owners, this tendency was reversed for those over 65. In Study 2, in-depth interviews with 27 adults showed that (1) female CA owners reported lower subjective well-being than non-owners, (2) although CA owners were generally successful in interacting with strangers through CA-related behaviors such as dog-walking, those relationships were unlikely to become close, and (3) in contrast to the owners' tendency to portray themselves in positive ways, most non-owners described CA owners negatively, for example as lonely or bad-mannered. Based on the present findings, which sharply contradict those of previous studies in Western societies, future issues are discussed.
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from Vinton Dome, we obtain improved results, demonstrating that the improved inversion algorithm is effective and feasible. The performance of the parallel algorithm we designed exceeds that of other CUDA-based implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger scales of data, and the new analysis method is practical.
NASA Astrophysics Data System (ADS)
Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.
2016-12-01
Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in other regions, such as winter wheat in Pakistan, soybean in Argentina and soybean in the entire South America. Similar levels of accuracy and timeliness were achieved as in the US.
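The two statistical ingredients, a stratified sample-based area estimate and a ratio calibration of the pixel-counted map, can be sketched as follows; the strata, areas, and sampled fractions are hypothetical, and the sketch assumes simple random sampling within strata, whereas the study used a two-stage cluster design:

```python
# Minimal sketch: stratified area estimate with standard error, then a
# ratio calibration factor for the pixel-counted map area.
import numpy as np

# Per-stratum: total area (km2) and sampled crop fractions at visited sites.
strata = {
    "high": (300_000, np.array([0.62, 0.55, 0.70, 0.58, 0.66])),
    "low":  (700_000, np.array([0.08, 0.12, 0.05, 0.10, 0.07])),
}

est, var = 0.0, 0.0
for area, f in strata.values():
    est += area * f.mean()
    var += area ** 2 * f.var(ddof=1) / len(f)
print(f"sample-based crop area: {est:,.0f} km2 (SE {np.sqrt(var):,.0f})")

map_pixel_area = 356_000                      # km2 from pixel counting
calib = est / map_pixel_area                  # calibration factor for the map
print(f"map calibration factor: {calib:.3f}")
```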
New approaches for sampling and modeling native and exotic plant species richness
Chong, G.W.; Reich, R.M.; Kalkhan, M.A.; Stohlgren, T.J.
2001-01-01
We demonstrate new multi-phase, multi-scale approaches for sampling and modeling native and exotic plant species to predict the spread of invasive species and aid in control efforts. Our test site is a 54,000-ha portion of Rocky Mountain National Park, Colorado, USA. This work is based on previous research wherein we developed vegetation sampling techniques to identify hot spots of diversity, important rare habitats, and locations of invasive plant species. Here we demonstrate statistical modeling tools to rapidly assess current patterns of native and exotic plant species to determine which habitats are most vulnerable to invasion by exotic species. We use stepwise multiple regression and modified residual kriging to estimate numbers of native species and exotic species, as well as probability of observing an exotic species in 30 × 30-m cells. Final models accounted for 62% of the variability observed in number of native species, 51% of the variability observed in number of exotic species, and 47% of the variability associated with observing an exotic species. Important independent variables used in developing the models include geographical location, elevation, slope, aspect, and Landsat TM bands 1-7. These models can direct resource managers to areas in need of further inventory, monitoring, and exotic species control efforts.
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data for the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. By considering the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
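As one concrete instance of the copula machinery, the sketch below samples from a bivariate Clayton copula by the conditional method and evaluates an "OR"-type joint return period T = mu/(1 - C(u, v)); the dependence parameter theta and the mean inter-event time mu are hypothetical:

```python
# Minimal sketch: bivariate Clayton copula sampling and an OR-type joint
# return period; theta and mu are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
theta, mu = 2.0, 1.0            # Clayton parameter; one event per year

def clayton_cdf(u, v):
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

def clayton_sample(n):
    """Conditional-method sampler for the Clayton copula."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1 + theta)) - 1) * u ** -theta + 1) ** (-1 / theta)
    return u, v

u, v = clayton_sample(5)         # ranks of e.g. wave height and wind speed
for ui, vi in zip(u, v):
    T_or = mu / (1 - clayton_cdf(ui, vi))
    print(f"u={ui:.2f} v={vi:.2f}  OR-return period = {T_or:6.1f} years")
```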
Day-time identification of summer hailstorm cells from MSG data
NASA Astrophysics Data System (ADS)
Merino, A.; López, L.; Sánchez, J. L.; García-Ortega, E.; Cattani, E.; Levizzani, V.
2013-10-01
Identifying deep convection is of paramount importance, as it may be associated with extreme weather that has significant impact on the environment, property, and the population. A new method, the Hail Detection Tool (HDT), is described for identifying hail-bearing storms using multi-spectral Meteosat Second Generation (MSG) data. HDT was conceived as a two-phase method, in which the first step is the Convective Mask (CM) algorithm devised for the detection of deep convection, and the second a Hail Detection (HD) algorithm for the identification of hail-bearing clouds among the cumulonimbus systems detected by CM. Both CM and HD are based on logistic regression models trained with multi-spectral MSG datasets comprising summer convective events in the middle Ebro Valley between 2006 and 2010, detected by the RGB visualization technique (CM) or the C-band weather radar system of the University of León. By means of the logistic regression approach, the probabilities of identifying a cumulonimbus event with CM or a hail event with HD are computed by exploiting a proper selection of MSG wavelengths or their combinations. A number of cloud physical properties (liquid water path, optical thickness, and effective cloud drop radius) were used to physically interpret the results of the statistical models from a meteorological perspective, using a method based on these "ingredients." Finally, the HDT was applied to a new validation sample consisting of events during summer 2011. The overall Probability of Detection (POD) was 76.9% and the False Alarm Ratio 16.7%.
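The verification scores quoted are easy to compute once a probabilistic classifier is in hand. The sketch below trains a logistic regression on simulated stand-ins for MSG brightness-temperature differences and reports in-sample POD and FAR; the features, separations, and class balance are hypothetical:

```python
# Minimal sketch: logistic discrimination of hail events from simulated
# satellite features, with POD and FAR verification scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 2000
hail = rng.binomial(1, 0.2, n)
# Hypothetical predictors, e.g. two channel brightness-temperature differences.
X = np.column_stack([
    rng.normal(-2 + 3 * hail, 1.5),
    rng.normal(-1 + 2 * hail, 1.0),
])

model = LogisticRegression().fit(X, hail)
pred = (model.predict_proba(X)[:, 1] > 0.5).astype(int)

hits = np.sum((pred == 1) & (hail == 1))
misses = np.sum((pred == 0) & (hail == 1))
false_alarms = np.sum((pred == 1) & (hail == 0))
print(f"POD = {hits / (hits + misses):.1%}  "
      f"FAR = {false_alarms / (hits + false_alarms):.1%}")
```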
Lander and rover exploration on the lunar surface: A study for SELENE-B mission
NASA Astrophysics Data System (ADS)
Selene-B Rover Science Group; Sasaki, S.; Sugihara, T.; Saiki, K.; Akiyama, H.; Ohtake, M.; Takeda, H.; Hasebe, N.; Kobayashi, M.; Haruyama, J.; Shirai, K.; Kato, M.; Kubota, T.; Kunii, Y.; Kuroda, Y.
The SELENE-B, a lunar landing mission, has been studied in Japan, and a scientific investigation plan using a robotic rover and a static lander is proposed. The main theme to be investigated is the clarification of lunar origin and evolution, especially the early crustal formation process, probably from the ancient magma ocean. The highest priority is placed on direct in situ geology at a crater central peak, "a window to the interior", where subcrustal materials are exposed and can be accessed directly without drilling. As in the preliminary study introduced by Sasaki et al. [Sasaki, S., Kubota, T., Okada, T. et al. Scientific exploration of lunar surface using a rover in Japanese future lunar mission. Adv. Space Res. 30, 1921-1926, 2002.], the rover and lander are used jointly: detailed analyses of the samples collected by the rover are conducted at the lander. Primary scientific instruments are a multi-band stereo imager, a gamma-ray spectrometer, and a sampling tool on the rover, and a multi-spectral telescopic imager, a sampling system, and a sample analysis package with an X-ray spectrometer/diffractometer and a multi-band microscope, as well as a sample cleaning and grinding device, on the lander.
GROUND WATER MONITORING AND SAMPLING: MULTI-LEVEL VERSUS TRADITIONAL METHODS - WHAT'S WHAT?
After years of research and many publications, the question still remains: What is the best method to collect representative ground water samples from monitoring wells? Numerous systems and devices are currently available for obtaining both multi-level samples as well as traditi...
Multi-wavelength Radio Continuum Emission Studies of Dust-free Red Giants
NASA Technical Reports Server (NTRS)
O'Gorman, Eamon; Harper, Graham M.; Brown, Alexander; Drake, Stephen; Richards, Anita M. S.
2013-01-01
Multi-wavelength centimeter continuum observations of non-dusty, non-pulsating K spectral-type red giants directly sample their chromospheres and wind acceleration zones. Such stars are feeble emitters at these wavelengths, however, and previous observations have provided only a small number of modest signal-to-noise measurements slowly accumulated over three decades. We present multi-wavelength Karl G. Jansky Very Large Array thermal continuum observations of the wind acceleration zones of two dust-free red giants, Arcturus (alpha Boo: K2 III) and Aldebaran (alpha Tau: K5 III). Importantly, most of our observations of each star were carried out over just a few days, so that we obtained a snapshot of the different stellar atmospheric layers sampled at different wavelengths, independent of any long-term variability. We report the first detections at several wavelengths for each star including a detection at 10 cm (3.0 GHz: S band) for both stars and a 20 cm (1.5 GHz: L band) detection for alpha Boo. This is the first time single (non-binary) luminosity class III red giants have been detected at these continuum wavelengths. Our long-wavelength data sample the outer layers of alpha Boo's atmosphere where its wind velocity is approaching (or possibly has reached) its terminal value and the ionization balance is becoming frozen-in. For alpha Tau, however, our long-wavelength data are still sampling its inner atmosphere, where the wind is still accelerating probably due to its lower mass-loss rate. We compare our data with published semi-empirical models based on ultraviolet data, and the marked deviations highlight the need for new atmospheric models to be developed. Spectral indices are used to discuss the possible properties of the stellar atmospheres, and we find evidence for a rapidly cooling wind in the case of alpha Boo. Finally, we develop a simple analytical wind model for alpha Boo based on our new long-wavelength flux measurements.
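Spectral indices of the kind discussed come from a log-log fit of flux density against frequency. The sketch below shows the arithmetic on illustrative numbers, not the measured alpha Boo or alpha Tau fluxes:

```python
# Minimal sketch: derive a radio spectral index alpha (S_nu ~ nu**alpha)
# from multi-band flux densities; the numbers are illustrative only.
import numpy as np

freq_ghz = np.array([1.5, 3.0, 6.0, 10.0])      # L, S, C, X bands
flux_ujy = np.array([120.0, 190.0, 330.0, 560.0])

alpha, log_s0 = np.polyfit(np.log10(freq_ghz), np.log10(flux_ujy), 1)
print(f"spectral index alpha = {alpha:.2f}")
# alpha ~ 0.6 suggests a partially optically thick ionized wind;
# alpha = 2 an optically thick regime; alpha ~ -0.1 optically thin emission.
```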
Rew, Mary Beth; Robbins, Jooke; Mattila, David; Palsbøll, Per J; Bérube, Martine
2011-04-01
Genetic identification of individuals is now commonplace, enabling the application of tagging methods to elusive species or species that cannot be tagged by traditional methods. A key aspect is determining the number of loci required to ensure that different individuals have non-matching multi-locus genotypes. Closely related individuals are of particular concern because of elevated matching probabilities caused by their recent co-ancestry. This issue may be addressed by increasing the number of loci to a level where full siblings (the relatedness category with the highest matching probability) are expected to have non-matching multi-locus genotypes. However, increasing the number of loci to meet this "full-sib criterion" greatly increases the laboratory effort, which in turn may increase the genotyping error rate resulting in an upward-biased mark-recapture estimate of abundance as recaptures are missed due to genotyping errors. We assessed the contribution of false matches from close relatives among 425 maternally related humpback whales, each genotyped at 20 microsatellite loci. We observed a very low (0.5-4%) contribution to falsely matching samples from pairs of first-order relatives (i.e., parent and offspring or full siblings). The main contribution to falsely matching individuals from close relatives originated from second-order relatives (e.g., half siblings), which was estimated at 9%. In our study, the total number of observed matches agreed well with expectations based upon the matching probability estimated for unrelated individuals, suggesting that the full-sib criterion is overly conservative, and would have required a 280% relative increase in effort. We suggest that, under most circumstances, the overall contribution to falsely matching samples from close relatives is likely to be low, and hence applying the full-sib criterion is unnecessary. In those cases where close relatives may present a significant issue, such as unrepresentative sampling, we propose three different genotyping strategies requiring only a modest increase in effort, which will greatly reduce the number of false matches due to the presence of related individuals.
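The matching argument rests on standard probability-of-identity calculations. The sketch below implements the per-locus PID formulas for unrelated individuals and for full siblings (as in Waits et al. 2001) and multiplies across 20 loci; the equifrequent allele frequencies are hypothetical stand-ins:

```python
# Minimal sketch: multi-locus probability of identity (PID) for unrelated
# individuals vs full siblings, with hypothetical allele frequencies.
import numpy as np

def pid_unrelated(p):
    s2 = np.sum(p ** 2)
    return 2 * s2 ** 2 - np.sum(p ** 4)

def pid_sibs(p):
    s2, s4 = np.sum(p ** 2), np.sum(p ** 4)
    return 0.25 + 0.5 * s2 + 0.5 * s2 ** 2 - 0.25 * s4

# 20 loci with 6-12 equifrequent alleles each (hypothetical).
loci = [np.full(k, 1.0 / k) for k in (8, 10, 6, 12) * 5]
pid_u = np.prod([pid_unrelated(p) for p in loci])
pid_s = np.prod([pid_sibs(p) for p in loci])
print(f"multi-locus PID (unrelated): {pid_u:.2e}")
print(f"multi-locus PID (full sibs): {pid_s:.2e}")
```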
Large Scale Crop Classification in Ukraine using Multi-temporal Landsat-8 Images with Missing Data
NASA Astrophysics Data System (ADS)
Kussul, N.; Skakun, S.; Shelestov, A.; Lavreniuk, M. S.
2014-12-01
At present, there are no globally available Earth observation (EO) derived products on crop maps. This issue is being addressed within the Sentinel-2 for Agriculture initiative, where a number of test sites (including from JECAM) participate to provide coherent protocols and best practices for various global agriculture systems, and subsequently crop maps from Sentinel-2. One of the problems in dealing with optical images for large territories (more than 10,000 sq. km) is the presence of clouds and shadows that result in missing values in the data sets. In this abstract, a new approach to the classification of multi-temporal optical satellite imagery with missing data due to clouds and shadows is proposed. First, self-organizing Kohonen maps (SOMs) are used to restore missing pixel values in a time series of satellite imagery. SOMs are trained for each spectral band separately using non-missing values. Missing values are restored through a special procedure that substitutes an input sample's missing components with the neuron's weight coefficients. After missing data restoration, a supervised classification is performed for the multi-temporal satellite images. For this, an ensemble of neural networks, in particular multilayer perceptrons (MLPs), is proposed. Ensembling of the neural networks is done by the technique of average committee, i.e., calculating the average class probability over the classifiers and selecting the class with the highest average posterior probability for the given input sample. The proposed approach is applied to large-scale crop classification using multi-temporal Landsat-8 images for the JECAM test site in Ukraine [1-2]. It is shown that the ensemble of MLPs provides better performance than a single neural network in terms of overall classification accuracy and kappa coefficient. The obtained classification map is also validated through estimated crop and forest areas and comparison to official statistics. 1. A.Yu. Shelestov et al., "Geospatial information system for agricultural monitoring," Cybernetics Syst. Anal., vol. 49, no. 1, pp. 124-132, 2013. 2. J. Gallego et al., "Efficiency Assessment of Different Approaches to Crop Classification Based on Satellite and Ground Observations," J. Autom. Inform. Scie., vol. 44, no. 5, pp. 67-80, 2012.
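The "average committee" rule is straightforward to implement. The sketch below trains several independently seeded scikit-learn MLPs on synthetic features standing in for multi-temporal spectral bands, averages their class probabilities, and takes the argmax; it omits the SOM-based gap-filling step:

```python
# Minimal sketch of the 'average committee' ensemble: average class
# probabilities over independently initialized MLPs, then argmax.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1500, n_features=12, n_classes=4,
                           n_informative=8, random_state=0)
X_train, y_train, X_test, y_test = X[:1000], y[:1000], X[1000:], y[1000:]

probas = []
for seed in range(5):
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=seed).fit(X_train, y_train)
    probas.append(mlp.predict_proba(X_test))

committee = np.mean(probas, axis=0)          # average posterior per class
y_pred = committee.argmax(axis=1)
print("ensemble accuracy:", (y_pred == y_test).mean())
```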
Adewuyi, Emmanuel O; Zhao, Yun
2017-02-01
Significant reduction in the global burden of neonatal mortality was achieved through the millennium development goals. In Nigeria, however, only a marginal reduction was realized. This study assesses the rural-urban differences in neonatal mortality rate (NMR) and the associated risk factors in Nigeria. The dataset from the 2013 Nigeria demographic and health survey (NDHS), disaggregated by rural-urban residence (n = 20,449 and 9,935, respectively), was explored using univariate, bivariate, and multivariable analysis. Complex samples analysis was applied to adjust for the unequal selection probabilities due to the multi-stage cluster sampling method used in the 2013 NDHS. The adjusted relationship between the outcome and predictor variables was assessed using multi-level logistic regression analysis. The NMR for the rural and urban populations was 36 and 28 deaths per 1000 live births, respectively. Risk factors in urban residence were lack of electricity access (adjusted OR [AOR], 1.555; 95%CI: 1.089-2.220), small birth size (as a proxy for low birthweight; AOR, 3.048; 95%CI: 2.047-4.537), and male gender (AOR, 1.666; 95%CI: 1.215-2.284). Risk factors in rural residence were small birth size (a proxy for low birthweight; AOR, 2.118; 95%CI: 1.600-2.804) and a birth interval <2 years (AOR, 2.149; 95%CI: 1.760-2.624). Cesarean delivery was a risk factor both in rural (AOR, 5.038; 95%CI: 2.617-9.700) and urban Nigeria (AOR, 2.632; 95%CI: 1.543-4.489). Determinants of neonatal mortality were different in rural and urban Nigeria, and rural neonates had a greater risk of mortality than their urban counterparts. © 2016 Japan Pediatric Society.
Jacob, Louis; Uvarova, Maria; Boulet, Sandrine; Begaj, Inva; Chevret, Sylvie
2016-06-02
Multi-Arm Multi-Stage designs compare several new treatments to a common reference, in order to select arms to move forward, or to drop them, as soon as sufficient evidence accumulates at interim analyses. We redesigned a Bayesian adaptive design initially proposed for dose-finding, focusing our interest on the comparison of multiple experimental drugs to a control on a binary criterion measure. We redesigned a phase II clinical trial that randomly allocates patients across three (one control and two experimental) treatment arms to assess dropping decision rules. We were interested in dropping any arm due to futility, based either on the historical control rate (first rule) or on comparison across arms (second rule), and in stopping an experimental arm due to its ability to reach a sufficient response rate (third rule), using the difference of response probabilities in Bayes binomial trials between the treated and control as a measure of treatment benefit. Simulations were then conducted to investigate the decision operating characteristics under a variety of plausible scenarios, as a function of the decision thresholds. Our findings suggest that one experimental treatment was less efficient than the control and could have been dropped from the trial based on a sample of approximately 20 instead of 40 patients. In the simulation study, stopping decisions were reached sooner for the first rule than for the second rule, with close mean estimates of response rates and small bias. According to the decision threshold, the mean sample size to detect the required 0.15 absolute benefit ranged from 63 to 70 (rule 3), with false negative rates of less than 2% (rule 1) up to 6% (rule 2). In contrast, detecting a 0.15 inferiority in response rates required a sample size ranging on average from 23 to 35 (rules 1 and 2, respectively), with a false positive rate ranging from 3.6% to 0.6% (rule 3). Adaptive trial designs are a good way to improve clinical trials: they allow ineffective drugs to be removed and the trial sample size to be reduced, while maintaining unbiased estimates. Decision thresholds can be set according to predefined fixed error decision rates. ClinicalTrials.gov Identifier: NCT01342692.
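The Bayes binomial comparison underlying the dropping rules reduces to a posterior probability computed from Beta distributions. The sketch below uses hypothetical arm counts, a 0.15 benefit margin, and a 10% futility threshold; the trial's actual priors and thresholds may differ:

```python
# Minimal sketch: posterior probability that an experimental arm beats
# control by a margin delta, via Monte Carlo on Beta posteriors.
import numpy as np

rng = np.random.default_rng(8)
delta = 0.15
n_c, x_c = 20, 8        # control: treated / responders (hypothetical)
n_e, x_e = 20, 13       # experimental arm (hypothetical)

# Beta(1,1) priors updated with observed successes and failures.
p_c = rng.beta(1 + x_c, 1 + n_c - x_c, 100_000)
p_e = rng.beta(1 + x_e, 1 + n_e - x_e, 100_000)

prob_benefit = np.mean(p_e - p_c > delta)
print(f"P(p_e - p_c > {delta}) = {prob_benefit:.3f}")
print("drop for futility" if prob_benefit < 0.10 else "continue arm")
```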
Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures
ERIC Educational Resources Information Center
Prodromou, Theodosia
2016-01-01
In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…
Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.
Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John
2016-01-01
Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of the grizzly bear's range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error, which arises when a visiting bear fails to leave a hair sample, has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within a season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best, occupancy was underestimated by 50%; at worst, by 95%. The probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections of extant bears. The implications of missed detections and biased occupancy estimates for density estimation, which form the crux of management plans, require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods, such as cameras, to ensure the reliability of the data upon which species management and conservation actions are based.
Exposure to pesticides residues from consumption of Italian blood oranges.
Fallico, B; D'Urso, M G; Chiappara, E
2009-07-01
This paper reports the results of a 5-year study to evaluate pesticide levels, derived from orchard activities, on Italy's most common orange cultivar (Citrus sinensis L. Osbeck, cv. Tarocco). Using a Bayesian approach, the study determined both the qualitative (number) and quantitative (amount) distributions of pesticides, each with an associated probability. Multi-residue analyses of 460 samples highlighted the presence of ethyl and methyl chlorpyrifos, dicofol, etofenprox, fenazaquin, fenitrothion, imazalil, malathion and metalaxyl-M. A total of 30.5% of samples contained just one pesticide, 2.16% contained two pesticides, and 0.65% contained three pesticides simultaneously. The most common residue was ethyl chlorpyrifos, followed by methyl chlorpyrifos. Estimated daily intake (EDI) values for ethyl and methyl chlorpyrifos, as well as the distance from the safety level (no-observed-adverse-effect level, NOAEL), were calculated. The risk was differentiated (1) to take account of the period of actual citrus consumption (180 days) and (2) to separate the risk derived from eating oranges containing a given level of chlorpyrifos from that posed by unspecified pesticides. The most likely EDI values for ethyl chlorpyrifos derived from Italian blood orange consumption are 0.01 and 0.006 mg/day, calculated over 180 and 365 days, respectively. Weighted by the probability of occurrence of ethyl chlorpyrifos, these EDI values reduce to 2.6 × 10⁻³ and 1.3 × 10⁻³ mg/day, respectively. For methyl chlorpyrifos, the most likely EDI values are 0.09 and 0.04 mg/day, respectively; weighted by its probability of occurrence, the EDI values decrease to 6.7 × 10⁻³ and 3.4 × 10⁻³ mg/day, respectively. The results confirmed that levels of pesticides in Italian Tarocco oranges derived from a known, controlled chain of production are safe.
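The occurrence-weighted intake adjustment described above is a simple multiplication; a hedged sketch follows, using the abstract's reported 180-day figures. The occurrence probabilities are back-calculated here for illustration and are not quoted from the paper.

```python
# Occurrence-weighted estimated daily intake (EDI). Probabilities are
# back-calculated from the reported raw and weighted EDIs, for illustration only.
raw_edi = {"ethyl chlorpyrifos": 0.01, "methyl chlorpyrifos": 0.09}      # mg/day
weighted_edi = {"ethyl chlorpyrifos": 2.6e-3, "methyl chlorpyrifos": 6.7e-3}

for pesticide, edi in raw_edi.items():
    p_occurrence = weighted_edi[pesticide] / edi
    print(f"{pesticide}: P(occurrence) ≈ {p_occurrence:.2f}, "
          f"weighted EDI = {edi * p_occurrence:.4f} mg/day")
```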
Mental health status and related characteristics of Chinese male rural-urban migrant workers.
Yang, Tingzhong; Xu, Xiaochao; Li, Mu; Rockett, Ian R H; Zhu, Waner; Ellison-Barnes, Alejandra
2012-06-01
To explore mental health status and related characteristics in a sample of Chinese male rural-urban migrants. There are approximately 120 million rural-urban migrants in China. Subjects were 1,595 male rural-urban migrant workers selected through a multi-stage sample survey conducted in two cities (Hangzhou and Guangzhou). Data were collected by means of a self-administered questionnaire. Both life and work stressors were examined. Stress and mental health status were measured by the Chinese Perceived Stress Scale (CPSS) and the Chinese Health Questionnaire (CHQ), respectively. Unconditional logistic regression analysis was performed to identify factors associated with probable mental disorders. The prevalence of probable mental disorders in the sample population was 24.4% (95% CI: 23.3-25.5%), which was higher than among urban residents (20.2%, 95% CI: 18.8-21.7%). Logistic regression analysis revealed that five characteristics were positively associated with risk for probable mental disorders: originating in the South (OR = 2.00; 95% CI = 1.02, 4.00), higher life stress (OR = 7.63; 95% CI = 5.88, 10.00), staying in the city for 5-9 months each year (OR = 2.56; 95% CI = 1.67, 3.85), higher work stress (OR = 2.56; 95% CI = 1.96, 3.33), and separation from one's wife (OR = 2.43; 95% CI = 1.61, 3.57). Employment in machinery and transportation (OR = 0.54; 95% CI = 0.36, 0.81) and higher self-worth (OR = 0.42; 95% CI = 0.28, 0.62) were negatively associated. The findings support an urgent need to develop specific policies and programs to address mental health problems among Chinese rural-urban migrants.
Height and Weight of Children: United States.
ERIC Educational Resources Information Center
Hamill, Peter V. V.; And Others
This report contains national estimates based on findings from the Health Examination Survey in 1963-65 on height and weight measurements of children 6- to 11-years-old. A nationwide probability sample of 7,119 children was selected to represent the noninstitutionalized children (about 24 million) in this age group. Height was obtained in stocking…
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each planning duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
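The core of a Monte Carlo treatment of interval-valued criteria can be shown in a few lines: sample uniformly within each interval, aggregate, and rank. The alternatives, intervals, and equal-weight minimizing score below are invented for the sketch and are not the study's data or exact scoring rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Interval-valued criteria for three hypothetical alternatives:
# (cost, concentration, risk) given as (low, high) bounds; lower is better.
alternatives = {
    "A1": [(1.0, 1.4), (0.2, 0.5), (0.1, 0.3)],
    "A2": [(0.8, 1.6), (0.3, 0.4), (0.2, 0.4)],
    "A3": [(1.2, 1.3), (0.1, 0.6), (0.1, 0.2)],
}

def mc_score(intervals, n=10_000):
    """Mean aggregate score from uniform sampling within each interval."""
    draws = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in intervals])
    return draws.sum(axis=1).mean()

ranked = sorted(alternatives, key=lambda k: mc_score(alternatives[k]))
print("preference order (best first):", ranked)
```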
Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi
In this study, an expert knowledge-based automatic sleep stage determination system built on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is used to obtain the expert knowledge database, which consists of probability density functions of parameters for the various sleep stages. Sleep stages are then determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can adapt to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can serve as an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
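One common way to turn stage-wise parameter PDFs into a stage decision is Bayes' rule over class-conditional densities. The sketch below is an assumption about the general approach, not the authors' exact system: each stage's parameter distribution is modeled as a Gaussian (invented means, standard deviations, and priors), and the stage with the highest posterior probability wins.

```python
from scipy.stats import norm

# Illustrative per-stage models (density, prior) for one EEG-derived feature.
stages = {
    "awake": (norm(10.0, 2.0), 0.30),
    "REM":   (norm(6.0, 1.5),  0.20),
    "light": (norm(4.0, 1.0),  0.35),
    "deep":  (norm(2.0, 0.8),  0.15),
}

def classify(x: float) -> dict:
    """Posterior P(stage | x) via Bayes' rule over Gaussian densities."""
    joint = {s: pdf.pdf(x) * prior for s, (pdf, prior) in stages.items()}
    z = sum(joint.values())
    return {s: v / z for s, v in joint.items()}

post = classify(5.0)
print(max(post, key=post.get), {s: round(p, 3) for s, p in post.items()})
```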
Race/Ethnicity, Poverty, Urban Stressors and Telomere Length in a Detroit Community-Based Sample
Geronimus, Arline T.; Pearson, Jay A.; Linnenbringer, Erin; Schulz, Amy J.; Reyes, Angela G.; Epel, Elissa S.; Lin, Jue; Blackburn, Elizabeth H.
2015-01-01
Residents of distressed urban areas suffer early aging-related disease and excess mortality. Using a community-based participatory research approach in a collaboration between social researchers and cellular biologists, we collected a unique data set of 239 black, white, or Mexican adults from a stratified, multi-stage probability sample of three Detroit neighborhoods. We drew venous blood and measured Telomere Length (TL), an indicator of stress-mediated biological aging, linking respondents’ TL to their community survey responses. We regressed TL on socioeconomic, psychosocial, neighborhood, and behavioral stressors, hypothesizing and finding an interaction between poverty and racial/ethnic group. Poor whites had shorter TL than nonpoor whites; poor and nonpoor blacks had equivalent TL; poor Mexicans had longer TL than nonpoor Mexicans. Findings suggest unobserved heterogeneity bias is an important threat to the validity of estimates of TL differences by race/ethnicity. They point to health impacts of social identity as contingent, the products of structurally-rooted biopsychosocial processes. PMID:25930147
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
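A toy simulation helps make the Bernoulli packet-loss setting concrete. The sketch below is not the paper's Lyapunov-based controller design; it simply runs a sampled-data consensus iteration on a four-agent ring where each link's packet is delivered with probability 1 − p_loss at every sampling instant (all gains and topology are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Undirected ring of 4 agents; each edge's transmission succeeds with
# probability 1 - p_loss at every sampling instant (Bernoulli losses).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
p_loss, eps, n_agents = 0.3, 0.25, 4
x = rng.normal(size=n_agents)          # initial agent states

for step in range(200):
    update = np.zeros(n_agents)
    for i, j in edges:
        if rng.random() > p_loss:      # packet delivered this interval
            update[i] += x[j] - x[i]
            update[j] += x[i] - x[j]
    x = x + eps * update               # sampled-data consensus step

print("states after 200 sampled updates:", np.round(x, 4))
```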
Haynes, Trevor B.; Rosenberger, Amanda E.; Lindberg, Mark S.; Whitman, Matthew; Schmutz, Joel A.
2013-01-01
Studies examining species occurrence often fail to account for false absences in field sampling. We investigate detection probabilities of five gear types for six fish species in a sample of lakes on the North Slope, Alaska. We used an occupancy modeling approach to provide estimates of detection probabilities for each method. Variation in gear- and species-specific detection probability was considerable. For example, detection probabilities for the fyke net ranged from 0.82 (SE = 0.05) for least cisco (Coregonus sardinella) to 0.04 (SE = 0.01) for slimy sculpin (Cottus cognatus). Detection probabilities were also affected by site-specific variables such as depth of the lake, year, day of sampling, and lake connection to a stream. With the exception of the dip net and shore minnow traps, each gear type provided the highest detection probability of at least one species. Results suggest that a multimethod approach may be most effective when attempting to sample the entire fish community of Arctic lakes. Detection probability estimates will be useful for designing optimal fish sampling and monitoring protocols in Arctic lakes.
Genotype Imputation with Millions of Reference Samples
Browning, Brian L.; Browning, Sharon R.
2016-01-01
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515
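Part of Beagle v.4.1's efficiency, as the abstract notes, comes from restricting the probability model to genotyped markers and linearly interpolating at ungenotyped positions. A hedged sketch of that interpolation step follows; the marker positions and probabilities are invented stand-ins, not real model output.

```python
import numpy as np

# Posterior alternate-allele probabilities computed only at genotyped
# marker positions (base-pair coordinates); values are illustrative.
geno_pos  = np.array([1_000, 5_000, 9_000])
geno_prob = np.array([0.10, 0.80, 0.30])

# Ungenotyped variants are imputed by linear interpolation between the
# flanking genotyped markers, in the spirit of the approach described above.
target_pos = np.array([2_000, 6_500, 8_000])
imputed = np.interp(target_pos, geno_pos, geno_prob)
print(dict(zip(target_pos.tolist(), np.round(imputed, 3))))
```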
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bansal, Artee; Asthagiri, D.; Cox, Kenneth R.
A mixture of solvent particles with short-range, directional interactions and solute particles with short-range, isotropic interactions that can bond multiple times is of fundamental interest in understanding liquids and colloidal mixtures. Because of multi-body correlations, predicting the structure and thermodynamics of such systems remains a challenge. Earlier, Marshall and Chapman [J. Chem. Phys. 139, 104904 (2013)] developed a theory wherein association effects due to interactions multiply the partition function for clustering of particles in a reference hard-sphere system. The multi-body effects are incorporated in the clustering process, which in their work was obtained in the absence of the bulk medium. The bulk solvent effects were then modeled approximately within a second order perturbation approach. However, their approach is inadequate at high densities and for large association strengths. Based on the idea that the clustering of solvent in a defined coordination volume around the solute is related to occupancy statistics in that defined coordination volume, we develop an approach to incorporate the complete information about hard-sphere clustering in a bulk solvent at the density of interest. The occupancy probabilities are obtained from enhanced sampling simulations, but we also develop a concise parametric form to model these probabilities using the quasichemical theory of solutions. We show that incorporating the complete reference information results in an approach that can predict the bonding state and thermodynamics of the colloidal solute for a wide range of system conditions.
Reither, Klaus; Manyama, Christina; Clowes, Petra; Rachow, Andrea; Mapamba, Daniel; Steiner, Andreas; Ross, Amanda; Mfinanga, Elirehema; Sasamalo, Mohamed; Nsubuga, Martin; Aloi, Francesco; Cirillo, Daniela; Jugheli, Levan; Lwilla, Fred
2015-04-01
Following endorsement by the World Health Organisation, the Xpert MTB/RIF assay has been widely incorporated into algorithms for the diagnosis of adult tuberculosis (TB). However, data on its performance in children remain scarce. This prospective, multi-centre study evaluated the performance of Xpert MTB/RIF for diagnosing pulmonary tuberculosis in children. Children older than eight weeks and younger than 16 years with suspected pulmonary tuberculosis were enrolled at three TB-endemic settings in Tanzania and Uganda, and assigned to five well-defined case definition categories: culture-confirmed TB, highly probable TB, probable TB, not TB, or indeterminate. The diagnostic accuracy of Xpert MTB/RIF was assessed using culture-confirmed TB cases as the reference standard. In total, 451 children were enrolled: 37 (8%) had culture-confirmed TB, 48 (11%) highly probable TB and 62 (13%) probable TB. The Xpert MTB/RIF assay had a sensitivity of 68% (95% CI, 50%-82%) and a specificity of 100% (95% CI, 97%-100%), detecting 1.7 times more culture-confirmed cases than smear microscopy with a similar time to detection. Xpert MTB/RIF was positive in 2% (1/48) of highly probable and in 3% (2/62) of probable TB cases. Xpert MTB/RIF provided timely results with moderate sensitivity and excellent specificity compared to culture. Low yields in children with highly probable and probable TB remain problematic. Copyright © 2014 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
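Diagnostic accuracy here reduces to a 2×2 comparison against culture. The sketch below reconstructs the sensitivity count from the abstract's percentages (25/37 ≈ 68%), so treat the numbers as approximate; the Wilson interval is used for illustration and the paper's own CI method may differ.

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 25/37 Xpert-positive among culture-confirmed cases reproduces the
# reported 68% sensitivity (count reconstructed from the abstract).
k, n = 25, 37
lo, hi = wilson_ci(k, n)
print(f"sensitivity = {k/n:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```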
Gavett, Brandon E
2015-03-01
The base rates of abnormal test scores in cognitively normal samples have been a focus of recent research. The goal of the current study is to illustrate how Bayes' theorem uses these base rates, along with the same base rates in cognitively impaired samples and the prevalence of cognitive impairment, to yield probability values that are more useful for making judgments about the absence or presence of cognitive impairment. Correlation matrices, means, and standard deviations were obtained from the Wechsler Memory Scale-Fourth Edition (WMS-IV) Technical and Interpretive Manual and used in Monte Carlo simulations to estimate the base rates of abnormal test scores in the standardization and special groups (mixed clinical) samples. Bayes' theorem was applied to these estimates to identify probabilities of normal cognition based on the number of abnormal test scores observed. Abnormal scores were common in the standardization sample (65.4% scoring below a scaled score of 7 on at least one subtest) and more common in the mixed clinical sample (85.6% scoring below a scaled score of 7 on at least one subtest). Probabilities varied according to the number of abnormal test scores, the base rates of normal cognition, and the cutoff scores. The results suggest that interpretation of base rates obtained from cognitively healthy samples must also account for data from cognitively impaired samples. Bayes' theorem can help neuropsychologists answer questions about the probability that an individual examinee is cognitively healthy based on the number of abnormal test scores observed.
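The core computation is a direct application of Bayes' theorem to the event "at least one abnormal score." The sketch below plugs in the base rates quoted in the abstract (65.4% healthy, 85.6% impaired); the 20% prevalence of impairment is an illustrative assumption, not a figure from the study.

```python
def p_healthy_given_abnormal(p_abn_healthy, p_abn_impaired, prevalence):
    """P(cognitively healthy | >= 1 abnormal score) via Bayes' theorem."""
    p_healthy = 1.0 - prevalence
    numer = p_abn_healthy * p_healthy
    denom = numer + p_abn_impaired * prevalence
    return numer / denom

# Base rates from the abstract; the 20% prevalence is assumed for illustration.
print(f"{p_healthy_given_abnormal(0.654, 0.856, 0.20):.3f}")
```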
Systematic sampling for suspended sediment
Robert B. Thomas
1991-01-01
Because of high costs or complex logistics, scientific populations cannot be measured entirely and must be sampled. Accepted scientific practice holds that sample selection be based on statistical principles to assure objectivity when estimating totals and variances. Probability sampling, obtaining samples with known probabilities, is the only method that...
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio model and add an entropy objective function to generate a well-diversified asset portfolio within an optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
The Sampling Design of the China Family Panel Studies (CFPS)
Xie, Yu; Lu, Ping
2018-01-01
The China Family Panel Studies (CFPS) is an on-going, nearly nationwide, comprehensive, longitudinal social survey that is intended to serve research needs on a large variety of social phenomena in contemporary China. In this paper, we describe the sampling design of the CFPS sample for its 2010 baseline survey and methods for constructing weights to adjust for sampling design and survey nonresponses. Specifically, the CFPS used a multi-stage probability strategy to reduce operation costs and implicit stratification to increase efficiency. Respondents were oversampled in five provinces or administrative equivalents for regional comparisons. We provide operation details for both sampling and weights construction. PMID:29854418
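Design weights for a multi-stage probability sample like this are typically the inverse of the overall inclusion probability, with a nonresponse adjustment. The generic sketch below illustrates the mechanics under assumed stage-wise selection probabilities and response rate; it is not the CFPS's actual weighting procedure.

```python
# Generic design-weight construction for a multi-stage sample; the
# stage-wise selection probabilities and response rate are illustrative.
p_stage = [0.05, 0.20, 0.50]       # e.g. county, village, household selection
response_rate = 0.81               # within the respondent's weighting cell

p_inclusion = 1.0
for p in p_stage:
    p_inclusion *= p               # overall inclusion probability

base_weight = 1.0 / p_inclusion             # inverse inclusion probability
final_weight = base_weight / response_rate  # nonresponse-adjusted
print(f"base weight = {base_weight:.1f}, final weight = {final_weight:.1f}")
```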
Prevalence of anxiety, depression and post-traumatic stress disorder in the Kashmir Valley
Lenglet, Annick; Ariti, Cono; Shah, Showkat; Shah, Helal; Ara, Shabnum; Viney, Kerri; Janes, Simon; Pintaldi, Giovanni
2017-01-01
Background: Following the partition of India in 1947, the Kashmir Valley has been subject to continual political insecurity and ongoing conflict, and the region remains highly militarised. We conducted a representative cross-sectional population-based survey of adults to estimate the prevalence and predictors of anxiety, depression and post-traumatic stress disorder (PTSD) in the 10 districts of the Kashmir Valley. Methods: Between October and December 2015, we interviewed 5519 of 5600 invited participants, ≥18 years of age, randomly sampled using a probability-proportional-to-size cluster sampling design. We estimated the prevalence of a probable psychological disorder using the Hopkins Symptom Checklist (HSCL-25) and the Harvard Trauma Questionnaire (HTQ-16). Both screening instruments had been culturally adapted and translated. Data were weighted to account for the sampling design, and multivariate logistic regression analysis was conducted to identify risk factors for developing symptoms of psychological distress. Findings: The estimated prevalence of mental distress in adults in the Kashmir Valley was 45% (95% CI 42.6 to 47.0). We identified 41% (95% CI 39.2 to 43.4) of adults with probable depression, 26% (95% CI 23.8 to 27.5) with probable anxiety and 19% (95% CI 17.5 to 21.2) with probable PTSD. The three disorders were associated with the following characteristics: being female, being over 55 years of age, having had no formal education, living in a rural area and being widowed, divorced or separated. A dose-response association was found between the number of traumatic events experienced or witnessed and all three mental disorders. Interpretation: Mental health awareness programmes, interventions aimed at high-risk groups, and treatment of trauma-related symptoms from all causes are needed in the Kashmir Valley. PMID:29082026
Andersen, Judith P; Blosnich, John
2013-01-01
Adverse childhood experiences (e.g., physical, sexual and emotional abuse, neglect, exposure to domestic violence, parental discord, familial mental illness, incarceration and substance abuse) constitute a major public health problem in the United States. The Adverse Childhood Experiences (ACE) scale is a standardized measure that captures multiple developmental risk factors beyond sexual, physical and emotional abuse. Lesbian, gay, and bisexual (i.e., sexual minority) individuals may experience disproportionately higher prevalence of adverse childhood experiences. To examine, using the ACE scale, prevalence of childhood physical, emotional, and sexual abuse and childhood household dysfunction among sexual minority and heterosexual adults. Analyses were conducted using a probability-based sample of data pooled from three U.S. states' Behavioral Risk Factor Surveillance System (BRFSS) surveys (Maine, Washington, Wisconsin) that administered the ACE scale and collected information on sexual identity (n = 22,071). Compared with heterosexual respondents, gay/lesbian and bisexual individuals experienced increased odds of six of eight and seven of eight adverse childhood experiences, respectively. Sexual minority persons had higher rates of adverse childhood experiences (IRR = 1.66 gay/lesbian; 1.58 bisexual) compared to their heterosexual peers. Sexual minority individuals have increased exposure to multiple developmental risk factors beyond physical, sexual and emotional abuse. We recommend the use of the Adverse Childhood Experiences scale in future research examining health disparities among this minority population.
A Cloud Boundary Detection Scheme Combined with ASLIC and CNN Using ZY-3, GF-1/2 Satellite Imagery
NASA Astrophysics Data System (ADS)
Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.
2018-04-01
Cloud detection in optical remote sensing imagery is one of the most important problems in remote sensing data processing. Aiming at the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to extract a multi-level feature generation model of cloud from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability that each superpixel belongs to the cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected to carry out cloud detection tests, and the method was compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection increased by more than 5%, and that the method can detect both thin and thick clouds and delineate whole cloud boundaries well on different imaging platforms.
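The final step, assigning each superpixel a single cloud probability and thresholding the map, can be sketched schematically. The arrays below are invented stand-ins for the CNN scores and the ASLIC segmentation, and per-superpixel averaging is an assumed aggregation rule, not necessarily the authors' exact one.

```python
import numpy as np

# Stand-ins: per-pixel CNN cloud scores and an ASLIC superpixel label map.
rng = np.random.default_rng(7)
cnn_scores = rng.random((4, 6))
superpixels = np.array([[0, 0, 1, 1, 2, 2],
                        [0, 0, 1, 1, 2, 2],
                        [3, 3, 4, 4, 5, 5],
                        [3, 3, 4, 4, 5, 5]])

cloud_prob = np.zeros_like(cnn_scores)
for label in np.unique(superpixels):
    mask = superpixels == label
    cloud_prob[mask] = cnn_scores[mask].mean()   # one probability per superpixel

cloud_mask = cloud_prob > 0.5                    # final binary cloud map
print(cloud_mask.astype(int))
```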
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice or on a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use purposive sampling for the study.
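A minimal illustration of two of the probability sampling methods the module names, simple random and stratified random sampling, follows; the toy population and sample size are invented for the sketch.

```python
import random

random.seed(3)
population = [("urban", i) for i in range(800)] + [("rural", i) for i in range(200)]

# Simple random sample: every unit has the same selection probability.
srs = random.sample(population, 50)

# Stratified random sample: draw proportionally within each stratum.
strata = {"urban": [u for u in population if u[0] == "urban"],
          "rural": [u for u in population if u[0] == "rural"]}
stratified = [u for name, units in strata.items()
              for u in random.sample(units, int(50 * len(units) / len(population)))]

print(len(srs), {s: sum(1 for u in stratified if u[0] == s) for s in strata})
```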
Influence of level of education on disability free life expectancy by sex: the ILSA study.
Minicuci, N; Noale, M
2005-12-01
To assess the effect of education on disability-free life expectancy among older Italians, using a hierarchical model as the indicator of disability, with estimates based on the multistate life table method and the IMaCh software. Data were obtained from the Italian Longitudinal Study on Aging, which considered a random sample of 5632 individuals. Total life expectancy ranged from 16.5 years for men aged 65 years to 6 years for men aged 80; the corresponding figures for women were 19.6 and 8.4 years. For both sexes, increasing age was associated with a lower probability of recovery from a mild state of disability, with a greater probability of worsening for all individuals presenting an independent state at baseline, and with a greater probability of dying, except for women in a mild state of disability. A medium/high educational level was associated with a greater probability of recovery only in men with a mild state of disability at baseline, and with a lower probability of worsening in both sexes, except for men with a mild state of disability at baseline. The positive effects of high education are well established in most research work; being a modifiable factor, strategies focused on increasing the level of education, and hence strengthening access to information and use of health services, would produce significant benefits.
Prevalence of dementia among population age over 45 years in Chiang Mai, Thailand.
Wangtongkum, Suparus; Sucharitkul, Phongsakorn; Silprasert, Nutcharut; Inthrachak, Rudeethawinl
2008-11-01
To determine the prevalence of dementia in Thai people aged 45 years and above. This project used a cross-sectional research design to study the prevalence of dementia in Chiang Mai. A door-to-door technique was combined with multi-stage probability random sampling to obtain subjects representing the population of Chiang Mai between October 2004 and September 2005. The researchers collected data from subjects aged 45 years and above, drawn from every Amphur (district) of Chiang Mai. Subjects were first screened with the Thai Mini-Mental State Examination (TMSE) and the Thai Beck Depression Inventory (BDI). Subjects whose TMSE score was less than 24 were assessed and diagnosed by a neurologist. Subjects determined to have dementia underwent laboratory analysis and were classified based on DSM-IV and NINDS-AIREN criteria. The authors enrolled 2,311 people and screened them with the test battery. One thousand four hundred ninety-two people qualified, 610 males and 882 females, with a mean age of 59.7 +/- 10.4 years. Among the 35 people with dementia, the mean age was 67.9 +/- 8.9 years (range 45-88 years). The prevalence of dementia among the study participants was 2.35%. In the present study, Alzheimer's disease was the most common type of dementia diagnosed (75.0%) and vascular dementia was the second most common (12.5%). The prevalence of dementia in Chiang Mai was 2.35%, which does not differ from previous studies; Alzheimer's disease was the most common type of dementia diagnosed.
NASA Technical Reports Server (NTRS)
Aldrich, R. C.; Dana, R. W.; Roberts, E. H. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A stratified random sample using LANDSAT band 5 and 7 panchromatic prints resulted in estimates of water area in counties with sampling errors of less than ±9% (67% probability level). A forest inventory using a four-band LANDSAT color composite resulted in estimates of forest area by county that were within ±6.7% and ±3.7%, respectively (67% probability level). Estimates of forest area for counties by computer-assisted techniques were within ±21% of operational forest survey figures, and across all counties the difference was only one percent. Correlations of airborne terrain reflectance measurements with LANDSAT radiance verified a linear atmospheric model with an additive (path radiance) term and a multiplicative (transmittance) term. Coefficients of determination exceeded 0.83 for 28 of the 32 modeling attempts that were not adversely affected by a rain shower occurring between the times of LANDSAT passage and the aircraft overflights.
Larm, Peter; Åslund, Cecilia; Starrin, Bengt; Nilsson, K W
2016-07-01
This study examined whether social capital and a sense of coherence are associated with hazardous alcohol use in a large population-based Swedish sample. In particular, the objectives were (a) to examine which of five subdimensions of social capital are associated with hazardous alcohol use, (b) to investigate the moderating role of sense of coherence and (c) to examine possible sex differences. A postal survey was distributed to a sample of respondents (aged 18-84 years) from five Swedish counties, stratified by sex, age and city; 40,674 (59.2%) participants responded, of whom 45.5% were men and 54.5% were women, with a mean±SD age of 53.8±17.9 years. Structural dimensions of social capital were associated with an increased probability of hazardous alcohol use among both men and women, whereas the increased probability associated with cognitive dimensions occurred mostly among women. Sense of coherence was robustly associated with a decreased probability of hazardous alcohol use among both men and women. There were few moderating effects of sense of coherence, and sex differences emerged mainly for the cognitive dimension of social capital. Conclusions: Associations between social capital dimensions and hazardous alcohol use were partly sex-specific, whereas the benefits of a sense of coherence accrued to both sexes. Social capital dimensions and sense of coherence were generally unrelated to each other. Only associations between the cognitive dimensions of social capital and hazardous alcohol use differed by sex. © 2016 the Nordic Societies of Public Health.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grippo, Mark A.; Hlohowskyj, Ihor; Fox, Laura
The U.S. Army Corps of Engineers (USACE) is conducting the Great Lakes and Mississippi River Interbasin Study (GLMRIS) to determine the aquatic nuisance species (ANS) currently established in either the Mississippi River Basin (MRB) or the Great Lakes Basin (GLB) that pose the greatest risk to the other basin. The GLMRIS study focuses specifically on ANS transfer through the Chicago Area Waterway System (CAWS), a multi-use waterway connecting the two basins. In support of GLMRIS, we conducted a qualitative risk assessment for 34 ANS in which we determined the overall risk level for four time intervals over a 50-year period of analysis, based on the probability of ANS establishing in a new basin and the environmental, economic, and sociopolitical consequences of their establishment. Probability of establishment and consequences of establishment were assigned qualitative ratings of high, medium, or low, and establishment and consequence ratings were then combined into an overall risk rating. Over the 50-year period of analysis, seven species were characterized as posing a medium risk and two species as posing a high risk to the MRB. Three species were characterized as posing a medium risk to the GLB, but no high-risk species were identified for this basin. Based on the time frame in which these species were considered likely to establish in the new basin, risk increased over time for some ANS. Identifying and prioritizing ANS risk supported the development and evaluation of multiple control alternatives that could reduce the probability of interbasin ANS transfer. However, both species traits and the need to balance multiple uses of the CAWS make it difficult to design cost-efficient and socially acceptable controls to reduce the probability of ANS transfer between the two basins.
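Combining qualitative establishment and consequence ratings into an overall risk level is essentially a lookup rule. The sketch below uses a conservative "weakest-link" combination as an illustrative assumption; the GLMRIS study's actual combination matrix is not specified in the abstract.

```python
# Qualitative risk combination: establishment probability x consequence
# ratings -> overall risk. The min-based rule is an illustrative assumption.
LEVELS = {"low": 0, "medium": 1, "high": 2}
NAMES = {v: k for k, v in LEVELS.items()}

def overall_risk(p_establish: str, consequence: str) -> str:
    """Overall rating limited by the lower of the two component ratings."""
    return NAMES[min(LEVELS[p_establish], LEVELS[consequence])]

print(overall_risk("high", "medium"))   # -> 'medium'
```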
Pay attention to the study on active antiliver fibrosis components of Chinese herbal medicine.
Hu, Yi-Yang
2012-08-01
In this review, research conducted in China over the past 20 years on Chinese herbal components with anti-hepatic-fibrosis activity is summarized. Almost thirty active herbal components attracted the author's attention, especially salvianolic acid B and oxymatrine, which have been investigated comprehensively. Moreover, the author considers that, in view of the complex pathogenesis of liver fibrosis and the multi-pathway, multi-target superiority of Chinese medicine formulas, investigations of effective-component formulas deserve more attention and probably represent a promising research direction.
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house-of-cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation from neutrality derives from the Shahshahani mutation/selection potential. We next focus on the weak-selection, weak-mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) are derived from the equilibrium distributions. We first treat the ESF in the mixed mutation/selection potential case and then restrict ourselves to the ESF in the simpler situation with house-of-cards mutations only. We also address some issues concerning sampling problems from infinitely-many-alleles weak limits.
Retrieving Biome Types from Multi-angle Spectral Data
NASA Astrophysics Data System (ADS)
Schull, M. A.; Xu, L.; Latorre, P.; Samanta, A.; Myneni, R. B.; Knyazikhin, Y.
2009-12-01
Many studies have been conducted to demonstrate the ability of multi-angle spectral data to discriminate dominant plant species. Most have employed empirically based techniques, which are site-specific, require initial training based on characteristics of known leaf and/or canopy spectra, and therefore may not be extendable to operational use or adaptable to changing or unknown land cover. An ancillary objective of the MISR LAI/FPAR algorithm is the classification of global vegetation into biome types. The algorithm is based on the 3D radiative transfer equation. Its performance suggests that it yields valid LAI retrievals and correct biome identification in about 20% of the pixels. However, with a probability of about 70%, uncertainties in LAI retrievals due to biome misclassification do not exceed uncertainties in the observations. In this poster we present an approach to improve the reliability of the distribution of biomes and dominant species retrieved from multi-angle spectral data. The radiative transfer theory of canopy spectral invariants underlies the approach, which facilitates parameterization of the canopy bidirectional reflectance factor in terms of the leaf spectrum and two spectrally invariant and structurally varying variables: the recollision and directional escape probabilities. Theoretical and empirical analyses of ground and airborne data acquired by AVIRIS and AirMISR over two sites in New England, and by CHRIS/PROBA over the BARAX site in Spain, suggest that the canopy spectral invariants convey information about canopy structure at both the macro and micro scales. These properties allow for the natural separation of biome classes based on the location of points on the log-log plane of total escape probability versus proportional escape ratio.
Validating long-term satellite-derived disturbance products: the case of burned areas
NASA Astrophysics Data System (ADS)
Boschetti, L.; Roy, D. P.
2015-12-01
The potential research, policy and management applications of satellite products place a high priority on providing statements about their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Climate Data Record. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted, characterized by the selection of reference data via probability sampling that can subsequently be used to compute accuracy metrics taking the sampling probability into account. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for the validation of global products that are highly variable in time and space (e.g. snow, floods or other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost of collecting independent reference data. We propose a three-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space. To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide three-dimensional elements (voxels). This allows the implementation of a sampling design in which not only the location but also the time interval of the reference data is explicitly drawn by probability sampling. The proposed sampling design is a stratified random sampling, with two-level stratification of the voxels based on biomes and fire activity. This validation approach, used for the validation of the MODIS and forthcoming VIIRS global burned area products, is a general one and could be used for the validation of other global products that are highly variable in space and time, as required to assess the accuracy of climate records. The approach is demonstrated using a 1-year dataset of MODIS fire products.
O’Neill, M S; Diez-Roux, A V; Auchincloss, A H; Franklin, T G; Jacobs, D R; Astor, B C; Dvonch, J T; Kaufman, J
2010-01-01
Objectives: Understanding mechanistic pathways linking airborne particle exposure to cardiovascular health is important for causal inference and setting environmental standards. We evaluated whether urinary albumin excretion, a subclinical marker of microvascular function that predicts cardiovascular events, was associated with ambient particle exposure. Methods: Urinary albumin and creatinine were measured among members of the Multi-Ethnic Study of Atherosclerosis at three visits during 2000-2004. Exposure to PM2.5 and PM10 (µg/m3) was estimated from ambient monitors for 1 month, 2 months and two decades before visit one. We regressed recent and chronic (20-year) particulate matter (PM) exposure on the urinary albumin/creatinine ratio (UACR, mg/g) and microalbuminuria at the first examination, controlling for age, race/ethnicity, sex, smoking, second-hand smoke exposure, body mass index and dietary protein (n = 3901). We also evaluated UACR changes and development of microalbuminuria between the first, and second and third visits, which took place at 1.5- to 2-year intervals, in relation to chronic PM exposure prior to baseline using mixed models. Results: Chronic and recent particle exposures were not associated with current UACR or microalbuminuria (per 10 µg/m3 increment of chronic PM10 exposure, mean difference in log UACR = −0.02 (95% CI −0.07 to 0.03) and relative probability of having microalbuminuria = 0.92 (95% CI 0.77 to 1.08)). We found only weak evidence that albuminuria was accelerated among those chronically exposed to particles: each 10 µg/m3 increment in chronic PM10 exposure was associated with a 1.14 relative probability of developing microalbuminuria over 3-4 years, although the 95% confidence interval included the null (95% CI 0.96 to 1.36). Conclusions: UACR is not a strong mechanistic marker for the possible influence of air pollution on cardiovascular health in this sample. PMID:18032533
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities, without being overly conservative, when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of the response; and a 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-04-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely the conditional probability tables, can be calculated by reference to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM, with the absorbing-set results obtained from the differential equations and verified. Through forward inference, the reliability of the control unit is determined under the different maintenance modes. Finally, weak nodes in the control unit are identified.
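As a minimal sketch of the Markov side of such a model, the snippet below propagates state probabilities for a three-state degradation chain with the discrete-time Chapman-Kolmogorov recursion; the transition probabilities are invented, not the paper's control-unit data, and repair transitions are omitted for brevity.

```python
import numpy as np

# Three-state element: 0 = good, 1 = degraded, 2 = failed (absorbing).
# One-step transition matrix with illustrative probabilities.
P = np.array([[0.95, 0.04, 0.01],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])     # element starts in the good state
for t in range(50):
    state = state @ P                 # Chapman-Kolmogorov update

reliability = 1.0 - state[2]          # P(not yet absorbed in the failed state)
print(f"reliability at t=50: {reliability:.3f}")
```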
Electrochemical Biosensors for Rapid Detection of Foodborne Salmonella: A Critical Overview
Cinti, Stefano; Volpe, Giulia; Piermarini, Silvia; Delibato, Elisabetta; Palleschi, Giuseppe
2017-01-01
Salmonella has represented the most common and primary cause of food poisoning in many countries for over 100 years. Its detection is still primarily based on traditional microbiological culture methods, which are labor-intensive, extremely time consuming, and not suitable for testing a large number of samples. Accordingly, great efforts to develop rapid, sensitive and specific methods that are easy to use and suitable for multi-sample analysis have been made and continue. Biosensor-based technology has all the potential to meet these requirements. In this paper, we review the features of the electrochemical immunosensors, genosensors, aptasensors and phagosensors developed in the last five years for Salmonella detection, focusing on the critical aspects of their application in food analysis. PMID:28820458
NASA Astrophysics Data System (ADS)
Kim, Ok-Yeon; Kim, Hye-Mi; Lee, Myong-In; Min, Young-Mi
2017-01-01
This study aims at predicting the seasonal number of typhoons (TY) over the western North Pacific with an Asia-Pacific Climate Center (APCC) multi-model ensemble (MME)-based dynamical-statistical hybrid model. The hybrid model uses the statistical relationship between the number of TY during the typhoon season (July-October) and large-scale key predictors forecast by the APCC MME for the same season. The cross-validation result from the MME hybrid model demonstrates high prediction skill, with a correlation of 0.67 between the hindcasts and observations for 1982-2008. Cross-validation of the hybrid model with the individual models participating in the MME indicates that no single model consistently outperforms the others in predicting typhoon number. Although the forecast skill of the MME is not always the highest compared to that of each individual model, the MME yields a higher average correlation with smaller variance across correlations. Given the large set of ensemble members from multiple models, a relative operating characteristic score reveals an 82% (above-normal) and 78% (below-normal) improvement for the probabilistic prediction of the number of TY, implying an 82% (78%) probability that the forecasts can successfully discriminate above-normal (below-normal) years from other years. The forecast skill of the hybrid model for the past 7 years (2002-2008) exceeds that of the forecast from the Tropical Storm Risk consortium. Using the large set of ensemble members from multiple models, the APCC MME could provide useful deterministic and probabilistic seasonal typhoon forecasts to end-users, in particular the residents of tropical cyclone-prone areas in the Asia-Pacific region.
Gao, Xueping; Liu, Yinzhu; Sun, Bowen
2018-06-05
The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and differing water demands, brings new challenges to water transfer projects. Uncertainties exist for both transferred water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the degree of shortage under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferred water and local surface water and to sample from their multivariate probability distribution, providing inputs for the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating the management of transferred water and local surface water, and examines their combined influence on water shortage risk assessment. Possible available water and shortages can be calculated with the UWSRAM, along with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for understanding the overall multi-source water availability and degree of water shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers and achieving sustainable development.
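The copula step can be sketched in a few lines: draw correlated uniforms from a Gaussian copula, then map them through each water source's marginal distribution and count shortfall events. The correlation, gamma marginals, and demand threshold below are illustrative assumptions, not the study's fitted values or its chosen copula family.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
rho, n = 0.6, 100_000                      # assumed dependence and sample size

# Gaussian copula: correlated normals -> uniforms -> marginal quantiles.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)

transfer = stats.gamma(a=4.0, scale=25.0).ppf(u[:, 0])   # transferred water
local    = stats.gamma(a=3.0, scale=30.0).ppf(u[:, 1])   # local surface water

demand = 200.0                             # illustrative total demand
shortage_risk = np.mean(transfer + local < demand)
print(f"P(total supply < demand) = {shortage_risk:.3f}")
```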
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.
2017-12-01
Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.
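RSQSim itself is a large boundary-element code, but the forecasting statistic described here can be illustrated on any long synthetic catalog. A toy sketch (Python; the Poissonian catalog is only a stand-in for real simulator output):

    import numpy as np

    def conditional_prob(times, mags, m_thresh=7.0, window_yr=5.0):
        """P(another M >= m_thresh event within window_yr of an
        M >= m_thresh event), estimated from a synthetic catalog."""
        t = np.sort(times[mags >= m_thresh])
        if len(t) < 2:
            return np.nan
        return np.mean(np.diff(t) <= window_yr)

    # toy catalog: Poissonian times, exponential (G-R-like) magnitude tail
    rng = np.random.default_rng(2)
    times = np.cumsum(rng.exponential(0.05, size=200_000))  # years
    mags = 4.0 + rng.exponential(0.6, size=200_000)
    print(conditional_prob(times, mags))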
Thematic accuracy of the 1992 National Land-Cover Data for the western United States
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Yang, L.
2004-01-01
The MultiResolution Land Characteristics (MRLC) consortium sponsored production of the National Land Cover Data (NLCD) for the conterminous United States, using Landsat imagery collected on a target year of 1992 (1992 NLCD). Here we report the thematic accuracy of the 1992 NLCD for the six western mapping regions. Reference data were collected in each region for a probability sample of pixels stratified by map land-cover class. Results are reported for each of the six mapping regions with agreement defined as a match between the primary or alternate reference land-cover label and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Overall accuracy at Anderson Level II was low and variable across the regions, ranging from 38% for the Midwest to 70% for the Southwest. Overall accuracy at Anderson Level I was higher and more consistent across the regions, ranging from 82% to 85% for five of the six regions, but only 74% for the South-central region.
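A sketch of the agreement rule described above (Python; array names and the toy data are illustrative): a sampled pixel agrees if its primary or alternate reference label matches the mode class of the mapped 3×3 block centred on it.

    import numpy as np
    from scipy import ndimage

    def mode_3x3(label_map):
        """Mode land-cover class within the 3x3 block around each pixel
        (labels assumed to be small non-negative integer codes)."""
        return ndimage.generic_filter(
            label_map.astype(int),
            lambda v: np.bincount(v.astype(int)).argmax(),
            size=3, mode="nearest")

    def overall_accuracy(primary, alternate, mapped, rows, cols):
        """Agreement: primary or alternate reference label matches the
        mode of the mapped 3x3 block at each sampled pixel."""
        m = mode_3x3(mapped)
        hit = (primary == m[rows, cols]) | (alternate == m[rows, cols])
        return hit.mean()

    # toy example: 5 sampled pixels on a small 4-class map
    rng = np.random.default_rng(0)
    mapped = rng.integers(1, 5, size=(50, 50))
    rows, cols = rng.integers(1, 49, 5), rng.integers(1, 49, 5)
    primary = mapped[rows, cols]
    alternate = rng.integers(1, 5, 5)
    print(overall_accuracy(primary, alternate, mapped, rows, cols))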
Protective resources and perceptions of stress in a multi-ethnic sample of school-age children.
Taxis, J Carole; Rew, Lynn; Jackson, Kate; Kouzekanani, Kamiar
2004-01-01
To investigate the relationships among the protective resources of social connectedness, coping skills, and the perception of stress in 613 Hispanic and White school-aged children. A secondary analysis of data from a longitudinal cohort-sequential study designed to investigate health-risk behaviors in school-age children. Data were collected by computer-assisted self-interviewing from a non-probability sample of 8-12-year-olds in three independent school districts. Hierarchical multiple regression analysis indicated that social connectedness and the frequency of coping strategies used accounted for 18.8% of the variation in stress. "Feeling sick" was the primary stressor of the participants, while the two most frequently endorsed coping strategies were "watch TV or listen to music" and "draw, write, or read something." The findings are significant because nurses working with children are in a strategic position to assess risk factors and protective resources related to stress and to intervene in a timely manner to assist children and families in developing resiliency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
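The reliability idea lends itself to a compact sketch (Python; the models, error bound and threshold are toy stand-ins, with the bound chosen so it provably covers the surrogate error): samples whose surrogate value sits farther from the limit state than the error bound cannot change the event indicator, so only the remaining samples need the expensive model.

    import numpy as np

    def hybrid_event_probability(x, surrogate, err_bound, high_fidelity, threshold):
        """Estimate P(f(x) > threshold), trusting the surrogate wherever
        its error bound cannot flip the event indicator."""
        s = surrogate(x)
        reliable = np.abs(s - threshold) > err_bound(x)
        vals = s.copy()
        vals[~reliable] = high_fidelity(x[~reliable])  # expensive calls only here
        return np.mean(vals > threshold), np.mean(~reliable)

    rng = np.random.default_rng(3)
    x = rng.normal(size=50_000)
    f = lambda v: v + np.sin(3 * v)          # "high-fidelity" model (toy)
    s = lambda v: v                          # crude surrogate
    b = lambda v: np.ones_like(v)            # valid bound: |f - s| <= 1
    print(hybrid_event_probability(x, s, b, f, threshold=1.0))

Because the bound here is valid everywhere, the returned probability matches what full high-fidelity evaluation would give, while the second value reports the fraction of expensive evaluations actually needed.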
Liu, Fei; Zhang, Xi; Jia, Yan
2015-01-01
In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
Identifying the community structure of the food-trade international multi-network
NASA Astrophysics Data System (ADS)
Torreggiani, S.; Mangioni, G.; Puma, M. J.; Fagiolo, G.
2018-05-01
Achieving international food security requires improved understanding of how international trade networks connect countries around the world through the import-export flows of food commodities. The properties of international food trade networks are still poorly documented, especially from a multi-network perspective. In particular, nothing is known about the multi-network’s community structure. Here we find that the individual crop-specific layers of the multi-network have densely connected trading groups, a consistent characteristic over the period 2001–2011. Further, the multi-network is characterized by low variability over this period but with substantial heterogeneity across layers in each year. In particular, the layers are mostly assortative: more-intensively connected countries tend to import from and export to countries that are themselves more connected. We also fit econometric models to identify social, economic and geographic factors explaining the probability that any two countries are co-present in the same community. Our estimates indicate that the probability of country pairs belonging to the same food trade community depends more on geopolitical and economic factors—such as geographical proximity and trade-agreement co-membership—than on country economic size and/or income. These community-structure findings of the multi-network are especially valuable for efforts to understand past and emerging dynamics in the global food system, especially those that examine potential ‘shocks’ to global food trade.
Multi-model ensembles for assessment of flood losses and associated uncertainty
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi
2018-05-01
Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
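A minimal sketch of the combination step (Python; the three depth-damage curves and the weights are invented for illustration): deterministic loss models are blended into a weighted ensemble whose spread gives a crude handle on model uncertainty.

    import numpy as np

    # hypothetical depth-damage models (loss fraction vs. water depth in m)
    models = [
        lambda d: np.clip(d / 5.0, 0, 1),            # linear
        lambda d: np.clip(np.sqrt(d) / 2.5, 0, 1),   # square-root
        lambda d: 1 - np.exp(-0.6 * d),              # exponential
    ]
    weights = np.array([0.5, 0.3, 0.2])  # e.g. from a model-rating framework

    def ensemble_loss(depth):
        """Weighted ensemble mean and spread of model loss estimates."""
        est = np.array([m(depth) for m in models])
        return weights @ est, est.std(axis=0)

    print(ensemble_loss(np.array([0.5, 1.0, 3.0])))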
Estimating nest detection probabilities for white-winged dove nest transects in Tamaulipas, Mexico
Nichols, J.D.; Tomlinson, R.E.; Waggerman, G.
1986-01-01
Nest transects in nesting colonies provide one source of information on White-winged Dove (Zenaida asiatica asiatica) population status and reproduction. Nests are counted along transects using standardized field methods each year in Texas and northeastern Mexico by personnel associated with Mexico's Office of Flora and Fauna, the Texas Parks and Wildlife Department, and the U.S. Fish and Wildlife Service. Nest counts on transects are combined with information on the size of nesting colonies to estimate total numbers of nests in sampled colonies. Historically, these estimates have been based on the actual nest counts on transects and thus have required the assumption that all nests lying within transect boundaries are detected (seen) with a probability of one. Our objectives were to test the hypothesis that nest detection probability is one and, if rejected, to estimate this probability.
Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei
2016-11-10
Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of gamma-gamma (GG) distributed turbulence and misalignment fading. Based on the best path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. The performance of this optical mesh network is then analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the effectiveness of the results for the average bit error rate (ABER) and outage probability. The numerical results show that the multi-hop parallel network needs a smaller average transmitted optical power to achieve the same ABER and outage probability in FSO links. Furthermore, using more hops and cooperative paths improves the quality of the communication.
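A toy Monte Carlo sketch (Python; the turbulence parameters, the conditional-BER form and the independence assumptions are illustrative, and misalignment fading is omitted): gamma-gamma irradiance can be simulated as the product of two unit-mean gamma variates, and with independent paths the best-path system outage is the product of per-path outages.

    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(4)

    def path_aber(snr_db, alpha=4.0, beta=1.9, n=200_000):
        """Monte Carlo ABER of BPSK over one gamma-gamma hop (one
        common conditional-BER form; pointing error not modelled)."""
        snr = 10 ** (snr_db / 10)
        h = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
        return np.mean(0.5 * erfc(np.sqrt(snr) * h / np.sqrt(2)))

    def system_outage(p_path, n_paths):
        """Best-path selection: outage only if every path is in outage."""
        return p_path ** n_paths

    print(path_aber(15.0), system_outage(0.05, 3))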
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel
2013-01-01
Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDF) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate settings, different types of variable, and multi-modality) which may appear in real biomedical data. From the studied distances, we found information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
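Both families of distance are available off the shelf. A short sketch (Python/SciPy; the two simulated "sources" stand in for biobank subsets with a spatial shift):

    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(5)
    site_a = rng.normal(0.0, 1.0, 5_000)     # hypothetical source A
    site_b = rng.normal(0.4, 1.0, 5_000)     # shifted source B

    # information-theoretic distance on a shared histogram support
    bins = np.histogram_bin_edges(np.concatenate([site_a, site_b]), bins=50)
    p, _ = np.histogram(site_a, bins, density=True)
    q, _ = np.histogram(site_b, bins, density=True)
    print("Jensen-Shannon:", jensenshannon(p, q))

    # Earth Mover's Distance works directly on the samples
    print("EMD:", wasserstein_distance(site_a, site_b))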
Capturing rogue waves by multi-point statistics
NASA Astrophysics Data System (ADS)
Hadjihosseini, A.; Wächter, Matthias; Hoffmann, N. P.; Peinke, J.
2016-01-01
As an example of a complex system with extreme events, we investigate ocean wave states exhibiting rogue waves. We present a statistical method of data analysis based on multi-point statistics which for the first time allows the grasping of extreme rogue wave events in a highly satisfactory statistical manner. The key to the success of the approach is mapping the complexity of multi-point data onto the statistics of hierarchically ordered height increments for different time scales, for which we can show that a stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities as well as the Fokker-Planck equation itself can be estimated directly from the available observational data. With this stochastic description surrogate data sets can in turn be generated, which makes it possible to work out arbitrary statistical features of the complex sea state in general, and extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics.
NASA Astrophysics Data System (ADS)
Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng
2009-02-01
Multi-detector computed tomography (MDCT) has high accuracy and specificity in volumetrically capturing serial images of the lung, increasing the capability of computerized classification of lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function, and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema: 95%, fibrosis/honeycombing: 84% and ground glass: 97%.
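Stage (3) amounts to a weighted fuzzy vote followed by a two-threshold decision. A sketch (Python; the trapezoidal membership shape, the density feature, the weights and the thresholds are all invented for illustration):

    import numpy as np

    def trapezoid(a, b, c, d):
        """Trapezoidal fuzzy membership on [a, d] with plateau [b, c]."""
        def mu(x):
            return float(np.clip(min((x - a) / (b - a), (d - x) / (d - c)),
                                 0.0, 1.0))
        return mu

    def classify(features, memberships, weights, t_low=0.4, t_high=0.7):
        """Weighted combination of memberships, then two thresholds."""
        score = sum(w * mu(f) for f, mu, w in zip(features, memberships, weights))
        score /= sum(weights)
        if score >= t_high:
            return "pathology"
        return "uncertain" if score >= t_low else "normal"

    mu_density = trapezoid(-950, -800, -600, -400)   # hypothetical HU window
    print(classify([-700.0], [mu_density], [1.0]))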
Do Human-Figure Drawings of Children and Adolescents Mirror Their Cognitive Style and Self-Esteem?
ERIC Educational Resources Information Center
Dey, Anindita; Ghosh, Paromita
2016-01-01
The investigation probed relationships among human-figure drawing, field-dependent-independent cognitive style and self-esteem of 10-15 year olds. It also attempted to predict human-figure drawing scores of participants based on their field-dependence-independence and self-esteem. Area, stratified and multi-stage random sampling were used to…
Georgiadou, Elisavet; Stenström, Kristina Eriksson; Uvo, Cintia Bertacchi; Nilsson, Peter; Skog, Göran; Mattsson, Sören
2013-05-01
The (14)C content of 60 human blood serum samples from residents of Malmö (Sweden) in 1978, obtained from a biobank, has been measured to estimate the accuracy of (14)C bomb-pulse dating. The difference between the date estimated using the Calibomb software and the sampling date varied between -3 ± 0.4 and +0.2 ± 0.5 years. The average age deviation of all samples was -1.5 ± 0.7 years, with the delay between production and consumption of foodstuffs probably being the dominant cause. The potential influence of food habits on the (14)C date has been evaluated using stable isotope δ(13)C and δ(15)N analysis and information about the dietary habits of the investigated individuals. Although the group consisting of lacto-ovo vegetarians and vegans (pooled group) was not completely separated from the omnivores in a stable isotopic trophic level diagram, this analysis proved to add valuable information on probable dietary habits. The age deviation of the sampling date from the respective Calibomb date was found to be strongly correlated with the δ(13)C values, probably due to influence from marine diet components. For the omnivore individuals, there were indications of seasonal effects on δ(13)C and the age deviation. No significant correlation was found between the age deviation and the δ(15)N values of any dietary group. No influence of sex or year of birth was found on either the (14)C or the δ(13)C and δ(15)N values of the serum samples. The data were also divided into two groups (omnivores and pooled group), based on the level of δ(15)N in the samples. The consumption of fish and birds with high δ(15)N values may be responsible for this clustering.
Simulating future uncertainty to guide the selection of survey designs for long-term monitoring
Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
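The power/Type I error trade-off described above is easy to prototype before committing to a design. A simplified simulation sketch (Python; single annual visit, normal errors, and all parameter values are illustrative rather than drawn from any NPS protocol):

    import numpy as np
    from scipy import stats

    def trend_power(slope, sigma, years=10, n_sites=30, alpha=0.1, n_sim=2000):
        """Monte Carlo power to detect a linear trend of `slope` per year
        given site-level noise sigma, averaging n_sites sites per year."""
        rng = np.random.default_rng(6)
        t = np.arange(years)
        detected = 0
        for _ in range(n_sim):
            y = slope * t + rng.normal(0, sigma / np.sqrt(n_sites), years)
            res = stats.linregress(t, y)
            detected += (res.pvalue < alpha) and (res.slope > 0)
        return detected / n_sim

    print(trend_power(slope=0.02, sigma=0.3))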
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at intervals of a few days, and by the medium mode when returns are sampled at intervals of dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
Jung, R.E.; Royle, J. Andrew; Sauer, J.R.; Addison, C.; Rau, R.D.; Shirk, J.L.; Whissel, J.C.
2005-01-01
Stream salamanders in the family Plethodontidae constitute a large biomass in and near headwater streams in the eastern United States and are promising indicators of stream ecosystem health. Many studies of stream salamanders have relied on population indices based on counts rather than population estimates based on techniques such as capture-recapture and removal. Application of estimation procedures allows the calculation of detection probabilities (the proportion of total animals present that are detected during a survey) and their associated sampling error, and may be essential for determining salamander population sizes and trends. In 1999, we conducted capture-recapture and removal population estimation methods for Desmognathus salamanders at six streams in Shenandoah National Park, Virginia, USA. Removal sampling appeared more efficient and detection probabilities from removal data were higher than those from capture-recapture. During 2001-2004, we used removal estimation at eight streams in the park to assess the usefulness of this technique for long-term monitoring of stream salamanders. Removal detection probabilities ranged from 0.39 to 0.96 for Desmognathus, 0.27 to 0.89 for Eurycea and 0.27 to 0.75 for northern spring (Gyrinophilus porphyriticus) and northern red (Pseudotriton ruber) salamanders across stream transects. Detection probabilities did not differ across years for Desmognathus and Eurycea, but did differ among streams for Desmognathus. Population estimates of Desmognathus decreased between 2001-2002 and 2003-2004 which may be related to changes in stream flow conditions. Removal-based procedures may be a feasible approach for population estimation of salamanders, but field methods should be designed to meet the assumptions of the sampling procedures. New approaches to estimating stream salamander populations are discussed.
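For the two-pass case, the removal estimator has a closed form. A minimal sketch (Python; the counts are hypothetical, and the study itself used multi-pass removal protocols subject to the assumptions noted above):

    def two_pass_removal(n1, n2):
        """Classical two-pass (Zippin-type) removal estimates of capture
        probability p and population size N; assumes constant p and n1 > n2."""
        if n1 <= n2:
            raise ValueError("removal assumption violated: need n1 > n2")
        p_hat = 1.0 - n2 / n1
        n_hat = n1 ** 2 / (n1 - n2)
        return p_hat, n_hat

    print(two_pass_removal(42, 17))   # hypothetical first/second pass counts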
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems become widely applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing application failure probability can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
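Under a simple independence assumption, a DAG application fails if any of its tasks fails, and a backup replica changes a task's effective failure probability to the probability that both replicas fail. A sketch (Python; task names and probabilities are invented):

    from math import prod

    def app_failure_probability(task_fail, with_backup=None):
        """Failure probability of a DAG application: it fails if any task
        fails (tasks assumed independent). A backup replaces a task's
        failure probability with the chance both replicas fail."""
        probs = []
        for task, p in task_fail.items():
            if with_backup and task in with_backup:
                p = p * with_backup[task]     # both replicas must fail
            probs.append(1 - p)
        return 1 - prod(probs)

    tasks = {"t1": 0.02, "t2": 0.01, "t3": 0.05}
    print(app_failure_probability(tasks))
    print(app_failure_probability(tasks, with_backup={"t3": 0.05}))

Comparing the two calls shows how a single backup on the riskiest task trades extra resource usage for a lower application failure probability, which is the tension the MDSA scheduler balances.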
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valuable information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models, from which the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
Sustained sexual behavior change following acute HIV diagnosis in Malawi.
Rucinski, Katherine B; Rutstein, Sarah E; Powers, Kimberly A; Pasquale, Dana K; Dennis, Ann M; Phiri, Sam; Hosseinipour, Mina C; Kamanga, Gift; Nsona, Dominic; Massa, Cecilia; Hoffman, Irving F; Miller, William C; Pettifor, Audrey E
2018-06-05
Identification of acute HIV infection (AHI) allows important opportunities for HIV prevention through behavior change and biomedical intervention. Here, we evaluate changes in sexual risk behaviors among persons with AHI enrolled in a combined behavioral and biomedical intervention designed to reduce onward transmission of HIV. Participants were randomized to standard HIV counseling, a multi-session behavioral intervention, or a multi-session behavioral intervention plus antiretrovirals. Sexual behaviors were assessed periodically over one year. Four weeks after diagnosis, the predicted probability of reporting multiple sexual partners decreased from 24% to 9%, and the probability of reporting unprotected sex from 71% to 27%. These declines in sexual risk behaviors were sustained over follow-up irrespective of study arm. AHI diagnosis alone may be sufficient to achieve immediate and sustained behavior change during this highly infectious period.
ERIC Educational Resources Information Center
Rivilis, Irina; Liu, Jian; Cairney, John; Hay, John A.; Klentrou, Panagiota; Faught, Brent E.
2012-01-01
The purpose of this prospective cohort study was to assess how cardiorespiratory fitness (CRF) of children with probable developmental coordination disorder (DCD) changes over a period of 4.7 years relative to a group of typically developing controls. A school-based sample of children in a large region of Ontario, Canada with 75 out of a possible…
Community Support for the Public Schools in a Large Metropolitan Area. Final Report.
ERIC Educational Resources Information Center
Smith, Ralph V.; And Others
An extensive survey was conducted in 1965 by a team of white and Negro interviewers in an application of ecological theory to a study of the support relationship between the community and its school system. Findings are based upon interview data from a probability sample of 931 respondents selected from the population of persons 21 years of age…
A Population-Based Study of Childhood Sexual Contact in China: Prevalence and Long-Term Consequences
ERIC Educational Resources Information Center
Luo, Ye; Parish, William L.; Laumann, Edward O.
2008-01-01
Objectives: This study provides national estimates of the prevalence of childhood sexual contact and its association with sexual well-being and psychological distress among adults in China. Method: A national stratified probability sample of 1,519 women and 1,475 men aged 20-64 years in urban China completed a computer-administered survey in…
Towards a Multi-Resolution Model of Seismic Risk in Central Asia. Challenge and perspectives
NASA Astrophysics Data System (ADS)
Pittore, M.; Wieland, M.; Bindi, D.; Parolai, S.
2011-12-01
Assessing seismic risk, defined as the probability of occurrence of economic and social losses as a consequence of an earthquake, both at regional and at local scale is a challenging, multi-disciplinary task. In order to provide a reliable estimate, diverse information must be gathered by seismologists, geologists, engineers and civil authorities, and carefully integrated taking into account the different levels of uncertainty. The research towards an integrated methodology able to seamlessly describe seismic risk at different spatial scales is challenging, but discloses new application perspectives, particularly in those countries which face significant seismic hazard but lack the resources for a standard assessment. Central Asian countries in particular, which exhibit some of the highest seismic hazard in the world, are experiencing steady demographic growth, often accompanied by informal settlement and urban sprawl. A reliable evaluation of how these factors affect seismic risk, together with a realistic assessment of the assets exposed to seismic hazard and their structural vulnerability, is of particular importance in order to undertake proper mitigation actions and to react promptly and efficiently to a catastrophic event. New strategies are needed to cope efficiently with the systematic lack of information and with uncertainties. An original approach is presented to assess seismic risk based on the integration of information coming from remote sensing and ground-based panoramic imaging, in situ measurements, expert knowledge and already available data. Efficient sampling strategies based on freely available medium-resolution multi-spectral satellite images are adopted to optimize data collection and validation in a multi-scale approach. Panoramic imaging is also considered a valuable ground-based visual data collection technique, suitable for both manual and automatic analysis. A fully probabilistic framework based on Bayesian networks is proposed to integrate the available information, taking into account both aleatory and epistemic uncertainties. An improved risk model for Bishkek, the capital of the Kyrgyz Republic, has been developed following this approach and tested on different earthquake scenarios. Preliminary results will be presented and discussed.
NASA Astrophysics Data System (ADS)
Syafrina, A. H.; Zalina, M. D.; Juneng, L.
2014-09-01
A stochastic downscaling methodology known as the Advanced Weather Generator, AWE-GEN, has been tested at four stations in Peninsular Malaysia using observations available from 1975 to 2005. The methodology involves a stochastic downscaling procedure based on a Bayesian approach. Climate statistics from a multi-model ensemble of General Circulation Model (GCM) outputs were calculated, and factors of change were derived to produce the probability distribution functions (PDF). New parameters were then obtained to project future climate time series. The projections of extreme precipitation were based on the RCP 6.0 scenario (2081-2100). The model was able to simulate both hourly and 24-h extreme precipitation, as well as wet spell durations, quite well for almost all regions. However, the performance of the GCM models varies significantly across regions, showing high variability of monthly precipitation for both the observed and future periods. Extreme precipitation at both hourly and 24-h scales seems to increase in future, while extremes of wet spells remain unchanged, up to return periods of 10-40 years.
A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments
Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J
2014-01-01
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, recently, Huggins et al. (2010) present a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
DOE Office of Scientific and Technical Information (OSTI.GOV)
Page, Jason S.; Kelly, Ryan T.; Camp, David G.
2008-09-01
To improve the detection of low-abundance proteins in candidate biomarker discovery and validation, particularly in complex biological fluids such as blood plasma, increased sensitivity is desired in mass spectrometry (MS)-based instrumentation. A key current limitation on the sensitivity of electrospray ionization (ESI) MS is the fact that many sample molecules in solution are never ionized, and the vast majority of the ions that are created are lost during transmission from atmospheric pressure to the low-pressure region of the mass analyzer. Two key technologies, multi-nanoelectrospray emitters and the electrodynamic ion funnel, have recently been developed and refined at Pacific Northwest National Laboratory (PNNL) to greatly improve the ionization and transmission efficiency of ESI-MS-based analyses. Multi-emitter-based ESI enables the flow from a single source (typically a liquid chromatography [LC] column) to be divided among an array of emitters (Figure 1). The flow rate delivered to each emitter is thus reduced, allowing the well-documented benefits of nanoelectrospray for both sensitivity and quantitation to be realized for higher-flow-rate separations. To complement the increased ionization efficiency afforded by multi-emitter ESI, tandem electrodynamic ion funnels have also been developed at PNNL and shown to greatly improve ion transmission efficiency in the ion source interface. These technologies have been integrated into a triple quadrupole mass spectrometer for multiple reaction monitoring (MRM) of probable biomarker candidates in blood plasma and show promise for the identification of new species even at low concentrations.
Brain Tumor Segmentation Using Deep Belief Networks and Pathological Knowledge.
Zhan, Tianming; Chen, Yi; Hong, Xunning; Lu, Zhenyu; Chen, Yunjie
2017-01-01
In this paper, we propose an automatic brain tumor segmentation method based on Deep Belief Networks (DBNs) and pathological knowledge. The proposed method is targeted against gliomas (both low and high grade) obtained in multi-sequence magnetic resonance images (MRIs). Firstly, a novel deep architecture is proposed to combine multi-sequence intensity feature extraction with classification to obtain the classification probability of each voxel. Then, graph-cut-based optimization is executed on the classification probabilities to strengthen the spatial relationships of voxels. At last, pathological knowledge of gliomas is applied to remove some false positives. Our method was validated on the Brain Tumor Segmentation Challenge 2012 and 2013 databases (BRATS 2012, 2013). The segmentation performance demonstrates that our proposal provides a competitive solution compared with state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser
2017-05-01
This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor in order to improve its view invariance aspect for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is considered in order to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision making probability. The experimental results applied to a multi-view human action dataset show that this weighted extension improves the recognition performance by about 5% over equally weighted fusion deployed in our previous fusion framework.
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-01-01
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on an UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145
Lancaster, Timothy S; Schill, Matthew R; Greenberg, Jason W; Ruaengsri, Chawannuch; Schuessler, Richard B; Lawton, Jennifer S; Maniar, Hersh S; Pasque, Michael K; Moon, Marc R; Damiano, Ralph J; Melby, Spencer J
2018-05-01
The recently developed American College of Cardiology Foundation-Society of Thoracic Surgeons (STS) Collaboration on the Comparative Effectiveness of Revascularization Strategy (ASCERT) Long-Term Survival Probability Calculator is a valuable addition to existing short-term risk-prediction tools for cardiac surgical procedures but has yet to be externally validated. Institutional data of 654 patients aged 65 years or older undergoing isolated coronary artery bypass grafting between 2005 and 2010 were reviewed. Predicted survival probabilities were calculated using the ASCERT model. Survival data were collected using the Social Security Death Index and institutional medical records. Model calibration and discrimination were assessed for the overall sample and for risk-stratified subgroups based on (1) ASCERT 7-year survival probability and (2) the predicted risk of mortality (PROM) from the STS Short-Term Risk Calculator. Logistic regression analysis was performed to evaluate additional perioperative variables contributing to death. Overall survival was 92.1% (569 of 597) at 1 year and 50.5% (164 of 325) at 7 years. Calibration assessment found no significant differences between predicted and actual survival curves for the overall sample or for the risk-stratified subgroups, whether stratified by predicted 7-year survival or by PROM. Discriminative performance was comparable between the ASCERT and PROM models for 7-year survival prediction (p < 0.001 for both; C-statistic = 0.815 for ASCERT and 0.781 for PROM). Prolonged ventilation, stroke, and hospital length of stay were also predictive of long-term mortality. The ASCERT survival probability calculator was externally validated for prediction of long-term survival after coronary artery bypass grafting in all risk groups. The widely used STS PROM performed comparably as a predictor of long-term survival. Both tools provide important information for preoperative decision making and patient counseling about potential outcomes after coronary artery bypass grafting.
Wong, Linda; Hill, Beth L; Hunsberger, Benjamin C; Bagwell, C Bruce; Curtis, Adam D; Davis, Bruce H
2015-01-01
Leuko64™ (Trillium Diagnostics) is a flow cytometric assay that measures neutrophil CD64 expression and serves as an in vitro indicator of infection/sepsis or the presence of a systemic acute inflammatory response. Leuko64 assay currently utilizes QuantiCALC, a semiautomated software that employs cluster algorithms to define cell populations. The software reduces subjective gating decisions, resulting in interanalyst variability of <5%. We evaluated a completely automated approach to measuring neutrophil CD64 expression using GemStone™ (Verity Software House) and probability state modeling (PSM). Four hundred and fifty-seven human blood samples were processed using the Leuko64 assay. Samples were analyzed on four different flow cytometer models: BD FACSCanto II, BD FACScan, BC Gallios/Navios, and BC FC500. A probability state model was designed to identify calibration beads and three leukocyte subpopulations based on differences in intensity levels of several parameters. PSM automatically calculates CD64 index values for each cell population using equations programmed into the model. GemStone software uses PSM that requires no operator intervention, thus totally automating data analysis and internal quality control flagging. Expert analysis with the predicate method (QuantiCALC) was performed. Interanalyst precision was evaluated for both methods of data analysis. PSM with GemStone correlates well with the expert manual analysis, r(2) = 0.99675 for the neutrophil CD64 index values with no intermethod bias detected. The average interanalyst imprecision for the QuantiCALC method was 1.06% (range 0.00-7.94%), which was reduced to 0.00% with the GemStone PSM. The operator-to-operator agreement in GemStone was a perfect correlation, r(2) = 1.000. Automated quantification of CD64 index values produced results that strongly correlate with expert analysis using a standard gate-based data analysis method. PSM successfully evaluated flow cytometric data generated by multiple instruments across multiple lots of the Leuko64 kit in all 457 cases. The probability-based method provides greater objectivity, higher data analysis speed, and allows for greater precision for in vitro diagnostic flow cytometric assays. © 2015 International Clinical Cytometry Society.
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, which are unable to estimate variance. Sampling probabilities are based on a sediment rating function which promotes greater sampling intensity during periods of high...
Fracture prediction and calibration of a Canadian FRAX® tool: a population-based report from CaMos
Fraser, L.-A.; Langsetmo, L.; Berger, C.; Ioannidis, G.; Goltzman, D.; Adachi, J. D.; Papaioannou, A.; Josse, R.; Kovacs, C. S.; Olszynski, W. P.; Towheed, T.; Hanley, D. A.; Kaiser, S. M.; Prior, J.; Jamal, S.; Kreiger, N.; Brown, J. P.; Johansson, H.; Oden, A.; McCloskey, E.; Kanis, J. A.
2016-01-01
Summary A new Canadian WHO fracture risk assessment (FRAX®) tool to predict 10-year fracture probability was compared with observed 10-year fracture outcomes in a large Canadian population-based study (CaMos). The Canadian FRAX tool showed good calibration and discrimination for both hip and major osteoporotic fractures. Introduction The purpose of this study was to validate a new Canadian WHO fracture risk assessment (FRAX®) tool in a prospective, population-based cohort, the Canadian Multi-centre Osteoporosis Study (CaMos). Methods A FRAX tool calibrated to the Canadian population was developed by the WHO Collaborating Centre for Metabolic Bone Diseases using national hip fracture and mortality data. Ten-year FRAX probabilities with and without bone mineral density (BMD) were derived for CaMos women (N=4,778) and men (N=1,919) and compared with observed fracture outcomes to 10 years (Kaplan–Meier method). Cox proportional hazard models were used to investigate the contribution of individual FRAX variables. Results Mean overall 10-year FRAX probability with BMD for major osteoporotic fractures was not significantly different from the observed value in men [predicted 5.4% vs. observed 6.4% (95%CI 5.2–7.5%)] and only slightly lower in women [predicted 10.8% vs. observed 12.0% (95%CI 11.0–12.9%)]. FRAX was well calibrated for hip fracture assessment in women [predicted 2.7% vs. observed 2.7% (95%CI 2.2–3.2%)] but underestimated risk in men [predicted 1.3% vs. observed 2.4% (95%CI 1.7–3.1%)]. FRAX with BMD showed better fracture discrimination than FRAX without BMD or BMD alone. Age, body mass index, prior fragility fracture and femoral neck BMD were significant independent predictors of major osteoporotic fractures; sex, age, prior fragility fracture and femoral neck BMD were significant independent predictors of hip fractures. Conclusion The Canadian FRAX tool provides predictions consistent with observed fracture rates in Canadian women and men, thereby providing a valuable tool for Canadian clinicians assessing patients at risk of fracture. PMID:21161508
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. Highlights: the distance-minimizing control forces minimize a bound on the sampling variance; Girsanov controls are established via the solution of a two-point boundary value problem; and Girsanov controls are obtained via a Volterra series representation of the transfer functions.
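The variance-reduction idea can be seen in a scalar analogue (Python; the shift-to-the-design-point rule mirrors the FORM-based choice of control, but the example is a toy, not the paper's oscillator problems): sample from a shifted density and reweight by the likelihood ratio, the discrete counterpart of Girsanov's change of measure.

    import numpy as np

    rng = np.random.default_rng(7)

    def failure_prob_is(threshold=4.0, shift=None, n=100_000):
        """Importance sampling for P(X > threshold), X ~ N(0,1), by
        sampling from N(shift, 1) and reweighting."""
        if shift is None:
            shift = threshold            # FORM-like choice: the design point
        x = rng.normal(shift, 1.0, n)
        # likelihood ratio (Radon-Nikodym derivative) of N(0,1) vs N(shift,1)
        lr = np.exp(-shift * x + 0.5 * shift ** 2)
        w = (x > threshold) * lr
        return np.mean(w), np.std(w) / np.sqrt(n)

    print(failure_prob_is())   # compare with 1 - Phi(4) ~ 3.17e-5

Crude Monte Carlo would need billions of samples to resolve a probability of this order; the shifted estimator resolves it with a hundred thousand, which is the gain the Girsanov controls buy in the dynamical setting.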
NASA Astrophysics Data System (ADS)
Simonson, W.; Ruiz-Benito, P.; Valladares, F.; Coomes, D.
2016-02-01
Woodlands represent highly significant carbon sinks globally, though they could lose this function under future climatic change. Effective large-scale monitoring of these woodlands has a critical role to play in mitigating and adapting to climate change. Mediterranean woodlands have low carbon densities, but represent important global carbon stocks due to their extensiveness, and they are particularly vulnerable because the region is predicted to become much hotter and drier over the coming century. Airborne lidar is already recognized as an excellent approach for high-fidelity carbon mapping, but few studies have used multi-temporal lidar surveys to measure carbon fluxes in forests, and none have worked with Mediterranean woodlands. We use a multi-temporal (5-year interval) airborne lidar data set for a region of central Spain to estimate above-ground biomass (AGB) and carbon dynamics in typical mixed broadleaved and/or coniferous Mediterranean woodlands. Field calibration of the lidar data enabled the generation of grid-based maps of AGB for 2006 and 2011, and the resulting AGB change was estimated. There was close agreement between the lidar-based AGB growth estimate (1.22 Mg ha-1 yr-1) and those derived from two independent sources: the Spanish National Forest Inventory, and a tree-ring based analysis (1.19 and 1.13 Mg ha-1 yr-1, respectively). We parameterised a simple simulator of forest dynamics using the lidar carbon flux measurements, and used it to explore four scenarios of fire occurrence. Under undisturbed conditions (no fire) an accelerating accumulation of biomass and carbon is evident over the next 100 years, with an average carbon sequestration rate of 1.95 Mg C ha-1 yr-1. This rate reduces by almost a third when fire probability is increased to 0.01 (fire return rate of 100 years), as has been predicted under climate change. Our work shows the power of multi-temporal lidar surveying to map woodland carbon fluxes and provide parameters for carbon dynamics models. Space deployment of lidar instruments in the near future could open the way for rolling out wide-scale forest carbon stock monitoring to inform management and governance responses to future environmental change.
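The fire-scenario simulator can be caricatured in a few lines (Python; the initial stock, burn fraction and constant-growth assumption are illustrative stand-ins for the lidar-calibrated dynamics):

    import numpy as np

    def simulate_agb(years=100, growth=1.22, fire_prob=0.01, frac_lost=0.7):
        """Toy AGB trajectory (Mg/ha): fixed annual growth, with
        stochastic fires removing a fraction of the standing stock."""
        rng = np.random.default_rng(8)
        agb, out = 50.0, []            # hypothetical initial stock
        for _ in range(years):
            agb += growth
            if rng.random() < fire_prob:
                agb *= (1 - frac_lost)
            out.append(agb)
        return np.array(out)

    print(simulate_agb(fire_prob=0.0)[-1], simulate_agb(fire_prob=0.01)[-1])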
NASA Astrophysics Data System (ADS)
Lineweaver, Charles H.
2015-08-01
The Titius-Bode (TB) relation’s successful prediction of the period of Uranus was the main motivation that led to the search for another planet between Mars and Jupiter. This search led to the discovery of the asteroid Ceres and the rest of the asteroid belt. The TB relation can also provide useful hints about the periods of as-yet-undetected planets around other stars. In Bovaird & Lineweaver (2013) [1], we used a generalized TB relation to analyze 68 multi-planet systems with four or more detected exoplanets. We found that the majority of exoplanet systems in our sample adhered to the TB relation to a greater extent than the Solar System does. Thus, the TB relation can make useful predictions about the existence of as-yet-undetected planets in Kepler multi-planet systems. These predictions are one way to correct for the main obstacle preventing us from estimating the number of Earth-like planets in the universe: the incomplete sampling of planets of Earth-mass and smaller [2-5]. In [6], we use a generalized Titius-Bode relation to predict the periods of 228 additional planets in 151 of these Kepler multiples. These Titius-Bode-based predictions suggest that there are, on average, 2±1 planets in the habitable zone of each star. We also estimate the inclination of the invariable plane for each system and prioritize our planet predictions by their geometric probability to transit. We highlight a short list of 77 predicted planets in 40 systems with a high geometric probability to transit, resulting in an expected detection rate of ~15 per cent, ~3 times higher than the detection rate of our previous Titius-Bode-based predictions. References: [1] Bovaird, T. & Lineweaver, C.H. (2013) MNRAS, 435, 1126-1138. [2] Dong, S. & Zhu, Z. (2013) ApJ, 778, 53. [3] Fressin, F. et al. (2013) ApJ, 766, 81. [4] Petigura, E. A. et al. (2013) PNAS, 110, 19273. [5] Silburt, A. et al. (2014) ApJ (arXiv:1406.6048v2). [6] Bovaird, T., Lineweaver, C.H. & Jacobsen, S.K. (2015, in press) MNRAS, arXiv:1412.6230v3.
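The core of a generalized Titius-Bode prediction is a log-linear fit of period against planet index. A minimal sketch (with hypothetical periods, not data from the cited systems) might look like:

import numpy as np

# A generalized Titius-Bode relation posits P_n = P_0 * alpha**n,
# i.e. log(P) is linear in the planet index n.
periods = np.array([5.7, 13.2, 29.8, 66.1])   # hypothetical periods in days
n = np.arange(len(periods))

slope, intercept = np.polyfit(n, np.log(periods), 1)
alpha = np.exp(slope)
next_period = np.exp(intercept + slope * len(periods))
print(f"best-fit period ratio alpha = {alpha:.2f}")
print(f"predicted period of the next outer planet: {next_period:.0f} d")

The published method is richer than this sketch: it also tests whether inserting planets into gaps improves the fit, which is how interior (not just exterior) planets are predicted.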
Genotype Imputation with Millions of Reference Samples.
Browning, Brian L; Browning, Sharon R
2016-01-07
We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
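The interpolation step can be pictured with a schematic sketch (not Beagle's code; positions and dosages are invented): the probability model is evaluated only at genotyped markers, and ungenotyped variants are filled in by linear interpolation on genetic position.

import numpy as np

# Posterior allele-dosage estimates at genotyped markers (positions in cM),
# as a Li and Stephens-style model would produce them.
geno_pos = np.array([0.00, 0.25, 0.60, 1.00])
geno_dose = np.array([0.05, 0.90, 0.85, 0.10])   # illustrative dosages in [0, 2]

# Ungenotyped variants are imputed by interpolating between flanking markers,
# which avoids running the full model at every sequenced site.
target_pos = np.array([0.10, 0.40, 0.80])
imputed = np.interp(target_pos, geno_pos, geno_dose)
print(dict(zip(target_pos.tolist(), np.round(imputed, 3).tolist())))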
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240-year) historical hindcast predictions was performed with the various prediction models and used to examine various issues in multi-model ensemble seasonal prediction, such as the best ways of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
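The difference between the simple and corrected composites is easy to demonstrate with synthetic data. The sketch below (a toy stand-in for the paper's simple climate system; all series and parameters are invented) regresses each model onto observations over a training window before averaging:

import numpy as np

rng = np.random.default_rng(2)
T = 240
truth = np.sin(np.linspace(0, 24 * np.pi, T))          # proxy "observed" anomaly

# Three imperfect models: scaled, biased, noisy versions of the truth.
models = np.stack([a * truth + b + rng.normal(0, s, T)
                   for a, b, s in [(0.6, 0.5, 0.3), (1.4, -0.3, 0.4), (0.9, 0.2, 0.5)]])

train, test = slice(0, 180), slice(180, T)             # hindcast window / verification

simple = models.mean(axis=0)                           # simple composite

# Corrected composite: per-model linear correction fitted on the training window,
# then the corrected predictions are averaged.
corrected = np.mean([np.polyval(np.polyfit(m[train], truth[train], 1), m)
                     for m in models], axis=0)

rmse = lambda x: np.sqrt(np.mean((x[test] - truth[test]) ** 2))
print(f"simple composite RMSE:    {rmse(simple):.3f}")
print(f"corrected composite RMSE: {rmse(corrected):.3f}")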
NASA Astrophysics Data System (ADS)
Woelders, L.; Vellekoop, J.; Reichart, G. J.; de Nooijer, L. J.; Sluijs, A.; Peterse, F.; Claeys, P. F.; Speijer, R. P.
2015-12-01
Climate instability during the last million years of the Cretaceous (67-66 Ma) is still poorly documented and not well understood. One of the reasons for this is that in deep time, different proxies are likely to yield different temperatures, because the application of calibrations based on present-day temperature-proxy relationships is affected by source-organism evolution, differences in ocean chemistry, and non-analogue processes. Only by combining temperature estimates derived from different, independent proxies can the problems with individual proxies be cancelled out. A quantitative, multi-proxy temperature record from the latest Cretaceous may therefore provide better insight into climate changes across this time interval. Such multi-proxy research requires sediments that yield both well-preserved foraminiferal calcite and organic biomarkers. Very few sites are known to provide such sedimentary records, but ODP Leg 174AX Site Bass River (New Jersey Shelf) has proven to be an excellent archive for paleotemperature reconstructions for the Cretaceous and Paleogene. We here present a multi-proxy, quantitative paleotemperature reconstruction of the last million years of the Cretaceous from the Bass River core. Benthic and planktic foraminiferal Mg/Ca and δ18O were determined, as well as the organic geochemical sea surface temperature proxy TEX86. This resulted in a unique coupled surface and bottom water temperature record of the latest Cretaceous. Our data suggest a ~2-6 ˚C bottom water warming and a ~4-6 ˚C surface water warming approximately 300 kyr before the Cretaceous-Paleogene boundary, followed by a cooling trend across the boundary. This warming event appears to coincide with the main phase of the Deccan Traps eruptions and therefore probably represents a global event.
Chanthavilay, Phetsavanh; Reinharz, Daniel; Mayxay, Mayfong; Phongsavan, Keokedthong; Marsden, Donald E; Moore, Lynne; White, Lisa J
2016-01-01
Several approaches to reduce the incidence of invasive cervical cancers exist. The approach adopted should take into account contextual factors that influence the cost-effectiveness of the available options. To determine the cost-effectiveness of screening strategies combined with a vaccination program for 10-year old girls for cervical cancer prevention in Vientiane, Lao PDR. A population-based dynamic compartment model was constructed. The interventions consisted of a 10-year old girl vaccination program only, or this program combined with screening strategies, i.e., visual inspection with acetic acid (VIA), cytology-based screening, rapid human papillomavirus (HPV) DNA testing, or combined VIA and cytology testing. Simulations were run over 100 years. In base-case scenario analyses, we assumed a 70% vaccination coverage with lifelong protection and a 50% screening coverage. The outcome of interest was the incremental cost per Disability-Adjusted Life Year (DALY) averted. In base-case scenarios, compared to the next best strategy, the model predicted that VIA screening of women aged 30-65 years old every three years, combined with vaccination, was the most attractive option, costing 2 544 international dollars (I$) per DALY averted. Meanwhile, rapid HPV DNA testing was predicted to be more attractive than cytology-based screening or its combination with VIA. Among cytology-based screening options, combined VIA with conventional cytology testing was predicted to be the most attractive option. Multi-way sensitivity analyses did not change the results. Compared to rapid HPV DNA testing, VIA had a probability of cost-effectiveness of 73%. Compared to the vaccination only option, the probability that a program consisting of screening women every five years would be cost-effective was around 60% and 80% if the willingness-to-pay threshold is fixed at one and three GDP per capita, respectively. A VIA screening program in addition to a girl vaccination program was predicted to be the most attractive option in the health care context of Lao PDR. When compared with other screening methods, VIA was the primary recommended method for combination with vaccination in Lao PDR. PMID:27631732
Landmark, Tormod; Dale, Ola; Romundstad, Pål; Woodhouse, Astrid; Kaasa, Stein; Borchgrevink, Petter C
2018-05-13
Epidemiological studies of chronic pain frequently report high prevalence estimates. However, there is little information about the development and natural course of chronic pain. We followed a random sample of participants from a population-based study (HUNT 3) with annual measures over four years. Among those without chronic pain at baseline, the probability of developing moderate to severe chronic pain (cumulative incidence) during the first year was 5%, a pain status that was maintained among 38% at the second follow-up. The probability of developing chronic pain diminished substantially for those who maintained a status of no chronic pain over several years. Subjects with moderate to severe chronic pain at baseline had an 8% probability of recovery into no chronic pain, a status that was maintained for 52% on the second follow-up. The probability of recovery diminished substantially as a status of chronic pain was prolonged for several years. Pain severity, widespread pain, pain catastrophizing, depression and sleep were significant predictors of future moderate to severe chronic pain, both among subjects with and without chronic pain at baseline. These findings suggest that the prognosis is fairly good after a new onset of chronic pain. When the pain has lasted for several years, the prognosis becomes poor. The same social and psychological factors predict new onset and the prognosis of chronic pain. This article is protected by copyright. All rights reserved.
Pérez-Pedrogo, Coralee; Martínez-Taboas, Alfonso; González, Rafael A; Caraballo, José N; Albizu-García, Carmen E
2018-04-14
Latinos comprise 17.1% of the U.S. population and 33.1% of U.S. prisoners, yet they are underrepresented in the psychopathology literature. Despite higher rates of trauma among incarcerated individuals than in the general population, most previous research in this area focused primarily on samples of women, and very few studies examined sex differences in PTSD and traumatic experiences. In addition, there is a need for research assessing traumatic experiences and probable PTSD in male and female Latino inmates to inform culturally competent and sex-sensitive care for this vulnerable and underserved population. Our study examined whether male and female Latino inmates with probable Posttraumatic Stress Disorder (PTSD), based on a cut-off of 40 or more symptoms on the Davidson Trauma Scale (DTS), differed significantly by the number of event types experienced, the type of potentially traumatizing event, and co-occurring psychiatric conditions. A multi-stage sample design was used to select a probabilistic sample of 1,331 inmates from 26 penal institutions in Puerto Rico, of whom 1,179 participated in the study. Bivariate associations were calculated for each type of traumatic event and probable PTSD. The mean number of types of potentially traumatizing events experienced was comparable for both sexes (F = 3.83, M = 3.74), yet sex differences were found in the nature of the events. Women with probable PTSD had higher rates of rape and sexual abuse. Men had higher rates of experiencing combat in war, a life-threatening accident, witnessing violence, and being threatened with a weapon. Men with significant ADHD symptoms in childhood and with Generalized Anxiety Disorder (GAD) during adulthood were almost 5 and 7 times as likely to score above threshold on the DTS, whereas women were more than 3 times as likely in the presence of ADHD symptoms in childhood or depression during adulthood. This study underscores the need to improve understanding of the clinical manifestations of trauma and co-occurring psychiatric conditions for appropriate sex-sensitive interventions targeting Latinos living in prisons. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Katsuki, K.; Yang, D. Y.; Lim, J.; Nahm, W. H.; Nakanishi, T.; Seto, K.; Otsuka, M.; Kashima, K.
2014-12-01
Lagoons on the northeastern coast of South Korea formed during the early Holocene transgression. These lagoons shrank by about 5-30% during the first half of the 20th century due to terrestrial sediment input from soil erosion on reclaimed land. Their buried lagoonal sediments, however, record Holocene climate change. In this study, multi-centennial scale paleo-climate and paleo-ecosystem changes were investigated through analysis of these buried and present-day lagoon deposits. Based on diatom assemblage analysis of sediment from Lagoon Maeho, one of the east-coast lagoons in Korea, the lagoon formed about 8,400 years ago, and halophilic diatoms peaked three times within the last 8,400 years. The timing of these peaks coincides well with high sea-level periods reported in western Japan. Sea level on the east coast of Korea therefore also appears to have peaked three times during the mid-late Holocene, with lagoon salinity increasing in those periods. Apart from such sea-level dependent change, the salinity of Lagoon Maeho showed multi-centennial (200- or 400-year) scale periodic variation. Magnetic susceptibility (MS) also showed a clear 400-year periodicity in the mid-late Holocene. When MS was high, oligohalobous diatoms were abundant; halophilic diatoms and the total number of diatom valves increased when MS was low. This correspondence probably indicates that magnetic minerals flowed into the lagoon with river fresh water, and that the volume of freshwater inflow varied on a 400-year cycle. The same MS cycle was also confirmed in the sediments of other lagoons, so the change in freshwater inflow was not a local event but part of a regional environmental change. These results probably indicate that precipitation in northeastern South Korea has varied on a 400-year cycle. Based on lagoon bottom sediment, the change in diatom assemblages during the last 600 years corresponds well with the variation of Korean tree-ring delta 14C. There is a high possibility that water quality and ecosystems in the Korean lagoons were controlled by 200-400 year periodic precipitation change, further affected by solar irradiance change, possibly via changes in monsoon intensity.
Ma, Wenkang; Kang, Dianmin; Song, Yapei; Wei, Chongyi; Marley, Gifty; Ma, Wei
2015-11-24
The increasing population of marriage-based migrant women is disproportionately affected by AIDS/STDs in China, and social support plays a critical role. This study aims to describe the level of social support received by married migrant women in rural areas of Shandong province in comparison with non-migrant local women, to identify factors relevant to this social support among married migrant women, and to examine the correlation between social support level and AIDS/STD infection status in this group. A probability-based sample of 1,076 migrant and 1,195 local women was included in the study. A pre-tested field questionnaire was administered to participants through direct face-to-face interviews. The questionnaire contained questions on socio-demographic information, AIDS and STD prevalence information, and the Social Support Rating Scale (SSRS), which measures objective support, subjective support, and utilization of social support. Compared to local women, married migrant women had lower levels of social support in most dimensions. Multi-variable analysis revealed that relationship with spouse, average family income, number of children, education, engagement, and claimed reasons for moving correlate in various ways with one or all dimensions of the social support scores. Higher social support is also related to awareness of HIV and STD infection status in this group. Our findings provide further evidence that married migrant women have lower levels of social support, which may be related to certain social characteristics and to their awareness of AIDS and STD infection status, and that targeted interventions need to be developed for this population.
NASA Astrophysics Data System (ADS)
Qin, Y.; Rana, A.; Moradkhani, H.
2014-12-01
Multi-model downscaled-scenario products allow us to better assess the uncertainty of changes and variations in precipitation and temperature in current and future periods. Joint probability density functions (PDFs) of the two climatic variables can help us better understand their interdependence and thus, in turn, help in assessing the future with confidence. Using the joint distribution of temperature and precipitation is also of significant importance in hydrological applications and climate change studies. In the present study, we used a multi-model statistically downscaled-scenario ensemble of precipitation and temperature variables drawn from two different statistically downscaled climate datasets: 10 Global Climate Model (GCM) products downscaled from the CMIP5 daily dataset with the Bias Correction and Spatial Downscaling (BCSD) technique, generated at Portland State University, and 10 downscaled with the Multivariate Adaptive Constructed Analogs (MACA) technique, generated at the University of Idaho, leading to two ensemble time series from 20 GCM products. The ensemble PDFs of both precipitation and temperature are then evaluated for summer, winter, and annual periods for all 10 sub-basins across the Columbia River Basin (CRB). Finally, a copula is applied to establish the joint distribution of the two variables, enabling users to model their joint behavior with any level of correlation and dependency. Moreover, the probabilistic distribution helps remove the limitations on the marginal distributions of the variables in question. The joint distribution is then used to estimate the change trends of joint precipitation and temperature in the current and future periods, along with the probabilities of the given changes. Results indicate varied change trends of the joint distribution at summer, winter, and annual time scales in all 10 sub-basins. Probabilities of changes, as estimated from the joint precipitation and temperature, provide useful insights for hydrological and climate change predictions.
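A minimal Gaussian-copula sketch in Python (illustrative only: the study does not specify this copula family, and the synthetic series below stand in for the downscaled ensembles) shows how marginal series are coupled and a joint probability extracted:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic seasonal series standing in for temperature (deg C) and
# precipitation (mm); negatively correlated, as in many summer climates.
t = rng.normal(20, 3, 500)
p = np.maximum(0.1, 100 - 2.5 * t + rng.normal(0, 8, 500))

def normal_scores(x):
    # Empirical-CDF transform of the margin, mapped to standard-normal scores.
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1.0))

z = np.column_stack([normal_scores(p), normal_scores(t)])
rho = np.corrcoef(z, rowvar=False)[0, 1]          # Gaussian copula parameter

# Joint probability of a dry-and-hot season relative to the medians:
# P(Z_p < 0, Z_t > 0) = P(Z_p < 0) - P(Z_p < 0, Z_t < 0).
mvn = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
print(f"rho = {rho:.2f},  P(dry and hot) = {0.5 - mvn.cdf([0.0, 0.0]):.2f}")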
Belotti, Elisa; Weder, Nicole; Bufka, Luděk; Kaldhusdal, Arne; Küchenhoff, Helmut; Seibold, Heidi; Woelfing, Benno; Heurich, Marco
2015-01-01
In Central Europe, protected areas are too small to ensure survival of populations of large carnivores. In the surrounding areas, these species are often persecuted due to competition with game hunters. Therefore, understanding how predation intensity varies spatio-temporally across areas with different levels of protection is fundamental. We investigated the predation patterns of Eurasian lynx (Lynx lynx) on roe deer (Capreolus capreolus) and red deer (Cervus elaphus) in both protected areas and multi-use landscapes of the Bohemian Forest Ecosystem. Based on 359 roe and red deer killed by 10 GPS-collared lynx, we calculated the species-specific annual kill rates and tested for effects of season and lynx age, sex and reproductive status. Because roe and red deer in the study area concentrate in unprotected lowlands during winter, we modeled the spatial distribution of kills separately for summer and winter and calculated (i) the probability of a deer being killed by lynx and (ii) the expected number of kills for areas with different levels of protection. Significantly more roe deer (46.05–74.71/year/individual lynx) were killed than red deer (1.57–9.63/year/individual lynx), more deer were killed in winter than in summer, and lynx family groups had higher annual kill rates than adult male, single adult female and subadult female lynx. In winter, the probability of a deer being killed and the expected number of kills were higher outside the most protected part of the study area than inside; in summer, this probability did not differ between areas, and the expected number of kills was slightly larger inside than outside the most protected part of the study area. This indicates that the intensity of lynx predation in the unprotected part of the Bohemian Forest Ecosystem increases in winter, thus mitigation of conflicts in these areas should be included as a priority in the lynx conservation strategy. PMID:26379142
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to set up the binary-tree hierarchy, and according to the resulting hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are first extracted from those class clusters that contain only one type of sample. For clusters containing two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
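The sample-reduction idea at the heart of the method can be sketched as follows (a simplified rendition: the binary-tree hierarchy and mixture-degree logic are omitted, and the dataset is synthetic):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=20000, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Reduce each class to a set of cluster centres, then train the SVM on the
# much smaller reduced sample.
centres, labels = [], []
for c in np.unique(ytr):
    km = KMeans(n_clusters=50, n_init=3, random_state=0).fit(Xtr[ytr == c])
    centres.append(km.cluster_centers_)
    labels.append(np.full(50, c))
Xred, yred = np.vstack(centres), np.concatenate(labels)

clf = SVC(kernel="rbf", gamma="scale").fit(Xred, yred)
print(f"training set reduced from {len(Xtr)} to {len(Xred)} points")
print(f"test accuracy: {clf.score(Xte, yte):.3f}")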
Forestry inventory based on multistage sampling with probability proportional to size
NASA Technical Reports Server (NTRS)
Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.
1983-01-01
A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. LANDSAT data, panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
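Selection with probability proportional to size reduces, in the simplest with-replacement case, to weighted draws followed by a Hansen-Hurwitz estimate. A sketch with invented stand sizes and volumes:

import numpy as np

rng = np.random.default_rng(4)

# Stand areas (ha) act as the size measure for the primary sampling units.
areas = np.array([120.0, 45.0, 300.0, 80.0, 210.0, 60.0])
p = areas / areas.sum()

chosen = rng.choice(len(areas), size=3, replace=True, p=p)

# Hansen-Hurwitz estimator of the population total from per-unit volumes.
volumes = np.array([9.5, 3.1, 26.0, 6.2, 17.8, 4.4])   # illustrative unit totals
total_hat = np.mean(volumes[chosen] / p[chosen])
print(f"selected units: {chosen}, estimated total volume: {total_hat:.1f}")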
NASA Astrophysics Data System (ADS)
Komendantova, Nadejda; Patt, Anthony
2013-04-01
In December 2004, a multiple-hazard event devastated the Tamil Nadu province of India. The Sumatra-Andaman earthquake, with a magnitude of Mw = 9.1-9.3, caused the Indian Ocean tsunami, with wave heights up to 30 m and flooding that reached up to two kilometers inland in some locations. More than 7,790 persons were killed in the province of Tamil Nadu, 206 of them in its capital Chennai. The time lag between the earthquake and the tsunami's arrival in India was over an hour; therefore, had a suitable early warning system, a proper means of communicating the warning, and shelters for the population existed, then, while the destruction of infrastructure would not have been prevented, several thousand human lives would have been saved. India has over forty years of experience in the construction of cyclone shelters. With additional effort and investment, these shelters could be adapted to other types of hazards such as tsunamis and flooding, alongside the construction of new multi-purpose cyclone shelters (MPCS). It would therefore be possible to mitigate one hazard, such as cyclones, by constructing a network of shelters while at the same time adapting these shelters, with some additional investment, to also deal with, for example, tsunamis. In this historical case, the failure to consider multiple hazards caused significant human losses. The current paper investigates the patterns of the national decision-making process with regard to multiple-hazard mitigation measures and how behavioral and cognitive biases influenced national decision-makers' perceptions of the probabilities of multiple hazards and their choices for mitigation. Our methodology was based on the analysis of existing reports from national and international organizations as well as the available scientific literature on behavioral economics and natural hazards. The results identified several biases in the national decision-making process surrounding the construction of cyclone shelters. The availability heuristic produced a perception that a tsunami following an earthquake was of low probability, as the last large similar event had happened over a hundred years earlier. Another bias led to decisions being taken on the basis of experience rather than statistical evidence: experience showed that the so-called "Ring of Fire" generates submarine earthquakes and tsunamis in the Pacific Ocean, and this led decision-makers to neglect numerical estimates of the probability of a submarine earthquake in the Indian Ocean, even though seismologists were warning of such a large event. The bounded-rationality bias led to misperception of signals from the early warning center in the Pacific Ocean. The resulting limited concern produced risk mitigation measures that considered cyclone risks but paid much less attention to tsunami. Under loss aversion, the decision-makers perceived the losses connected with the necessary additional investment as greater than the benefits of mitigating a less probable hazard.
Jayaraman, Sudha P; Jiang, Yushan; Resch, Stephen; Askari, Reza; Klompas, Michael
2016-10-01
Interventions to contain two multi-drug-resistant Acinetobacter (MDRA) outbreaks reduced the incidence of multi-drug-resistant (MDR) organisms, specifically methicillin-resistant Staphylococcus aureus, vancomycin-resistant Enterococcus, and Clostridium difficile, in the general surgery intensive care unit (ICU) of our hospital. We therefore conducted a cost-effectiveness analysis of a model proactive infection-control program to reduce transmission of MDR organisms based on the practices used to control the MDRA outbreak. We created a model of a proactive infection control program based on the 2011 MDRA outbreak response, built a decision analysis model, and performed univariable and probabilistic sensitivity analyses to evaluate the cost-effectiveness of the proposed program compared with standard infection control practices. The cost of the proactive infection control program would be $68,509 per year. The incremental cost-effectiveness ratio (ICER) was calculated to be $3,804 per transmission of MDR organisms averted in a one-year period compared with standard infection control. On the basis of the probabilistic sensitivity analysis, at a willingness-to-pay (WTP) threshold of $14,000 per transmission averted the program would have a 42% probability of being cost-effective, rising to 100% at $22,000 per transmission averted. This analysis gives an estimated ICER for implementing a proactive program to prevent transmission of MDR organisms in the general surgery ICU. To better understand the causal relations between the critical steps in the program and the rate reductions, a randomized study of a package of interventions to prevent healthcare-associated infections should be considered.
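The two headline quantities, the ICER and the probability of cost-effectiveness at a willingness-to-pay threshold, follow directly from simulated cost and effect differences. A hedged sketch (all input distributions invented; they do not reproduce the study's probabilistic sensitivity analysis):

import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# PSA draws: (cost, transmissions averted) for the proactive program vs standard.
cost_new, cost_std = rng.normal(68509, 20000, n), rng.normal(30000, 8000, n)
eff_new, eff_std = rng.normal(12.0, 6.0, n), rng.normal(1.9, 1.0, n)

d_cost, d_eff = cost_new - cost_std, eff_new - eff_std
print(f"ICER: ${d_cost.mean() / d_eff.mean():,.0f} per transmission averted")

# Probability of cost-effectiveness = P(net monetary benefit > 0) at each WTP.
for wtp in (14000, 22000):
    print(f"P(cost-effective | WTP=${wtp:,}): {np.mean(wtp * d_eff - d_cost > 0):.0%}")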
Andersen, Judith P; Blosnich, John
2013-01-01
Background Adverse childhood experiences (e.g., physical, sexual and emotional abuse, neglect, exposure to domestic violence, parental discord, familial mental illness, incarceration and substance abuse) constitute a major public health problem in the United States. The Adverse Childhood Experiences (ACE) scale is a standardized measure that captures multiple developmental risk factors beyond sexual, physical and emotional abuse. Lesbian, gay, and bisexual (i.e., sexual minority) individuals may experience disproportionately higher prevalence of adverse childhood experiences. Purpose To examine, using the ACE scale, prevalence of childhood physical, emotional, and sexual abuse and childhood household dysfunction among sexual minority and heterosexual adults. Methods Analyses were conducted using a probability-based sample of data pooled from three U.S. states’ Behavioral Risk Factor Surveillance System (BRFSS) surveys (Maine, Washington, Wisconsin) that administered the ACE scale and collected information on sexual identity (n = 22,071). Results Compared with heterosexual respondents, gay/lesbian and bisexual individuals experienced increased odds of six of eight and seven of eight adverse childhood experiences, respectively. Sexual minority persons had higher rates of adverse childhood experiences (IRR = 1.66 gay/lesbian; 1.58 bisexual) compared to their heterosexual peers. Conclusions Sexual minority individuals have increased exposure to multiple developmental risk factors beyond physical, sexual and emotional abuse. We recommend the use of the Adverse Childhood Experiences scale in future research examining health disparities among this minority population. PMID:23372755
Charney, Noah D.; Kubel, Jacob E.; Eiseman, Charles S.
2015-01-01
Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds. PMID:25799224
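A small simulation makes the payoff of temporal adaptivity visible. This sketch (illustrative parameters, not the study's data) compares surveying in three fixed years against the three regionally best years, assuming year quality is known from sentinel ponds or a predictive model:

import numpy as np

rng = np.random.default_rng(9)
n_ponds, years, visits = 200, 10, 3
occupied = rng.random(n_ponds) < 0.3        # ponds that truly hold the species

# Regionally correlated detection: one shared "year quality" drives the
# per-visit detection probability at every occupied pond.
year_quality = rng.uniform(0.05, 0.60, years)

def survey(pick_years):
    found = np.zeros(n_ponds, dtype=bool)
    for y in pick_years:
        found |= occupied & (rng.random(n_ponds) < year_quality[y])
    return found[occupied].mean()           # fraction of true sites detected

print(f"fixed schedule:      {survey(range(visits)):.0%}")
print(f"temporally adaptive: {survey(np.argsort(year_quality)[-visits:]):.0%}")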
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie
2016-09-01
A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the OpenDaylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. Then, a dynamic bandwidth-on-demand (BoD) provisioning solution based on time scheduling is proposed for software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Group (SRLG)-disjoint routing schemes are adopted to separate the tenants for reliability. An SD-MDON testbed is built based on the proposed hierarchical control architecture, and the proposed time-scheduling-based BoD (Ts-BoD) solution is experimentally demonstrated on it. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.
Vos-Vromans, Desirée; Evers, Silvia; Huijnen, Ivan; Köke, Albère; Hitters, Minou; Rijnders, Nieke; Pont, Menno; Knottnerus, André; Smeets, Rob
2017-01-01
A multi-centre RCT has shown that multidisciplinary rehabilitation treatment (MRT) is more effective than cognitive behavioural therapy (CBT) in reducing fatigue over the long term for patients with chronic fatigue syndrome (CFS), but evidence on its cost-effectiveness is lacking. Our aim was to compare the cost-effectiveness of MRT versus CBT for patients with CFS from a societal perspective. A multi-centre randomized controlled trial comparing MRT with CBT was conducted among 122 patients with CFS, diagnosed using the 1994 criteria of the Centers for Disease Control and Prevention and aged between 18 and 60 years. The societal costs (healthcare costs, patient and family costs, and costs for loss of productivity), fatigue severity, quality of life, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs) were measured over a follow-up period of one year. The main outcome of the cost-effectiveness analysis was fatigue measured by the Checklist Individual Strength (CIS). The main outcome of the cost-utility analysis was the QALY based on the EuroQol-5D-3L utilities. Sensitivity analyses were performed, and uncertainty was assessed using cost-effectiveness acceptability curves and cost-effectiveness planes. The data of 109 patients (57 MRT and 52 CBT) were analyzed. MRT was significantly more effective in reducing fatigue at 52 weeks. The mean difference in QALYs between the treatments was not significant (0.09, 95% CI: -0.02 to 0.19). The total societal costs were significantly higher for patients allocated to MRT (a difference of €5,389, 95% CI: 2,488 to 8,091). Using fatigue as the primary outcome, MRT has a high probability of being the most cost-effective option, with an ICER of €856 per unit of the CIS fatigue subscale. Using the QALY as the primary outcome, the cost-utility analysis indicates that CBT has the higher likelihood of being more cost-effective. ISRCTN77567702.
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution, with each distribution in that product controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
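A stripped-down PC-style optimizer (a loose sketch of the idea, not the authors' algorithm: each coordinate is an "agent" holding a categorical distribution, updated by Boltzmann reweighting of sampled costs) can be written as:

import numpy as np

rng = np.random.default_rng(6)

K, d, T = 21, 4, 0.5                       # grid size, dimensions, temperature
grid = np.linspace(-3, 3, K)
cost = lambda x: np.sum(x**2 - np.cos(4 * x), axis=1)   # multimodal test function

q = np.full((d, K), 1.0 / K)               # product of per-coordinate distributions
for step in range(60):
    idx = np.stack([rng.choice(K, size=500, p=q[i]) for i in range(d)], axis=1)
    c = cost(grid[idx])
    for i in range(d):
        # Estimated expected cost of each value of coordinate i, then a
        # Boltzmann update of that agent's distribution.
        e = np.array([c[idx[:, i] == k].mean() if np.any(idx[:, i] == k) else c.max()
                      for k in range(K)])
        w = q[i] * np.exp(-(e - e.min()) / T)
        q[i] = w / w.sum()

print(f"modal solution per coordinate: {np.round(grid[q.argmax(axis=1)], 2)}")

Because each coordinate's distribution is updated from population statistics rather than by recombining bit strings, there is no encoding-resolution limit of the GA-allele kind.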
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-01-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power supply of a control unit is used as an example failure model. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the probabilities of the absorbing set, obtained from the differential equations, are plotted and verified. Through forward inference, the reliability of the control unit is determined under the different modes. Finally, weak nodes in the control unit are identified. PMID:29765629
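The Markov backbone of such a model is compact in code. A minimal sketch (a three-state element with invented transition probabilities, not the paper's control-unit example):

import numpy as np

# States: 0 good, 1 degraded, 2 failed; state 2 is the absorbing set.
P_no_repair = np.array([[0.95, 0.04, 0.01],
                        [0.00, 0.90, 0.10],
                        [0.00, 0.00, 1.00]])

# Condition-based maintenance: the degraded state is repaired with prob 0.6.
P_cbm = np.array([[0.95, 0.04, 0.01],
                  [0.60, 0.32, 0.08],
                  [0.00, 0.00, 1.00]])

def reliability(P, steps=50):
    pi = np.array([1.0, 0.0, 0.0])           # start in the good state
    for _ in range(steps):
        pi = pi @ P
    return 1.0 - pi[2]                        # probability the absorbing set is avoided

print(f"R(50) without repair: {reliability(P_no_repair):.3f}")
print(f"R(50) under CBM:      {reliability(P_cbm):.3f}")

A DBN generalizes this by unrolling the same conditional probability tables over time slices, which also accommodates imperfect repair and system-level series/parallel structure.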
Pérez de Los Cobos, José; Siñol, Núria; Puerta, Carmen; Cantillano, Vanessa; López Zurita, Cristina; Trujols, Joan
2011-01-30
To characterize those patients with probable adult attention deficit hyperactivity disorder (ADHD) who seek treatment for cocaine use disorders, and to estimate the prevalence of probable adult ADHD among these patients, we performed a cross-sectional, multi-center study at the outpatient facilities of 12 addiction treatment centers in Spain. Participants were treatment-seeking primary cocaine abusers recruited consecutively at one center and through convenience sampling at the other centers. Assessments included a semi-structured clinical interview focused on Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) ADHD criteria adapted to adulthood, and the Wender-Utah Rating Scale (WURS) for screening childhood history of ADHD as reported by patients. Probable adult ADHD was diagnosed when patients met DSM-IV criteria for ADHD in adulthood and scored WURS > 32. All participants were diagnosed with current cocaine dependence (n = 190) or abuse (n = 15). Patients with probable adult ADHD, compared with patients having no lifetime ADHD, were more frequently male, reported higher impulsivity, and began to use nicotine, alcohol, cannabis, or cocaine earlier. Before starting the current treatment, patients with probable adult ADHD also showed higher cocaine craving for the previous day, less frequent cocaine abstinence throughout the previous week, and higher use of cocaine and tobacco during the previous month. Impulsivity and male gender were the only independent risk factors for probable adult ADHD in a logistic regression analysis. The prevalence of probable adult ADHD was 20.5% in the sub-sample of patients recruited consecutively (n = 78). A diagnosis of probable adult ADHD strongly distinguishes among treatment-seeking primary cocaine abusers regarding past and current key aspects of their addictive disorder; one-fifth of these patients present with probable adult ADHD. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Public Attitudes toward Stuttering in Turkey: Probability versus Convenience Sampling
ERIC Educational Resources Information Center
Ozdemir, R. Sertan; St. Louis, Kenneth O.; Topbas, Seyhun
2011-01-01
Purpose: A Turkish translation of the "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. Method: A convenience sample of adults in Eskisehir, Turkey was compared with two replicates of a school-based,…
ERIC Educational Resources Information Center
Neman, Ronald S.; And Others
The study represents an extension of previous research involving the development of scales for the five-card, orally administered, and tape-recorded version of the Thematic Apperception Test(TAT). Scale development is documented and national norms are presented based on a national probability sample of 1,398 youths administered the Cycle III test…
ERIC Educational Resources Information Center
Rodes, Thomas W.
This is the second of three study reports on the national incidence of child care usage as well as consumer needs, preferences, attitudes and opinions on child care, based on 4609 personal interviews conducted in 1975 from a national probability sample of households with children under 14 years of age. The study was sponsored by the Office of…
A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tie, S. S.; Martini, P.; Mudd, D.
2017-02-15
In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. The catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.
Aggressive and chronic periodontitis in a population of Moroccan school students.
Kissa, Jamila; Chemlali, Sihame; El Houari, Bouchra; Amine, Khadija; Khlil, Nadia; Mikou, Salwa; Nadifi, Sellama; Albandar, Jasim M
2016-11-01
This study assessed the prevalence, clinical characteristics, and demographics of chronic and aggressive periodontitis in a representative sample drawn from a subpopulation in Morocco. Eight hundred and thirty students, representative of students aged 12 years and older attending schools in the Province of Benslimane, Morocco, were selected by multi-phased probability sampling. Their ages were 12-25 years (mean: 16.1 years), with 50% males and 50% females. Chronic and aggressive periodontitis were determined clinically. A total of 31% and 10.1% of the subjects had ≥4 mm and ≥6 mm attachment loss, respectively; 4.9% had aggressive periodontitis, and 6.4% had chronic periodontitis. Subjects with chronic periodontitis typically had 4-5 mm attachment loss affecting a few molars or premolars. Subjects with aggressive periodontitis had ≥5 mm attachment loss affecting multiple teeth, and 68% and 73% of these subjects had ≥6 mm attachment loss affecting maxillary and mandibular molars, respectively. Attachment loss and periodontitis were significantly more prevalent in the 19-25 years age group than in the 12-18 years age group. There were no significant differences in disease prevalence by gender or ethnic group (Arab versus Berber). This young Moroccan population is at high risk of destructive periodontal disease, and further studies are indicated to investigate the biological and environmental factors that may contribute to the increased risk of disease in this population. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Using GIS to generate spatially balanced random survey designs for natural resource applications.
Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B
2007-07-01
Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demand survey designs that achieve statistical rigor and are efficient but remain flexible to inevitable logistical or practical constraints during field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
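The flavor of a quadrant-recursive, hierarchically randomized design can be conveyed in a few lines (a loose sketch only; the published Reversed Randomized Quadrant-Recursive Raster algorithm handles unequal inclusion probabilities and irregular frames, which are omitted here):

import numpy as np

rng = np.random.default_rng(7)

def randomized_address(ix, iy, levels, perms):
    # Quadrant-recursive (Morton-style) address of a cell, with the four
    # quadrants randomly re-ordered at every level of the recursion.
    code = 0
    for lvl in range(levels - 1, -1, -1):
        quad = ((ix >> lvl) & 1) | (((iy >> lvl) & 1) << 1)
        code = (code << 2) | perms[levels - 1 - lvl][quad]
    return code

levels, n_sample = 5, 16                   # a 32 x 32 grid, 16 sample sites
perms = [rng.permutation(4) for _ in range(levels)]
cells = [(x, y) for x in range(2**levels) for y in range(2**levels)]
order = sorted(cells, key=lambda c: randomized_address(c[0], c[1], levels, perms))

# A systematic sample along this randomized recursive order is spatially
# well spread across the grid.
step = len(order) // n_sample
print(order[int(rng.integers(step))::step][:n_sample])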
Spatially-Explicit Holocene Drought Reconstructions in Amazonian Forests
NASA Astrophysics Data System (ADS)
McMichael, C.; Bush, M. B.
2014-12-01
Climate models predict increasing drought in Amazonian forests over the next century, and the synergy of drought and fire may lead to forest dieback. El Niño Southern Oscillation (ENSO) and the Atlantic Multi-decadal Oscillation (AMO) are two primary drivers of Amazonian drought, and each process has a spatially distinct manifestation in the Basin. Paleoecological reconstructions can contextualize the forest response to past drought periods. Stalagmite and lake sediment records have documented that the early to mid-Holocene, i.e. 10,000-5000 calibrated years before present (cal yr BP), was among the driest periods of the last 100,000 years in western Amazonia. Climatic conditions became wetter and more similar to the modern climate over the last 4000 cal yr BP, and fires rarely occurred in the absence of human activity. Yet there are currently no drought and fire reconstructions that examine the spatially explicit patterns of drought during the Holocene. Here, we present regional drought histories from southwestern and northeastern sections of Amazonia for the last 10,000 years that document the drought-fire dynamics resulting from both climatic processes. Our reconstructions were based on a compilation of dated soil charcoal fragments (N = 291) collected from within Amazonia sensu stricto, which were analyzed by region using summed probability analysis. The compiled soil charcoal dates contained limited evidence of fire over the last 10,000 years in some regions. Fire frequency rose markedly across the Basin, however, during the last 2000 years, indicating an increased human presence. Fire probabilities, and thus droughts, had similar increasing trajectories in southwestern and northeastern Amazonia from 1500-1100 cal yr BP, decoupled from 1100-740 cal yr BP, and then regained synchronicity from 740-500 cal yr BP. Fire probability declined markedly after 500 cal yr BP, coincident with European arrival in the Americas: native populations were decimated, and fire probabilities returned to levels similar to those before the rise 2000 years ago. These results suggest that the synergy of humans plus drought has played a large role in historical fire regimes in Amazonian forests for the last 2000 years.
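Summed probability analysis itself is simple: each dated fragment contributes its age uncertainty as a density, and the densities are stacked. A schematic sketch (Gaussian uncertainties and invented dates; real work uses calibrated, non-Gaussian radiocarbon distributions):

import numpy as np

dates = [(1850, 40), (1520, 60), (730, 30), (690, 45), (610, 50)]  # (cal yr BP, 1-sigma)

years = np.arange(0, 2501)
spd = np.zeros_like(years, dtype=float)
for mu, sigma in dates:
    spd += np.exp(-0.5 * ((years - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
spd /= len(dates)                     # normalise so the curve integrates to 1

print(f"summed probability peaks at {years[spd.argmax()]} cal yr BP")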
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
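When both conditionals are treated as Gaussian, the posterior combination reduces to precision weighting. A minimal sketch (illustrative numbers; the study's actual conditionals are estimated from atlas data, not assumed Gaussian):

import numpy as np

def fuse(mu_int, var_int, mu_loc, var_loc):
    # Posterior from two Gaussian conditionals, p(rho | intensity) and
    # p(rho | location), under a flat prior: precisions add.
    w_i, w_l = 1.0 / var_int, 1.0 / var_loc
    return (w_i * mu_int + w_l * mu_loc) / (w_i + w_l), 1.0 / (w_i + w_l)

# A voxel whose T1/T2 intensities weakly suggest soft tissue while the
# atlas location strongly suggests bone (values in HU, invented).
mu, var = fuse(mu_int=150.0, var_int=200.0**2, mu_loc=700.0, var_loc=120.0**2)
print(f"posterior HU estimate: {mu:.0f} +/- {np.sqrt(var):.0f}")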
NASA Astrophysics Data System (ADS)
Ziehmer, Malin Michelle; Nicolussi, Kurt; Schlüchter, Christian; Leuenberger, Markus
2017-04-01
High-resolution climate reconstructions based on tree-ring proxies are often limited by the individual segment length of living trees selected at the defined sampling sites, which mostly results in relatively short multi-centennial proxy series. A potential extension of living-wood records comprises the addition of subfossil and archeological wood remains, resulting in chronologies and associated climate reconstructions that cover a few millennia in central Europe (e.g. Büntgen et al., 2011). However, existing multi-millennial tree-ring width chronologies in central Europe rank among the longest continuous chronologies world-wide and span the entire Holocene (Becker et al., 1993; Nicolussi et al., 2009). So far, these chronologies have mainly been used for dating subfossil wood samples, floating chronologies and archeological artifacts, but only in part for reconstructing climate. Finds of Holocene wood remains in glacier forefields, peat bogs and small lakes allow us not only to establish such long-term tree-ring width records; they also offer the possibility of establishing multi-millennial proxy records for the entire Holocene by using a multi-proxy approach that includes both tree-ring width and triple stable isotope ratios. As temperature limits tree growth at the Alpine upper tree line, the existing tree-ring width records are currently limited to reconstructing a single environmental variable. In the framework of the project Alpine Holocene Tree Ring Isotope Records, we combine tree-ring width, cellulose content, and carbon, oxygen and hydrogen isotope series in a multi-proxy approach that allows the reconstruction of past environments by combining both Holocene wood remains and recent tree samples from two Alpine tree-line species. For this purpose, α-cellulose is prepared from 5-year tree-ring blocks following the procedure of Boettger et al. (2007) and subsequently crushed by ultrasonic homogenization (Laumer et al., 2009). The cellulose content is determined for each individual sample, and carbon, oxygen and hydrogen isotopic ratios are measured simultaneously (Loader et al., 2015). The isotope records of carbon, oxygen and hydrogen show distinct low-frequency trends for the Early and Mid-Holocene, but the individual series per proxy are often offset in their isotopic signature. As the sampling sites in our study are distributed along a SW-NE transect, the influence of the site conditions (latitude, longitude, elevation, exposition) and the tree species is tested, and a correction is subsequently applied to the individual series. In addition, the tree-ring width records operate as a helpful tool in detecting and attributing the influence of larch budmoth outbreaks on the cellulose content and isotope records. We here present a synthesis of the applied multi-proxy approach and its ability to reconstruct Holocene climate variability for the time span from 9000 to 3500 years b2k, covering the Early Holocene (9000 to 7200 years b2k) and Mid-Holocene (7200 to 4200 years b2k) and the transition to the Late Holocene (4200 to 3500 years b2k), as well as the recent 400 years including the modern warming. References: Becker, B. & Kromer, B. (1993) Palaeogeogr. Palaeoclimatol. Palaeoecol., 103(1): 67-71; Boettger, T. et al. (2007) Anal. Chem., 79: 4603-4612; Büntgen, U. et al. (2011) Science, 331(6017): 578-582; Laumer, W. et al. (2009) Rapid Commun. Mass Spectrom., 23: 1934-1940; Loader, N.J. et al. (2015) Anal. Chem., 87: 376-380; Nicolussi, K. et al. (2009) The Holocene, 19(6): 909-920.
Multi-mode reliability-based design of horizontal curves.
Essa, Mohamed; Sayed, Tarek; Hussein, Mohamed
2016-08-01
Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design focused on evaluating the probability of noncompliance for only one mode of noncompliance such as insufficient sight distance. However, in many design situations, more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, utilizing a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of Sea-to-Sky Highway located between Vancouver and Whistler, in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance. Copyright © 2016 Elsevier Ltd. All rights reserved.
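To make the system-reliability idea concrete, the sketch below estimates a two-mode probability of noncompliance by crude Monte Carlo; the limit-state comparisons and all input distributions are invented placeholders, not the calibrated models used in the paper. Because both modes share no inputs here, the system probability sits between the larger single-mode probability and the sum of the two.

```python
import numpy as np

# Minimal Monte Carlo sketch of multi-mode (series-system) noncompliance.
# The two limit states (sight distance, skidding) and every distribution
# below are illustrative assumptions, not the paper's calibrated models.
rng = np.random.default_rng(42)
n = 100_000

available_ssd = rng.normal(160.0, 15.0, n)   # available sight distance (m)
required_ssd = rng.normal(140.0, 10.0, n)    # required stopping sight distance (m)
friction_supply = rng.normal(0.35, 0.04, n)  # available side friction
friction_demand = rng.normal(0.28, 0.03, n)  # demanded side friction

fail_sight = available_ssd < required_ssd    # mode 1: insufficient sight distance
fail_skid = friction_supply < friction_demand  # mode 2: vehicle skidding

print(f"P(noncompliance, sight)  = {fail_sight.mean():.4f}")
print(f"P(noncompliance, skid)   = {fail_skid.mean():.4f}")
# series system: the design is noncompliant if ANY mode fails
print(f"P(noncompliance, system) = {(fail_sight | fail_skid).mean():.4f}")
```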
Some thoughts on Mercurian resources
NASA Astrophysics Data System (ADS)
Gillett, Stephen L.
Virtually all scenarios for Solar System development ignore Mercury, but such inattention is probably undeserved. Once viable lunar and (probably) asteroidal facilities are established in the next century, Mercury warrants further investigation. Mercury's high solar energy density is a major potential advantage for space-based industries. Indeed, despite its higher gravity, Mercury is roughly twice as easy to leave as the Moon if the additional solar flux is taken into account. Moreover, with solar-driven technologies such as solar sails or electric propulsion, its depth in the Sun's gravity well is less important. Because Mercury is airless and almost certainly waterless, it will be an obvious place to export lunar technology, which will have been developed to deal with very similar conditions. Methods for extracting resources from anhydrous silicates will be particularly germane. Even without solar-powered propulsion, the discovery of low-delta-V access via multiple Venus and Earth encounters makes the planet easier to reach than had been thought. Technology developed for multi-year missions to asteroids and Mars should be readily adaptable to such Mercurian missions. Mercury will not be our first outpost in the Solar System. Nonetheless, as facilities are established in cis-Earth space, it probably merits attention as a next step for development.
Alcohol-related legal infractions and student retention.
Thompson, Kevin M
2007-09-01
The present study employed municipal alcohol-related arrest reports to determine if being arrested/cited reduced the probability of academic retention. Data on alcohol-related legal infractions involving 1,310 college students were gathered over a 4-year period. First- through third-year students were identified in the database by cross-checking names in the campus directory. A random sample of nonarrested students functioned as the comparison group (n = 856). Students not appearing in the directory the following year were defined as nonretained students. Retention was not affected by the experience of one alcohol-related legal infraction. Retention odds were 31% lower for students experiencing multiple arrests, however, than for nonarrested or single-arrested students. Gender moderated the association between arrest and retention, with women who had been arrested more likely to return to school than those who had not been arrested. Retention odds were higher for arrested/cited students if they were in their second or third year of college, a fraternity/sorority member, or charged with an offense other than driving under the influence. Multi-arrested college students are at risk for attrition. Immersion in college life may reduce the odds of attrition among arrested college students.
Izadi, Shahrokh; Zahraei, Seyed Mohsen; Mokhtari-Azad, Talat
2018-02-08
Eight months after the mass immunization campaign of November 2015 against measles and rubella in the southeast of Iran, a serosurvey was performed to evaluate the sero-immunity level of the people living in the region. Using multi-stage probability-proportional-to-size cluster sampling, the sera of 1,056 participants, ranging from 15 months to 20 years of age, were tested for measles and rubella IgG antibodies in the National Reference Laboratory at Tehran University of Medical Sciences, Tehran, Iran. The seroprevalence rates of antibodies against measles and rubella in the age groups below 16 years were 98.4% and 93.2%, respectively. In the age group of 16 to 20 years, which was not targeted by the mass immunization campaign, these rates were 91.7% and 87.4%, respectively. The herd immunity of the age groups below 16 years, which were the target of the campaign, is favourably high and reassuring for both measles and rubella. Supplementary vaccination campaigns play a substantial role in filling gaps in herd immunity.
Sharma, Koustubh; Bayrakcismith, Rana; Tumursukh, Lkhagvasumberel; Johansson, Orjan; Sevger, Purevsuren; McCarthy, Tom; Mishra, Charudutt
2014-01-01
Population monitoring programmes and estimation of vital rates are key to understanding the mechanisms of population growth, decline or stability, and are important for effective conservation action. We report, for the first time, the population trends and vital rates of the endangered snow leopard based on camera trapping over four years in the Tost Mountains, South Gobi, Mongolia. We used robust design multi-season mark-recapture analysis to estimate the trends in abundance, sex ratio, survival probability and the probability of temporary emigration and immigration for adult and young snow leopards. The snow leopard population remained constant over most of the study period, with no apparent growth (λ = 1.08 ± 0.25). Comparison of model results with the "known population" of radio-collared snow leopards suggested high accuracy in our estimates. Although seemingly stable, vigorous underlying dynamics were evident in this population, with the adult sex ratio shifting from being male-biased to female-biased (1.67 to 0.38 males per female) during the study. Adult survival probability was 0.82 (SE ± 0.08), and that of young was 0.83 (SE ± 0.15) and 0.77 (SE ± 0.2), respectively, before and after the age of 2 years. Young snow leopards showed a high probability of temporary emigration and immigration (0.6, SE ± 0.19 and 0.68, SE ± 0.32 before and after the age of 2 years), though the adults did not (0.02, SE ± 0.07). While the current female bias in the population and the number of cubs born each year seemingly render the study population safe, the vigorous dynamics suggest that the situation can change quickly. The reduction in the proportion of male snow leopards may be indicative of continuing anthropogenic pressures. Our work reiterates the importance of monitoring both the abundance and population dynamics of species for effective conservation.
Accreting SMBH in the COSMOS field: the connection to their host galaxies.
NASA Astrophysics Data System (ADS)
Merloni, A.; Bongiorno, A.
Using the rich multi-band photometry in the COSMOS field, we explore the host galaxy properties of a large, complete sample of X-ray- and spectroscopically-selected AGN. Based on a two-component fit to their Spectral Energy Distribution (SED), we derive rest-frame magnitudes, colours, stellar masses and star formation rates up to z ≈ 3. The probability for a galaxy to host a black hole growing at any given specific accretion rate (the ratio of X-ray luminosity to the host stellar mass) is independent of the galaxy mass and follows a power-law distribution in L_X/M. By looking at the normalisation of this probability distribution, we show how the incidence of AGN increases with redshift as rapidly as (1+z)^4.2, in close resemblance to the overall evolution of the specific star formation rate. Although AGN activity and star formation appear to have a common triggering mechanism, we do not find any 'smoking gun' signalling powerful AGN influence on the global properties of their host galaxies.
Multi-Agent Cooperative Target Search
Hu, Jinwen; Xie, Lihua; Xu, Jun; Xu, Zhao
2014-01-01
This paper addresses a vision-based cooperative search for multiple mobile ground targets by a group of unmanned aerial vehicles (UAVs) with limited sensing and communication capabilities. The airborne camera on each UAV has a limited field of view, and its target discriminability varies as a function of altitude. First, by dividing the whole surveillance region into cells, a probability map can be formed for each UAV indicating the probability of target existence within each cell. Then, we propose a distributed probability-map updating model which includes the fusion of measurement information, information sharing among neighboring agents, and information decay and transmission due to environmental changes such as target movement. Furthermore, we formulate the target search problem as a multi-agent cooperative coverage control problem by optimizing the collective coverage area and the detection performance. The proposed map updating model and the cooperative control scheme are distributed, i.e., each agent communicates only with its neighbors within its communication range. Finally, the effectiveness of the proposed algorithms is illustrated by simulation. PMID:24865884
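The per-cell Bayesian update at the heart of such probability maps can be sketched as follows. The detection and false-alarm probabilities are assumed sensor parameters, and the fusion, sharing, and decay terms of the paper's full model are omitted here.

```python
import numpy as np

# Sketch of a per-cell Bayesian probability-map update for target search.
# p_d (detection) and p_f (false alarm) are assumed sensor parameters.
def update_map(prior, observed, p_d=0.8, p_f=0.1):
    """Bayes update of P(target in cell) after one sensor sweep."""
    like_present = np.where(observed, p_d, 1.0 - p_d)  # P(z | target present)
    like_absent = np.where(observed, p_f, 1.0 - p_f)   # P(z | target absent)
    return like_present * prior / (
        like_present * prior + like_absent * (1.0 - prior))

prior = np.full((4, 4), 0.5)       # uninformative initial map
z = np.zeros((4, 4), dtype=bool)
z[2, 3] = True                     # a detection in one cell this sweep
posterior = update_map(prior, z)
print(posterior.round(3))          # detected cell rises, the rest fall
```

A simple stand-in for the paper's information-decay term would be to relax each posterior toward 0.5 between sweeps, reflecting growing uncertainty as targets move.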
Infinite capacity multi-server queue with second optional service channel
NASA Astrophysics Data System (ADS)
Ke, Jau-Chuan; Wu, Chia-Huang; Pearn, Wen Lea
2013-02-01
This paper deals with an infinite-capacity multi-server queueing system with a second optional service (SOS) channel. The inter-arrival times of arriving customers, the service times of the first essential service (FES), and those of the SOS channel are all exponentially distributed. A customer may leave the system after the FES with probability (1-θ), or at the completion of the FES may immediately require a SOS with probability θ (0 ≤ θ ≤ 1). The formulae for computing the rate matrix and stationary probabilities are derived by means of a matrix-analytic approach. A cost model is developed to determine, simultaneously, the optimal number of servers and the two service rates that minimize the total expected cost per unit time. A quasi-Newton method is employed to solve the optimization problem. Under optimal operating conditions, numerical results are provided in which several system performance measures are calculated based on assumed numerical values of the system parameters.
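The service model itself is easy to sketch by Monte Carlo: each customer's total service is an exponential FES draw plus, with probability θ, an exponential SOS draw. The rates below are assumed for illustration, and the paper's matrix-analytic solution for the stationary probabilities is not reproduced here.

```python
import numpy as np

# Sketch of the FES + optional SOS service model. All rates are assumed.
rng = np.random.default_rng(1)
lam, mu1, mu2, theta, servers = 3.0, 2.0, 1.5, 0.4, 3

n = 1_000_000
fes = rng.exponential(1.0 / mu1, n)                    # first essential service
sos = rng.exponential(1.0 / mu2, n) * (rng.random(n) < theta)  # optional SOS
total = fes + sos                                      # per-customer service time

mean_service = total.mean()                            # E[S] = 1/mu1 + theta/mu2
rho = lam * mean_service / servers                     # offered load per server
print(f"E[S] = {mean_service:.4f} (exact {1/mu1 + theta/mu2:.4f})")
print(f"rho  = {rho:.4f} (the queue is stable only if rho < 1)")
```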
NASA Astrophysics Data System (ADS)
Stevens, A. H.; Gentry, D.; Amador, E.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Jacobsen, M.; Kirby, J.; McCaig, H.; Murukesan, G.; Rader, E.; Rennie, V.; Schwieterman, E.; Sutton, S.; Tan, G.; Yin, C.; Cullen, D.; Geppert, W.; Stockton, A.
2018-04-01
We detail multi-year field investigations in Icelandic Mars analogue environments that have yielded results that can help inform strategies for sample selection and downselection for Mars Sample Return.
Cranford, James A; McCabe, Sean Esteban; Boyd, Carol J; Slayden, Janie; Reed, Mark B; Ketchie, Julie M; Lange, James E; Scott, Marcia S
2008-01-01
This study conducted a follow-up telephone survey of a probability sample of college students who did not respond to a Web survey to determine correlates of and reasons for nonresponse. A stratified random sample of 2502 full-time first-year undergraduate students was invited to participate in a Web-based survey. A random sample of 221 students who did not respond to the original Web survey completed an abbreviated version of the original survey by telephone. Nonresponse did not vary by gender, but nonresponse was higher among Blacks and Hispanics compared to Whites, and among Blacks compared to Asians. Nonresponders reported lower frequency of drinking in the past 28 days, lower levels of past-year and past-28-day heavy episodic drinking, and more time spent preparing for classes than responders. The most common reasons for nonresponse were "too busy" (45.7%), "not interested" (18.1%), and "forgot to complete survey" (18.1%). Reasons for nonresponse to Web surveys among college students are similar to reasons for nonresponse to mail and telephone surveys, and some nonresponse reasons vary as a function of alcohol involvement.
Izquierdo-Sotorrío, Eva; Holgado-Tello, Francisco P.; Carrasco, Miguel Á.
2016-01-01
This study examines the relationships between perceived parental acceptance and children’s behavioral problems (externalizing and internalizing) from a multi-informant perspective. Using mothers, fathers, and children as sources of information, we explore the informant effect and incremental validity. The sample was composed of 681 participants (227 children, 227 fathers, and 227 mothers). Children’s (40% boys) ages ranged from 9 to 17 years (M = 12.52, SD = 1.81). Parents and children completed both the Parental Acceptance Rejection/Control Questionnaire (PARQ/Control) and the check list of the Achenbach System of Empirically Based Assessment (ASEBA). Statistical analyses were based on the correlated uniqueness multitrait-multimethod matrix (model MTMM) by structural equations and different hierarchical regression analyses. Results showed a significant informant effect and a different incremental validity related to which combination of sources was considered. A multi-informant perspective rather than a single one increased the predictive value. Our results suggest that mother–father or child–father combinations seem to be the best way to optimize the multi-informant method in order to predict children’s behavioral problems based on perceived parental acceptance. PMID:27242582
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, outlier-corrupted process and measurement noise can severely reduce the performance of the probability hypothesis density (PHD) filter. To address this problem, this paper proposes a novel PHD filter, called the Student's t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process and measurement noise as Student's t distributions and approximates the multi-target intensity as a mixture of Student's t components to be propagated in time. A closed-form PHD recursion is then obtained based on the Student's t approximation. Our approach makes full use of the heavy-tailed characteristic of the Student's t distribution to handle situations with heavy-tailed process and measurement noise. The simulation results verify that the proposed filter can overcome the negative effect generated by outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
Gordon, Allegra R; Conron, Kerith J; Calzo, Jerel P; White, Matthew T; Reisner, Sari L; Austin, S Bryn
2018-04-01
Young people may experience school-based violence and bullying victimization related to their gender expression, independent of sexual orientation identity. However, the associations between gender expression and bullying and violence have not been examined in racially and ethnically diverse population-based samples of high school students. This study includes 5469 students (13-18 years) from the 2013 Youth Risk Behavior Surveys conducted in 4 urban school districts. Respondents were 51% Hispanic/Latino, 21% black/African American, 14% white. Generalized additive models were used to examine the functional form of relationships between self-reported gender expression (range: 1 = Most gender conforming, 7 = Most gender nonconforming) and 5 indicators of violence and bullying victimization. We estimated predicted probabilities across gender expression by sex, adjusting for sexual orientation identity and potential confounders. Statistically significant quadratic associations indicated that girls and boys at the most gender conforming and nonconforming ends of the scale had elevated probabilities of fighting and fighting-related injury, compared to those in the middle of the scale (p < .05). There was a significant linear relationship between gender expression and bullying victimization; every unit increase in gender nonconformity was associated with 15% greater odds of experiencing bullying (p < .0001). School-based victimization is associated with conformity and nonconformity to gender norms. School violence prevention programs should include gender diversity education. © 2018, American School Health Association.
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
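A minimal sketch of the stochastic-sampling idea, using scipy's Latin-hypercube sampler on an assumed two-variable limit state; the capacity-minus-load form and both distributions are illustrative, not the report's model.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin-hypercube estimate of a failure probability. The limit state g()
# and the input distributions are assumed examples for illustration only.
sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=10_000)            # stratified uniforms in [0, 1)^2

load = norm.ppf(u[:, 0], loc=100.0, scale=15.0)       # demand
capacity = norm.ppf(u[:, 1], loc=150.0, scale=20.0)   # resistance

g = capacity - load                     # failure when g < 0
print(f"Estimated failure probability: {(g < 0).mean():.4f}")
```

Relative to plain Monte Carlo, the stratification forces each marginal to be sampled evenly, which typically reduces the variance of the estimate for a fixed sample size.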
Cruise design for a 5-year period of the 50-year timber sales in Alaska.
John W. Hazard
1985-01-01
Sampling rules and estimation procedures are described for a new cruise design that was developed for 50-year timber sales in Alaska. An example is given of the rate redetermination cruise and analysis for the 1984-1989 period of the Ketchikan Pulp Company sale. In addition, methodology is presented for an alternative technique of sampling with probability...
Sampling considerations for disease surveillance in wildlife populations
Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.
2008-01-01
Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
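The convenience-sample bias the authors describe can be reproduced in a toy simulation; the landscape, accessibility fractions, and prevalence values below are all assumed.

```python
import numpy as np

# Toy simulation contrasting probability and convenience samples when
# disease is spatially clustered: the convenience sample over-draws from
# accessible (here, low-prevalence) animals. All numbers are assumed.
rng = np.random.default_rng(0)

n_animals = 10_000
near_road = rng.random(n_animals) < 0.3          # easily accessible animals
prev = np.where(near_road, 0.02, 0.10)           # disease clustered off-road
infected = rng.random(n_animals) < prev

# probability sample: simple random sample of 500 animals
srs = rng.choice(n_animals, 500, replace=False)

# convenience sample: 90% of selection weight goes to near-road animals
weights = np.where(near_road, 0.9 / near_road.sum(), 0.1 / (~near_road).sum())
conv = rng.choice(n_animals, 500, replace=False, p=weights)

print(f"true prevalence        : {infected.mean():.3f}")
print(f"probability-sample est.: {infected[srs].mean():.3f}")
print(f"convenience-sample est.: {infected[conv].mean():.3f}")  # biased low
```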
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2007-04-01
This paper presents an autonomous robot control architecture based on an artificial emotional system. A hidden Markov model is developed as the mathematical background for stochastic emotional and behavioral transitions. The motivation module of the architecture acts as a behavioral-gain generator for achieving multi-objective robot tasks. According to the emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors. The motivational gain effects of the proposed architecture can also be observed on the executing behaviors during simulation.
The SDSS-III Multi-object APO Radial-velocity Exoplanet Large-area Survey
NASA Astrophysics Data System (ADS)
Ge, Jian; Mahadevan, S.; Lee, B.; Wan, X.; Zhao, B.; van Eyken, J.; Kane, S.; Guo, P.; Ford, E. B.; Agol, E.; Gaudi, S.; Fleming, S.; Crepp, J.; Cohen, R.; Groot, J.; Galvez, M.; Liu, J.; Ford, H.; Schneider, D.; Seager, S.; Hawley, S. L.; Weinberg, D.; Eisenstein, D.
2007-12-01
As part of the SDSS-III survey in 2008-2014, the Multi-object APO Radial-Velocity Exoplanet Large-area Survey (MARVELS) will conduct the largest ground-based Doppler planet survey to date using the SDSS telescope and new-generation multi-object Doppler instruments with 120-object capability and 10-20 m/s Doppler precision. The baseline survey plan is to monitor a total of 11,000 V=8-12 stars (~10,000 main sequence stars and ~1,000 giant stars) over 800 square degrees over 6 years. The primary goal is to produce a large, statistically well-defined sample of giant planets (~200) with a wide range of masses (~0.2-10 Jupiter masses) and orbits (1 day-2 years) drawn from a large number of host stars with a diverse set of masses, compositions, and ages, for studying the diversity of extrasolar planets and constraining planet formation, migration, and dynamical evolution of planetary systems. The survey data will also be used to provide a statistical sample for theoretical comparison, to discover rare systems, and to identify signposts of lower-mass or more distant planets. Early science results from the pilot program will be reported. We would like to thank the SDSS MC for allocation of the telescope time and the W.M. Keck Foundation, NSF, NASA and UF for support.
NASA Astrophysics Data System (ADS)
McDonald, G. W.; Cronin, S. J.; Kim, J.-H.; Smith, N. J.; Murray, C. A.; Procter, J. N.
2017-12-01
The economic impacts of volcanism extend well beyond the direct costs of loss of life and asset damage. This paper presents one of the first attempts to assess the economic consequences of disruption associated with volcanic impacts at a range of temporal and spatial scales using multi-regional and dynamic computable general equilibrium (CGE) modelling. Based on the last decade of volcanic research findings at Mt. Taranaki, three volcanic event scenarios (Tahurangi, Inglewood and Opua) differentiated by critical physical thresholds were generated, and the corresponding disruption economic impacts were calculated for each scenario. Under the Tahurangi scenario (annual probability of 0.01-0.02), a small-scale explosive (Volcanic Explosivity Index (VEI) 2-3) and dome-forming eruption, the economic impacts were negligible, with complete economic recovery within a year. The larger Inglewood sub-Plinian to Plinian eruption scenario (VEI > 4, annualised probability of 0.003) produced significant impacts on the Taranaki regional economy of $207 million (2007 New Zealand dollars, representing 4.0% of regional gross domestic product (GDP) one year after the event), from which recovery will take around 5 years. The Opua scenario, the largest-magnitude volcanic hazard modelled, is a major flank collapse and debris avalanche event with an annual probability of 0.00018. The associated economic impacts of this scenario were $397 million (representing 7.7% of regional GDP one year after the event), with the Taranaki regional economy suffering permanent structural changes. Our dynamic analysis illustrates that different economic impacts play out at different stages of a volcanic crisis. We also discuss the key strengths and weaknesses of our modelling, along with potential extensions.
Survival of Parents and Siblings of Supercentenarians
Perls, Thomas; Kohler, Iliana V.; Andersen, Stacy; Schoenhofen, Emily; Pennington, JaeMi; Young, Robert; Terry, Dellara; Elo, Irma T.
2011-01-01
Background: Given previous evidence of familial predisposition for longevity, we hypothesized that siblings and parents of supercentenarians (age ≥ 110 years) were predisposed to survival to very old age and that, relative to their birth cohorts, their relative survival probabilities (RSPs) are even higher than what has been observed for the siblings of centenarians. Methods: Mean age at death conditional upon survival to ages 20 and 50, and survival probabilities from ages 20 and 50 to higher ages, were determined for 50 male and 56 female siblings and 54 parents of 29 supercentenarians. These estimates were contrasted with comparable estimates based on birth cohort-specific mortality experience for the United States and Sweden. Results: Conditional on survival to age 20 years, the mean age at death of supercentenarians' siblings was ~81 years for men and women. Compared with the respective Swedish and U.S. birth cohorts, these estimates were 17%-20% (12-14 years) higher for the brothers and 11%-14% (8-10 years) higher for the sisters. Sisters had a 2.9 times greater probability and brothers a 4.3 times greater probability of survival from age 20 to age 90. Mothers of supercentenarians had a 5.8 times greater probability of surviving from age 50 to age 90. Fathers also experienced an increased survival probability from age 50 to age 90 of 2.7, but it failed to attain statistical significance. Conclusions: The RSPs of siblings and mothers of supercentenarians revealed a substantial survival advantage and were most pronounced at the oldest ages. The RSP to age 90 for siblings of supercentenarians was approximately the same as that reported for siblings of centenarians. It is possible that greater RSPs are observed for reaching even higher ages such as 100 years, but a larger sample of supercentenarians and their siblings and parents is needed to investigate this possibility. PMID:17895443
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letant, S E; Kane, S R; Murphy, G A
2008-05-30
This note presents a comparison of the Most-Probable-Number Rapid Viability (MPN-RV) PCR method and traditional culture methods for the quantification of Bacillus anthracis Sterne spores in macrofoam swabs generated by the Centers for Disease Control and Prevention (CDC) for a multi-center validation study aimed at testing environmental swab processing methods for recovery, detection, and quantification of viable B. anthracis spores from surfaces. Results show that spore numbers provided by the MPN RV-PCR method were in statistical agreement with the CDC conventional culture method for all three levels of spores tested (10^4, 10^2, and 10 spores), even in the presence of dirt. In addition to detecting low levels of spores in environmental conditions, the MPN RV-PCR method is specific and compatible with automated high-throughput sample processing and analysis protocols.
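For reference, the most-probable-number calculation itself is a small maximum-likelihood problem. The sketch below uses invented tube counts rather than the study's data, and assumes the standard dilution model P(tube positive) = 1 - exp(-concentration x volume).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Most-probable-number (MPN) estimation by maximum likelihood.
# Tube counts and inoculum volumes are made-up example data.
volumes = np.array([1.0, 0.1, 0.01])   # mL of sample per tube at each dilution
n_tubes = np.array([5, 5, 5])          # tubes per dilution
positives = np.array([5, 3, 1])        # positive tubes observed

def neg_log_lik(log_conc):
    conc = np.exp(log_conc)
    p = 1.0 - np.exp(-conc * volumes)          # P(tube positive) per dilution
    p = np.clip(p, 1e-12, 1 - 1e-12)           # numerical safety
    ll = positives * np.log(p) + (n_tubes - positives) * np.log(1.0 - p)
    return -ll.sum()

res = minimize_scalar(neg_log_lik, bounds=(-5.0, 10.0), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.2f} organisms per mL")
```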
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
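The range-enclosure property of the Bernstein expansion is easy to demonstrate in one dimension: the Bernstein coefficients of a polynomial on [0, 1] bound its range, and the bounds tighten under degree elevation or subdivision. The polynomial below is an assumed example, not one of the paper's systems.

```python
import numpy as np
from math import comb

# Range bounding via Bernstein expansion on [0, 1]: for
# p(x) = sum_j a[j] x^j of degree n, the Bernstein coefficients
# b_i = sum_{j<=i} [C(i,j)/C(n,j)] a[j] satisfy min(b) <= p(x) <= max(b).
def bernstein_bounds(a):
    n = len(a) - 1
    b = [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
         for i in range(n + 1)]
    return min(b), max(b)

a = [0.5, -2.0, 3.0]                  # p(x) = 0.5 - 2x + 3x^2 (assumed example)
lo, hi = bernstein_bounds(a)
x = np.linspace(0.0, 1.0, 1001)
p = np.polyval(a[::-1], x)            # polyval wants descending coefficients
print(f"Bernstein bounds: [{lo:.3f}, {hi:.3f}]")   # enclosure of the range
print(f"true range      : [{p.min():.3f}, {p.max():.3f}]")
```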
Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.
Frey, Othmar; Morsdorf, Felix; Meier, Erich
2008-09-24
In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse response of simulated point targets and an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are presented. The respective P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).
Taubmann, Julia; Sharma, Koustubh; Uulu, Kubanychbek Zhumabai; Hines, James; Mishra, Charudutt
2015-01-01
The Endangered snow leopard Panthera uncia occurs in the Central Asian Mountains, which cover c. 2 million km2. Little is known about its status in the Kyrgyz Alay Mountains, a relatively narrow stretch of habitat connecting the southern and northern global ranges of the species. In 2010 we gathered information on current and past (1990, the last year of the Soviet Union) distributions of snow leopards and five sympatric large mammals across 14,000 km2 of the Kyrgyz Alay. We interviewed 95 key informants from local communities. Across 49 400-km2 grid cells we obtained 1,606 and 962 records of species occurrence (site use) in 1990 and 2010, respectively. The data were analysed using the multi-season site occupancy framework to incorporate uncertainty in detection across interviewees and time periods. High probability of use by snow leopards in the past was recorded in > 70% of the Kyrgyz Alay. Between the two sampling periods 39% of sites showed a high probability of local extinction of snow leopard. We also recorded high probability of local extinction of brown bear Ursus arctos (84% of sites) and Marco Polo sheep Ovis ammon polii (47% of sites), mainly in regions used intensively by people. Data indicated a high probability of local colonization by lynx Lynx lynx in 41% of the sites. Although wildlife has declined in areas of central and eastern Alay, regions in the north-west, and the northern and southern fringes appear to retain high conservation value.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors as those of the full time series, i.e., these characteristics are rather robust in multi time scale. However, the cross-correlation between individual stocks and the return-volatility correlation are time scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at a couple of days, while by the medium mode when returns are sampled at dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063
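A sketch of the decomposition-and-grouping step, assuming the third-party PyEMD package is installed and using a synthetic return series; the grouping cut-offs (which IMFs count as fast, medium, or slow) are illustrative choices, not the paper's.

```python
import numpy as np
from PyEMD import EMD  # assumes the third-party PyEMD package is available

# Decompose a synthetic "return" series into intrinsic mode functions
# (IMFs), then group them into fast / medium / slow modes as in the paper.
rng = np.random.default_rng(3)
t = np.arange(2048)
returns = (np.sin(2 * np.pi * t / 8)            # high-frequency component
           + 0.5 * np.sin(2 * np.pi * t / 128)  # medium-frequency component
           + 0.1 * t / 2048                     # slow trend
           + 0.3 * rng.standard_normal(t.size)) # noise

imfs = EMD().emd(returns)          # IMFs ordered from high to low frequency
fast = imfs[:2].sum(axis=0)        # e.g. IMFs 1-2 -> fast mode (assumed split)
medium = imfs[2:5].sum(axis=0)     # IMFs 3-5 -> medium mode
slow = imfs[5:].sum(axis=0)        # remainder -> slow mode
print(f"{len(imfs)} IMFs; fast/medium/slow variances: "
      f"{fast.var():.3f} / {medium.var():.3f} / {slow.var():.3f}")
```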
Ximenes, Ricardo Arraes de Alencar; Pereira, Leila Maria Beltrão; Martelli, Celina Maria Turchi; Merchán-Hamann, Edgar; Stein, Airton Tetelbom; Figueiredo, Gerusa Maria; Braga, Maria Cynthia; Montarroyos, Ulisses Ramos; Brasil, Leila Melo; Turchi, Marília Dalva; Fonseca, José Carlos Ferraz da; Lima, Maria Luiza Carvalho de; Alencar, Luis Cláudio Arraes de; Costa, Marcelo; Coral, Gabriela; Moreira, Regina Celia; Cardoso, Maria Regina Alves
2010-09-01
A population-based survey to provide information on the prevalence of hepatitis viral infection and the pattern of risk factors was carried out in the urban population of all Brazilian state capitals and the Federal District, between 2005 and 2009. This paper describes the design and methodology of the study which involved a population aged 5 to 19 for hepatitis A and 10 to 69 for hepatitis B and C. Interviews and blood samples were obtained through household visits. The sample was selected using stratified multi-stage cluster sampling and was drawn with equal probability from each domain of study (region and age-group). Nationwide, 19,280 households and ~31,000 residents were selected. The study is large enough to detect prevalence of viral infection around 0.1% and risk factor assessments within each region. The methodology seems to be a viable way of differentiating between distinct epidemiological patterns of hepatitis A, B and C. These data will be of value for the evaluation of vaccination policies and for the design of control program strategies.
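A one-stage sketch of probability-proportional-to-size (PPS) selection with toy cluster sizes; numpy's weighted without-replacement draw is only an approximation to formal PPS-without-replacement schemes.

```python
import numpy as np

# One stage of probability-proportional-to-size (PPS) cluster sampling:
# census tracts are drawn with probability proportional to household count.
# The sizes below are assumed toy values.
rng = np.random.default_rng(10)
tract_households = np.array([120, 560, 980, 300, 1500, 740, 210, 890])

p = tract_households / tract_households.sum()
chosen = rng.choice(len(tract_households), size=3, replace=False, p=p)
print("sampled tracts:", chosen)

# With PPS at stage 1 and a fixed number of households per sampled tract at
# stage 2, every household has (approximately) the same overall inclusion
# probability, which is what makes the design roughly self-weighting.
```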
Whisman, Mark A
2016-12-01
Prior research has found that humiliating marital events are associated with depression. Building on this research, the current study investigated the association between one specific humiliating marital event (discovering that one's partner had an affair) and past-year major depressive episode (MDE) in a probability sample of married or cohabiting men and women who were at high risk for depression based on the criterion that they scored below the midpoint on a measure of marital satisfaction (N = 227). Results indicate that (i) women were more likely than men to report discovering their partner had an affair in the prior 12 months; (ii) discovering a partner affair was associated with a higher prevalence of past-year MDE and a lower level of marital adjustment; and (iii) the association between discovering a partner affair and MDE remained statistically significant when holding constant demographic variables and marital adjustment. These results support continued investigation into the impact that finding out about an affair has on the mental health of the person discovering a partner affair. © 2015 Family Process Institute.
George, Amanda M; Olesen, Sarah; Tait, Robert J
2013-10-01
Longitudinal, population-based studies can better assess the relationship of ecstasy use with depression. We examined whether change in ecstasy use was associated with change in depressive symptoms/probable depression over a 4-year period, among a large Australian sample. The Personality and Total Health project is a longitudinal general community study of Australians from Canberra and Queanbeyan. Data from the youngest cohort when aged 24-30 (N = 2,128) and 4 years later (N = 1,977) were included. The Goldberg depression scale and the Brief Patient Health Questionnaire measured depressive symptoms and probable depression, respectively. Multilevel growth models also considered demographics, psychosocial characteristics, and other drug use. Ecstasy use was not associated with long-term depressive symptoms or greater odds of depression in multivariate analyses. Users had more self-reported depressive symptoms when using ecstasy compared to not using. However, differences between people who had and had not ever used ecstasy largely accounted for this. Other factors were more important in the prediction of depression. It would be premature to conclude that ecstasy use is not related to the development of long-term depressive symptoms, given the relatively low level of ecstasy and other drug use in this community sample. Results showed that other factors need to be considered when investigating ecstasy use and depression.
Nicholson, Wayne L
2003-12-01
Thermal inactivation kinetics with extrapolation were used to model the survival probabilities of spores of various Bacillus species over time periods of millions of years at the historical ambient temperatures (25-40 degrees C) encountered within the 250 million-year-old Salado formation, from which the putative ancient spore-forming bacterium Salibacillus marismortui strain 2-9-3 was recovered. The model indicated extremely low-to-moderate survival probabilities for spores of mesophiles, but surprisingly high survival probabilities for thermophilic spores. The significance of the results is discussed in terms of the survival probabilities of (i) terrestrial spores in ancient geologic samples and (ii) spores transported between planets within impact ejecta.
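The extrapolation logic can be sketched with a log-linear inactivation model in which the decimal-reduction time D shrinks tenfold for every z degrees of warming; every parameter value below is assumed for illustration rather than taken from the paper's fits.

```python
# Log-linear (first-order) thermal inactivation with extrapolation:
# survivors fall one log10 per decimal-reduction time D(T).
# d_ref_years, temp_ref_c, and z_c are assumed illustrative parameters.
def log10_surviving_fraction(t_years, temp_c,
                             d_ref_years=1e6, temp_ref_c=25.0, z_c=10.0):
    d_t = d_ref_years * 10.0 ** ((temp_ref_c - temp_c) / z_c)  # D-value at T
    return -t_years / d_t

n0 = 1e12  # assumed initial spore population
for temp_c in (25.0, 40.0):
    lg = log10_surviving_fraction(250e6, temp_c)   # 250 Myr of storage
    survivors = n0 * 10.0 ** lg if lg > -300.0 else 0.0  # avoid underflow
    print(f"T={temp_c:.0f} C: log10(surviving fraction) = {lg:.0f}, "
          f"expected survivors of 1e12 spores ~ {survivors:.1e}")
```

The steep temperature sensitivity in the exponent is what drives the paper's contrast between mesophilic and thermophilic spores: a modestly larger effective D at the storage temperature changes the extrapolated survival by hundreds of orders of magnitude.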
NASA Astrophysics Data System (ADS)
Lanni, Cristiano; Mazzorana, Bruno; Volcan, Claudio; Bertagnolli, Rudi
2015-04-01
Flood hazard is generally assessed by assuming the return period of the rainfall as a proxy for the return period of the discharge and the related hydrograph. This deterministic view is frequently extended to the straightforward application of hydrodynamic models. However, the climate (i.e. precipitation), the catchment (i.e. geology, soil and antecedent soil-moisture condition) and the anthropogenic (i.e. drainage system and its regulation) systems interact in a complex way, and the occurrence probability of a flood inundation event can differ significantly from the occurrence probability of the triggering event (i.e. rainfall). In order to reliably determine the spatial patterns of flood intensities and probabilities, the rigorous determination of flood event scenarios is beneficial because it provides a clear, rational method to recognize and unveil the inherent stochastic behavior of natural processes. Therefore, a multi-scenario approach for hazard assessment should be applied that considers the possible events taking place in the area potentially subject to flooding (i.e. floodplains). Here, we apply a multi-scenario approach to the assessment of the flood hazard around Lake Idro (Italy). We consider and estimate the probability of occurrence of several scenarios related to the initial (i.e. initial water level in the lake) and boundary (i.e. shape of the hydrograph, downslope drainage, spillway opening operations) conditions characterizing the lake. Finally, we discuss the advantages and issues of the presented methodological procedure compared to traditional (and essentially deterministic) approaches.
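The aggregation step of such a multi-scenario assessment is a direct application of the law of total probability; the scenario probabilities and conditional exceedance probabilities below are assumed examples, not the Lake Idro values.

```python
import numpy as np

# Combining scenarios by total probability: the annual probability that
# flooding exceeds a given intensity sums scenario occurrence probabilities
# weighted by conditional exceedance probabilities. All numbers are assumed.
scenario_prob = np.array([0.010, 0.004, 0.001])   # P(scenario_i) per year
exceed_given = np.array([0.20, 0.60, 0.95])       # P(depth > 1 m | scenario_i)

p_exceed = float(np.sum(scenario_prob * exceed_given))
print(f"annual P(depth > 1 m) = {p_exceed:.5f}")
```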
Disentangling sampling and ecological explanations underlying species-area relationships
Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Alpizar-Jara, R.; Flather, C.H.
2002-01-01
We used a probabilistic approach to address the influence of sampling artifacts on the form of species-area relationships (SARs). We developed a model in which the increase in observed species richness is a function of sampling effort exclusively. We assumed that effort depends on the area sampled, and we generated species-area curves under that model. These curves can look realistic. We then generated SARs from avian data, comparing SARs based on counts with those based on richness estimates. We used an approach to estimation of species richness that accounts for species detection probability and, hence, for variation in sampling effort. The slopes of SARs based on counts are steeper than those of curves based on estimates of richness, indicating that the former partly reflect a failure to account for species detection probability. SARs based on estimates reflect ecological processes exclusively, not sampling processes. This approach permits investigation of ecologically relevant hypotheses. The slope of SARs is not influenced by the slope of the relationship between habitat diversity and area. In situations in which not all of the species are detected during sampling sessions, approaches to estimation of species richness integrating species detection probability should be used to investigate the rate of increase in species richness with area.
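As a concrete example of richness estimation that accounts for imperfect detection, the sketch below applies the bias-corrected Chao2 estimator to simulated detection histories; it is a simple stand-in for the estimators the authors used, and all numbers are assumed.

```python
import numpy as np

# Detection-corrected richness via the bias-corrected Chao2 estimator:
# species seen in exactly one visit (Q1) and exactly two visits (Q2)
# drive the correction. The detection matrix is simulated, assumed data.
rng = np.random.default_rng(5)
true_richness, visits, p_detect = 60, 8, 0.25
detections = rng.random((true_richness, visits)) < p_detect  # species x visit

s_obs = detections.any(axis=1).sum()          # species detected at least once
counts = detections.sum(axis=1)               # visits in which each was seen
q1 = int(np.sum(counts == 1))
q2 = int(np.sum(counts == 2))
s_chao2 = s_obs + q1 * (q1 - 1) / (2 * (q2 + 1))
print(f"observed: {s_obs}, Chao2 estimate: {s_chao2:.1f}, truth: {true_richness}")
```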
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
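The underlying observation is easy to verify numerically: amplitude samples of a sinusoid taken uniformly over a full cycle follow the arcsine density f(x) = 1/(π√(1 − x²)), which a histogram approximates. The modulation scheme itself (conveying bits by switching among waveforms with distinguishable PDFs) is not reproduced here; this only shows the PDF construction.

```python
import numpy as np

# Histogram of sinusoid samples versus the arcsine density they approach.
t = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
x = np.sin(t)                                   # uniform-in-time samples

hist, edges = np.histogram(x, bins=20, range=(-1.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
arcsine = 1.0 / (np.pi * np.sqrt(1.0 - centers ** 2))  # theoretical PDF

for c, h, a in list(zip(centers, hist, arcsine))[::5]:  # print a few bins
    print(f"x={c:+.2f}  histogram={h:.3f}  arcsine pdf={a:.3f}")
```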
USDA-ARS?s Scientific Manuscript database
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Negative Correlates of Part-Time Employment during Adolescence: Replication and Elaboration.
ERIC Educational Resources Information Center
Steinberg, Laurence; Dornbusch, Sanford M.
This study examined the relation between part-time employment and adolescent behavior and development in a multi-ethnic, multi-class sample of approximately 4,000 15- through 18-year-olds. The results indicated that long work hours during the school year were associated with diminished investment in schooling and lowered school performance,…
Robinson, Thomas N.; Matheson, Donna; Desai, Manisha; Wilson, Darrell M.; Weintraub, Dana L.; Haskell, William L.; McClain, Arianna; McClure, Samuel; Banda, Jorge; Sanders, Lee M.; Haydel, K. Farish; Killen, Joel D.
2013-01-01
Objective: To test the effects of a three-year, community-based, multi-component, multi-level, multi-setting (MMM) approach for treating overweight and obese children. Design: Two-arm, parallel group, randomized controlled trial with measures at baseline, 12, 24, and 36 months after randomization. Participants: Seven- through eleven-year-old overweight and obese children (BMI ≥ 85th percentile) and their parents/caregivers recruited from community locations in low-income, primarily Latino neighborhoods in Northern California. Interventions: Families are randomized to the MMM intervention versus a community health education active-placebo comparison intervention. Interventions last for three years for each participant. The MMM intervention includes a community-based after-school team sports program designed specifically for overweight and obese children, a home-based family intervention to reduce screen time, alter the home food/eating environment, and promote self-regulatory skills for eating and activity behavior change, and a primary care behavioral counseling intervention linked to the community and home interventions. The active-placebo comparison intervention includes semi-annual health education home visits, monthly health education newsletters for children and for parents/guardians, and a series of community-based health education events for families. Main Outcome Measure: Body mass index trajectory over the three-year study. Secondary outcome measures include waist circumference, triceps skinfold thickness, accelerometer-measured physical activity, 24-hour dietary recalls, screen time and other sedentary behaviors, blood pressure, fasting lipids, glucose, insulin, hemoglobin A1c, C-reactive protein, alanine aminotransferase, and psychosocial measures. Conclusions: The Stanford GOALS trial is testing the efficacy of a novel community-based multi-component, multi-level, multi-setting treatment for childhood overweight and obesity in low-income, Latino families. PMID:24028942
Schloesser, J.T.; Paukert, Craig P.; Doyle, W.J.; Hill, Tracy D.; Steffensen, K.D.; Travnichek, Vincent H.
2012-01-01
Occupancy modeling was used to determine (1) if detection probabilities (p) for 7 regionally imperiled Missouri River fishes (Scaphirhynchus albus, Scaphirhynchus platorynchus, Cycleptus elongatus, Sander canadensis, Macrhybopsis aestivalis, Macrhybopsis gelida, and Macrhybopsis meeki) differed among gear types (i.e. stationary gill nets, drifted trammel nets, and otter trawls), and (2) how detection probabilities were affected by habitat (i.e. pool, bar, and open water), longitudinal position (five 189 to 367 rkm long segments), sampling year (2003 to 2006), and season (July 1 to October 30 and October 31 to June 30). Adult, large-bodied fishes were best detected with gill nets (p: 0.02–0.74), but most juvenile large-bodied and all small-bodied species were best detected with otter trawls (p: 0.02–0.58). Trammel nets may be a redundant sampling gear for imperiled fishes in the lower Missouri River because most species had greater detection probabilities with gill nets or otter trawls. Detection probabilities varied with river segment for S. platorynchus, C. elongatus, and all small-bodied fishes, suggesting that changes in habitat influenced gear efficiency or abundance changes among river segments. Detection probabilities varied by habitat for adult S. albus and S. canadensis, year for juvenile S. albus, C. elongatus, and S. canadensis, and season for adult S. albus. Concentrating sampling effort on gears with the greatest detection probabilities may increase species detections to better monitor a population's response to environmental change and the effects of management actions on large-river fishes.
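A minimal single-season occupancy likelihood separating occupancy (ψ) from detection probability (p), the quantity compared across gear types above, can be fit in a few lines. The detection histories below are simulated, and the binomial coefficient is omitted from the likelihood because it does not depend on the parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Single-season occupancy model: psi = P(site occupied), p = P(detect | occupied).
# Detection histories are simulated, assumed data.
rng = np.random.default_rng(8)
sites, visits, psi_true, p_true = 200, 4, 0.6, 0.3
occupied = rng.random(sites) < psi_true
y = rng.binomial(visits, p_true * occupied)     # detections per site

def nll(params):
    psi, p = 1.0 / (1.0 + np.exp(-params))      # logit -> probability
    lik_det = psi * p ** y * (1 - p) ** (visits - y)        # sites with y > 0
    lik_nodet = psi * (1 - p) ** visits + (1 - psi)         # never detected
    lik = np.where(y > 0, lik_det, lik_nodet)
    return -np.log(lik).sum()

res = minimize(nll, x0=[0.0, 0.0])
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-res.x))
print(f"psi_hat={psi_hat:.3f} (true {psi_true}), p_hat={p_hat:.3f} (true {p_true})")
```

The never-detected term is what separates "absent" from "present but missed"; with a single visit the two are confounded, which is why repeat visits (or multiple gears) are needed to estimate p.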
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik
2015-10-01
We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.
The MPLEx Protocol for Multi-omic Analyses of Soil Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicora, Carrie D.; Burnum-Johnson, Kristin E.; Nakayasu, Ernesto S.
Mass spectrometry (MS)-based integrated metaproteomic, metabolomic and lipidomic (multi-omic) studies are transforming our ability to understand and characterize microbial communities in environmental and biological systems. These measurements are even enabling enhanced analyses of complex soil microbial communities, which are the most complex microbial systems known to date. Multi-omic analyses, however, do have sample preparation challenges since separate extractions are typically needed for each omic study, thereby greatly amplifying the preparation time and amount of sample required. To address this limitation, a 3-in-1 method for simultaneous metabolite, protein, and lipid extraction (MPLEx) from the exact same soil sample was created by adapting a solvent-based approach. This MPLEx protocol has proven to be simple yet robust for many sample types and even when utilized for limited quantities of complex soil samples. The MPLEx method also greatly enabled the rapid multi-omic measurements needed to gain a better understanding of the members of each microbial community, while evaluating the changes taking place upon biological and environmental perturbations.
Farmer, William H.; Koltun, Greg
2017-01-01
Study region: The state of Ohio in the United States, a humid, continental climate. Study focus: The estimation of nonexceedance probabilities of daily streamflows as an alternative means of establishing the relative magnitudes of streamflows associated with hydrologic and water-quality observations. New hydrological insights for the region: Several methods for estimating nonexceedance probabilities of daily mean streamflows are explored, including single-index methodologies (nearest-neighbor index) and geospatial tools (kriging and topological kriging). These methods were evaluated by conducting leave-one-out cross-validations based on analyses of nearly 7 years of daily streamflow data from 79 unregulated streamgages in Ohio and neighboring states. The pooled, ordinary kriging model, with a median Nash–Sutcliffe performance of 0.87, was superior to the single-site index methods, though there was some bias in the tails of the probability distribution. Incorporating network structure through topological kriging did not improve performance. The pooled, ordinary kriging model was applied to 118 locations without systematic streamgaging across Ohio where instantaneous streamflow measurements had been made concurrent with water-quality sampling on at least 3 separate days. Spearman rank correlations between estimated nonexceedance probabilities and measured streamflows were high, with a median value of 0.76. With regard to application, the degree of regulation at a set of sample sites helped determine which streamgages were required to implement the kriging approaches successfully.
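As a concrete anchor for the quantity being regionalized: at a gaged site, a day's nonexceedance probability is simply its empirical rank within the record. A minimal sketch using the Weibull plotting position (hypothetical flows, not the study's data):

import numpy as np

def nonexceedance_probabilities(flows: np.ndarray) -> np.ndarray:
    """Empirical nonexceedance probability via the Weibull plotting
    position, P = rank / (n + 1), with rank 1 for the smallest flow."""
    ranks = flows.argsort().argsort() + 1  # 1 = smallest
    return ranks / (len(flows) + 1.0)

daily_flows = np.array([12.0, 3.5, 48.2, 7.1, 22.9])  # hypothetical m^3/s
print(nonexceedance_probabilities(daily_flows))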
An analysis of switching and non-switching slot machine player behaviour.
Coates, Ewan; Blaszczynski, Alex
2013-12-01
Learning theory predicts that, given the repeated choice to bet between two concurrently available slot machines, gamblers will learn to bet more money on the machine with the higher expected return (payback percentage) or higher win probability per spin (volatility). The purpose of this study was to investigate whether this occurs when the two machines vary orthogonally on payback percentage and volatility. The sample comprised 52 first-year psychology students (mean age = 20.3 years, 20 females, 32 males) who had played a gaming machine at least once in the previous 12 months. Participants were administered a battery of questionnaires designed to assess level of knowledge of the characteristics and operation of poker machines, frequency of poker machine play in the past 12 months, personality traits of impulsivity and capacity for cognitive reflection, and gambling beliefs. For the experimental task, participants were instructed to play on two PC-simulated electronic gaming machines (EGMs or slot machines) that differed on payback percentage and volatility, with the option of freely switching between EGMs after a practice phase. Results indicated that participants were able to easily discriminate between machines and manifested a preference for playing machines offering higher payback or volatility. These findings diverged from previous findings of no preference for play on higher payback/volatility machines, potentially due to the current study's absence of the option to make multi-line and multi-credit bets. It was concluded that return rate parameters like payback percentage and volatility strongly influenced slot machine preference in the absence of betting options like multi-line bets, though more research is needed to determine the effects of such betting options on players' distribution of money between multiple EGMs.
Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H
2016-01-01
Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869
Forensic applicability of multi-allelic InDels with mononucleotide homopolymer structures.
Zhang, Shu; Zhu, Qiang; Chen, Xiaogang; Zhao, Yuancun; Zhao, Xiaohong; Yang, Yiwen; Gao, Zehua; Fang, Ting; Wang, Yufang; Zhang, Ji
2018-04-27
Insertion/deletion polymorphisms (InDels), which possess the characteristics of low mutation rates and a short amplicon size, have been regarded as promising markers for forensic DNA analysis. InDels can be classified as bi-allelic or multi-allelic, depending on the number of alleles. Many studies have explored the use of bi-allelic InDels in forensic applications, such as individual identification and ancestry inference. However, multi-allelic InDels have received relatively little attention. In this study, InDels with 2-6 alleles and a minor allele frequency ≥0.01, in Chinese Southern Han (CHS), were retrieved from the 1000 Genomes Project Phase III. Based on the structural analysis of all retrieved InDels, 17 multi-allelic markers with mononucleotide homopolymer structures were selected and combined in one multiplex PCR reaction system. Sensitivity, species specificity and applicability in forensic case work of the multiplex were analyzed. A total of 218 unrelated individuals from a Chinese Han population were genotyped. The combined discriminatory power (CDP), the combined match probability (CMP) and the cumulative probability of exclusion (CPE) were 0.9999999999609, 3.91E-13 and 0.9956, respectively. The results demonstrated that this InDel multiplex panel was highly informative in the investigated population and most of the 26 populations of the 1000 Genomes Project. The data also suggested that multi-allelic InDel markers with monomeric base pair expansions are useful for forensic applications.
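For orientation on the headline statistics: per-locus match probabilities combine multiplicatively into the CMP, and the combined discriminatory power is its complement. A minimal sketch under the simplifying assumption that per-locus genotype frequencies are known (hypothetical frequencies, not the CHS data):

import numpy as np

def locus_match_probability(genotype_freqs):
    """Random match probability at one locus: the sum of squared
    genotype frequencies."""
    g = np.asarray(genotype_freqs)
    return float(np.sum(g ** 2))

# Hypothetical genotype frequencies at three loci (each row sums to 1).
loci = [
    [0.40, 0.35, 0.25],
    [0.50, 0.30, 0.20],
    [0.60, 0.25, 0.15],
]
mps = [locus_match_probability(l) for l in loci]
cmp_ = np.prod(mps)   # combined match probability across loci
cdp = 1.0 - cmp_      # combined discriminatory power
print(f"CMP = {cmp_:.3e}, CDP = {cdp:.10f}")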
The impacts of recent smoking control policies on individual smoking choice: the case of Japan
2013-01-01
This article comprehensively examines the impact of recent smoking control policies in Japan (increases in cigarette taxes and the enforcement of the Health Promotion Law) on individual smoking choice, using multi-year and nationwide individual survey data to overcome the analytical problems of previous Japanese studies. In the econometric analyses, I specify a simple binary choice model based on a random utility model to examine the effects of smoking control policies on individual smoking choice, employing the instrumental variable probit model to control for the endogeneity of cigarette prices. The empirical results show that an increase in cigarette prices statistically significantly reduces the smoking probability of males by 1.0 percent and that of females by 1.4 to 2.0 percent. The enforcement of the Health Promotion Law has a statistically significant effect on reducing the smoking probability of males by 15.2 percent and of females by 11.9 percent. Furthermore, an increase in cigarette prices has a statistically significant negative effect on the smoking probability of office workers, non-workers, male manual workers, and female unemployed people, and the enforcement of the Health Promotion Law has a statistically significant effect on decreasing the smoking probabilities of office workers, female manual workers, and male non-workers. JEL classification C25, C26, I18 PMID:23497490
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability, so MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
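To make the word-level versus bit-level distinction concrete, the toy sketch below (not the chapter's algorithm) contrasts the ML codeword decision with bitwise MAP decisions computed from codeword posteriors; the code and posterior values are hypothetical.

import numpy as np

# Toy length-3 code with 4 codewords and hypothetical posteriors.
codewords = np.array([[0, 0, 0],
                      [0, 1, 1],
                      [1, 0, 1],
                      [1, 1, 0]])
posteriors = np.array([0.40, 0.30, 0.20, 0.10])  # P(codeword | received)

# ML sequence decision: the single most likely codeword.
ml_codeword = codewords[posteriors.argmax()]

# Bitwise MAP decision: P(bit=1 | received) is the total posterior mass
# of codewords carrying a 1 in that position; decide 1 if it exceeds 1/2.
p_bit_one = posteriors @ codewords          # soft output per bit position
map_bits = (p_bit_one > 0.5).astype(int)

print("ML codeword :", ml_codeword)
print("P(bit=1|r)  :", p_bit_one)           # the soft information MAP retains
print("MAP bits    :", map_bits)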
Effects of sampling conditions on DNA-based estimates of American black bear abundance
Laufenberg, Jared S.; Van Manen, Frank T.; Clark, Joseph D.
2013-01-01
DNA-based capture-mark-recapture techniques are commonly used to estimate American black bear (Ursus americanus) population abundance (N). Although the technique is well established, many questions remain regarding study design. In particular, relationships among N, capture probability of heterogeneity mixtures A and B (pA and pB, respectively, or p, collectively), the proportion of each mixture (π), number of capture occasions (k), and probability of obtaining reliable estimates of N are not fully understood. We investigated these relationships using 1) an empirical dataset of DNA samples for which true N was unknown and 2) simulated datasets with known properties that represented a broader array of sampling conditions. For the empirical data analysis, we used the full closed population with heterogeneity data type in Program MARK to estimate N for a black bear population in Great Smoky Mountains National Park, Tennessee. We systematically reduced the number of those samples used in the analysis to evaluate the effect that changes in capture probabilities may have on parameter estimates. Model-averaged N for females and males were 161 (95% CI = 114–272) and 100 (95% CI = 74–167), respectively (pooled N = 261, 95% CI = 192–419), and the average weekly p was 0.09 for females and 0.12 for males. When we reduced the number of samples of the empirical data, support for heterogeneity models decreased. For the simulation analysis, we generated capture data with individual heterogeneity covering a range of sampling conditions commonly encountered in DNA-based capture-mark-recapture studies and examined the relationships between those conditions and accuracy (i.e., probability of obtaining an estimated N that is within 20% of true N), coverage (i.e., probability that the 95% confidence interval includes true N), and precision (i.e., probability of obtaining a coefficient of variation ≤20%) of estimates using logistic regression. The capture probability for the larger of 2 mixture proportions of the population (i.e., pA or pB, depending on the value of π) was most important for predicting accuracy and precision, whereas capture probabilities of both mixture proportions (pA and pB) were important to explain variation in coverage. Based on sampling conditions similar to parameter estimates from the empirical dataset (pA = 0.30, pB = 0.05, N = 250, π = 0.15, and k = 10), predicted accuracy and precision were low (60% and 53%, respectively), whereas coverage was high (94%). Increasing pB, the capture probability for the predominant but most difficult to capture proportion of the population, was most effective to improve accuracy under those conditions. However, manipulation of other parameters may be more effective under different conditions. In general, the probabilities of obtaining accurate and precise estimates were best when p ≥ 0.2. Our regression models can be used by managers to evaluate specific sampling scenarios and guide development of sampling frameworks or to assess reliability of DNA-based capture-mark-recapture studies.
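The simulation side of such a study is compact to sketch: draw each bear's weekly capture probability from a two-point mixture and count how many are ever detected. The parameter values below echo the scenario quoted above (pA = 0.30, pB = 0.05, N = 250, π = 0.15, k = 10); this is an illustrative fragment, not the authors' analysis.

import numpy as np

rng = np.random.default_rng(42)

def simulate_captures(N=250, pi=0.15, pA=0.30, pB=0.05, k=10):
    """Simulate weekly capture histories under a two-point mixture of
    capture probabilities (pA for proportion pi, pB for 1 - pi), and
    return the number of individuals detected at least once."""
    p = np.where(rng.random(N) < pi, pA, pB)       # each bear's weekly p
    histories = rng.random((N, k)) < p[:, None]    # N x k capture matrix
    return histories.any(axis=1).sum()

# Fraction of the population ever detected under the abstract's scenario.
print(simulate_captures() / 250)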
A long-lived lunar core dynamo.
Shea, Erin K; Weiss, Benjamin P; Cassata, William S; Shuster, David L; Tikoo, Sonia M; Gattacceca, Jérôme; Grove, Timothy L; Fuller, Michael D
2012-01-27
Paleomagnetic measurements indicate that a core dynamo probably existed on the Moon 4.2 billion years ago. However, the subsequent history of the lunar core dynamo is unknown. Here we report paleomagnetic, petrologic, and (40)Ar/(39)Ar thermochronometry measurements on the 3.7-billion-year-old mare basalt sample 10020. This sample contains a high-coercivity magnetization acquired in a stable field of at least ~12 microteslas. These data extend the known lifetime of the lunar dynamo by 500 million years. Such a long-lived lunar dynamo probably required a power source other than thermochemical convection from secular cooling of the lunar interior. The inferred strong intensity of the lunar paleofield presents a challenge to current dynamo theory.
Cummings, E. Mark; George, Melissa R. W.; McCoy, Kathleen P.; Davies, Patrick T.
2012-01-01
Advancing the long-term prospective study of explanations for the effects of marital conflict on children’s functioning, relations were examined between interparental conflict in kindergarten, children’s emotional insecurity in the early school years, and subsequent adolescent internalizing and externalizing problems. Based on a community sample of 235 mothers, fathers and children (M = 6.00, 8.02, 12.62 years), and multi-method and multi-reporter assessments, structural equation model (SEM) tests provided support for emotional insecurity in early childhood as an intervening process related to adolescent internalizing and externalizing problems, even with stringent auto-regressive controls over prior levels of functioning for both mediating and outcome variables. Discussion considers implications for understanding pathways between interparental conflict, emotional insecurity and adjustment in childhood and adolescence. PMID:22694264
Rain attenuation measurements: Variability and data quality assessment
NASA Technical Reports Server (NTRS)
Crane, Robert K.
1989-01-01
Year to year variations in the cumulative distributions of rain rate or rain attenuation are evident in any of the published measurements for a single propagation path that span a period of several years of observation. These variations must be described by models for the prediction of rain attenuation statistics. Now that a large measurement data base has been assembled by the International Radio Consultative Committee, the information needed to assess variability is available. On the basis of 252 sample cumulative distribution functions for the occurrence of attenuation by rain, the expected year to year variation in attenuation at a fixed probability level in the 0.1 to 0.001 percent of a year range is estimated to be 27 percent. The expected deviation from an attenuation model prediction for a single year of observations is estimated to exceed 33 percent when any of the available global rain climate models are employed to estimate the rain rate statistics. The probability distribution for the variation in attenuation or rain rate at a fixed fraction of a year is lognormal. The lognormal behavior of the variate was used to compile the statistics for variability.
RADIAL VELOCITY VARIABILITY OF FIELD BROWN DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prato, L.; Mace, G. N.; Rice, E. L.
2015-07-20
We present paper six of the NIRSPEC Brown Dwarf Spectroscopic Survey, an analysis of multi-epoch, high-resolution (R ∼ 20,000) spectra of 25 field dwarf systems (3 late-type M dwarfs, 16 L dwarfs, and 6 T dwarfs) taken with the NIRSPEC infrared spectrograph at the W. M. Keck Observatory. With a radial velocity (RV) precision of ∼2 km s⁻¹, we are sensitive to brown dwarf companions in orbits with periods of a few years or less given a mass ratio of 0.5 or greater. We do not detect any spectroscopic binary brown dwarfs in the sample. Given our target properties, and the frequency and cadence of observations, we use a Monte Carlo simulation to determine the detection probability of our sample. Even with a null detection result, our 1σ upper limit for very low mass binary frequency is 18%. Our targets included seven known, wide brown dwarf binary systems. No significant RV variability was measured in our multi-epoch observations of these systems, even for those pairs for which our data spanned a significant fraction of the orbital period. Specialized techniques are required to reach the high precisions sensitive to motion in orbits of very low-mass systems. For eight objects, including six T dwarfs, we present the first published high-resolution spectra, many with high signal to noise, that will provide valuable comparison data for models of brown dwarf atmospheres.
Ouyang, Zu-Tao; Gao, Yu; Xie, Xiao; Guo, Hai-Qiang; Zhang, Ting-Ting; Zhao, Bin
2013-01-01
Spartina alterniflora has widely invaded the saltmarshes of the Yangtze River Estuary and brought negative effects to the ecosystem. Remote sensing techniques have recently been used to monitor its distribution, but the similar morphology and canopy structure of S. alterniflora and its neighbor species make this difficult even with high-resolution images. Nevertheless, these species diverge in their phenological stages throughout the year, which produces distinctive spectral characteristics among them and provides opportunities for discrimination. The field spectra of the S. alterniflora community as well as its major victims, native Phragmites australis and Scirpus mariqueter, were measured in 2009 and 2010 at multiple phenological stages in the Yangtze River Estuary, aiming to find the most appropriate periods for mapping S. alterniflora. Collected spectral data were analyzed separately for every stage, first by re-sampling reflectance curves into continuous 5-nm-wide hyper-spectral bands and then by re-sampling into broad multi-spectral bands matching the band ranges of the TM sensor, as well as by calculating commonly used vegetation indices. The results showed that differences among the saltmarsh communities' spectral characteristics were affected by their phenological stages. The germination and early vegetative growth stage and the flowering stage were probably the best periods in which to identify S. alterniflora. Vegetation indices like NDVI, ANVI, VNVI, and RVI are likely to enhance spectral separability and also make it possible to discriminate S. alterniflora at its withering stage. PMID:23826265
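Two of the indices named above have standard definitions, NDVI = (NIR - Red)/(NIR + Red) and RVI = NIR/Red; ANVI and VNVI are study-specific variants not reproduced here. A minimal sketch with hypothetical band-averaged reflectances:

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def rvi(nir, red):
    """Ratio Vegetation Index."""
    return np.asarray(nir, float) / np.asarray(red, float)

# Hypothetical band-averaged reflectances for two canopies at one stage.
canopies = {"S. alterniflora": (0.42, 0.06), "P. australis": (0.35, 0.09)}
for name, (nir, red) in canopies.items():
    print(name, f"NDVI={ndvi(nir, red):.2f}", f"RVI={rvi(nir, red):.2f}")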
Grant, Evan H. Campbell; Zipkin, Elise; Sillett, T. Scott; Chandler, Richard; Royle, J. Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales.
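The building block these models extend is the binomial-Poisson N-mixture likelihood for repeated counts at one site: N ~ Poisson(λ) and y_j | N ~ Binomial(N, p). A generic sketch with hypothetical numbers (not the paper's multistate extension):

import numpy as np
from scipy.stats import binom, poisson

def nmixture_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of repeated counts at one site under the basic
    binomial-Poisson N-mixture model, marginalizing the latent N over
    0..n_max."""
    n = np.arange(0, n_max + 1)
    prior = poisson.pmf(n, lam)               # P(N = n)
    lik_given_n = np.ones_like(prior)
    for y in counts:                          # product over repeat visits
        lik_given_n = lik_given_n * binom.pmf(y, n, p)
    return np.log(np.sum(prior * lik_given_n))

# Hypothetical counts from three visits, with lambda = 30 and p = 0.4.
print(nmixture_loglik([12, 9, 15], lam=30.0, p=0.4))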
NASA Astrophysics Data System (ADS)
Mercier, Lény; Panfili, Jacques; Paillon, Christelle; N'diaye, Awa; Mouillot, David; Darnaude, Audrey M.
2011-05-01
Accurate knowledge of fish age and growth is crucial for species conservation and management of exploited marine stocks. In exploited species, age estimation based on otolith reading is routinely used for building growth curves that are used to implement fishery management models. However, the universal fit of the von Bertalanffy growth function (VBGF) on data from commercial landings can lead to uncertainty in growth parameter inference, preventing accurate comparison of growth-based life-history traits between fish populations. In the present paper, we used a comprehensive annual sample of wild gilthead seabream (Sparus aurata L.) in the Gulf of Lions (France, NW Mediterranean) to test a methodology improving growth modelling for exploited fish populations. After validating the timing of otolith annual increment formation for all life stages, a comprehensive set of growth models (including VBGF) was fitted to the obtained age-length data, used as a whole or sub-divided between group 0 individuals and those coming from commercial landings (ages 1-6). Comparisons of growth model accuracy based on the Akaike Information Criterion allowed assessment of the best model for each dataset and, when no model correctly fitted the data, a multi-model inference (MMI) based on model averaging was carried out. The results provided evidence that growth parameters inferred with the VBGF must be used with great caution. The VBGF turned out to be among the least accurate models for growth prediction irrespective of the dataset, and its fits to the whole population, the juvenile, and the adult datasets provided different growth parameters. The best models for growth prediction were the Tanaka model, for group 0 juveniles, and the MMI, for the older fish, confirming that growth differs substantially between juveniles and adults. All asymptotic models failed to correctly describe the growth of adult S. aurata, probably because of the poor representation of old individuals in the dataset. Multi-model inference associated with separate analysis of juveniles and adult fish is therefore advised to obtain objective estimations of growth parameters when sampling cannot be corrected towards older fish.
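For reference, the VBGF is L(t) = L∞(1 - exp(-k(t - t0))), and MMI rests on Akaike weights computed from AIC differences. A minimal sketch with hypothetical parameter values and AIC scores (not the paper's estimates):

import numpy as np

def vbgf(age, L_inf, k, t0):
    """von Bertalanffy growth function: L(t) = L_inf * (1 - exp(-k (t - t0)))."""
    return L_inf * (1.0 - np.exp(-k * (np.asarray(age, float) - t0)))

# Hypothetical parameters for illustration only.
ages = np.arange(0, 7)
print(vbgf(ages, L_inf=45.0, k=0.35, t0=-0.4))

def akaike_weights(aics):
    """Akaike weights for model averaging (multi-model inference)."""
    d = np.asarray(aics, float) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

print(akaike_weights([102.3, 100.1, 105.9]))  # hypothetical AIC values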
NASA Astrophysics Data System (ADS)
Luther, Ed; Mendes, Livia; Pan, Jiayi; Costa, Daniel; Sarisozen, Can; Torchilin, Vladimir
2018-02-01
We rely on in vitro cellular cultures to evaluate the effects of the components of multifunctional nano-based formulations under development. We employ an incubator-adapted, label-free holographic imaging cytometer, HoloMonitor M4® (Phase Holographic Imaging, Lund, Sweden), to obtain multi-day time-lapse sequences at 5-minute intervals. An automated stage allows hands-free acquisition of multiple fields of view. Our system is based on the Mach-Zehnder interferometry principle to create interference patterns which are deconvolved to produce images of the optical thickness of the field of view. These images are automatically segmented, resulting in a full complement of quantitative morphological features, such as optical volume, thickness, and area, amongst many others. Precise XY cell locations and the time of acquisition are also recorded. Visualization is best achieved by novel 4-dimensional plots, where XY position is plotted over time (Z-direction) and cell thickness is coded as color or gray-scale brightness. Fundamental events of interest, i.e., cells undergoing mitosis or mitotic dysfunction, cell death, cell-to-cell interactions, and motility, are discernible. We use both 2D and 3D models of the tumor microenvironment. We report our new analysis method to track feature changes over time based on a 4-sample version of the Kolmogorov-Smirnov test. Feature A is compared to Control A, and Feature B is compared to Control B to give a 2D probability plot of the feature changes over time. As a result, we efficiently obtain vectors quantifying feature changes over time in various sample conditions, i.e., changing compound concentrations or multi-compound combinations.
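The building block of that 4-sample scheme is the ordinary two-sample Kolmogorov-Smirnov comparison, a single call in SciPy; the pairing of two such comparisons into a 2D probability plot is the authors' extension and is not reproduced here. A sketch with hypothetical feature values:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical optical-volume-like feature values for a treated sample
# and its control at one time point.
treated = rng.normal(loc=1.1, scale=0.2, size=300)
control = rng.normal(loc=1.0, scale=0.2, size=300)
stat, p_value = ks_2samp(treated, control)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")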
[Update on pubertal development among primary school students in Shanghai, 2014].
Chen, Y; Zhang, Y T; Chen, C; Jiang, Y R; Song, Y J; Liu, S J; Jiang, F
2016-11-06
Objective: To investigate the current prevalence of pubertal development in healthy Shanghai schoolchildren. Methods: This study was a cross-sectional investigation focused on current pubertal development conducted in healthy Shanghai schoolchildren by multi-stage cluster sampling. The sample included 17 571 children in grades 1-5 investigated in June 2014. The data were weighted by inverse probability weighting (IPW) to make them more representative. At examination, stages of breast and pubic hair development were rated according to the Tanner method. Testicular volume was determined. Data on menarche and spermatorrhea were collected by the status quo method. The rates of precocious puberty, breast, and pubic hair development of Tanner stage ≥Ⅱ in girls aged 6-7 years, menarche in girls aged 6-9 years, and testicular volume ≥4 ml and pubic hair development of Tanner stage ≥Ⅱ in boys aged 6-8 years were calculated. All the data were weighted by IPW. Results: After data processing, 16 197 children's data were analyzed. In girls aged 6-7 years, 17.2% and 2.5% showed evidence of breast and pubic hair development at Tanner stage ≥Ⅱ, respectively. In girls aged 6-9 years, 0.3% had experienced menarche. Schoolgirls' rate of menarche was 4.7%. In girls aged 6-7 years, 19.0% were diagnosed with precocious puberty according to the classic criteria. In boys aged 6-8 years, 1.7% had testicular volume ≥4 ml, and 0.6% showed evidence of pubic hair development at Tanner stage ≥Ⅱ. Schoolboys' incidence rate of spermatorrhea was 0.1%. In boys aged 6-8 years, 2.3% were diagnosed with precocious puberty according to the classic criteria. All the numbers above were weighted. Conclusion: Proper education on adolescence and sex is essential for Shanghai schoolchildren.
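The IPW step has a one-line core: each examined child is weighted by the reciprocal of his or her inclusion probability under the multi-stage cluster design, and weighted means replace raw means. A minimal sketch with hypothetical indicators and inclusion probabilities:

import numpy as np

def ipw_prevalence(outcomes, selection_probs):
    """Inverse-probability-weighted prevalence: each child is weighted
    by 1 / P(being sampled), then the weighted mean is taken."""
    w = 1.0 / np.asarray(selection_probs, float)
    y = np.asarray(outcomes, float)
    return np.sum(w * y) / np.sum(w)

# Hypothetical: 5 children, indicator of breast development >= Tanner II,
# with unequal inclusion probabilities from the cluster design.
print(ipw_prevalence([1, 0, 0, 1, 0], [0.02, 0.05, 0.05, 0.01, 0.03]))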
NASA Astrophysics Data System (ADS)
Taubenböck, H.; Wurm, M.; Netzband, M.; Zwenzner, H.; Roth, A.; Rahman, A.; Dech, S.
2011-02-01
Estimating flood risks and managing disasters combines knowledge in climatology, meteorology, hydrology, hydraulic engineering, statistics, planning and geography, and is thus a complex multi-faceted problem. This study focuses on the capabilities of multi-source remote sensing data to support decision-making before, during and after a flood event. With our focus on urbanized areas, sample methods and applications show multi-scale products from the hazard and vulnerability perspectives of the risk framework. From the hazard side, we present capabilities with which to assess flood-prone areas before an expected disaster. Then we map the spatial impact during or after a flood and finally, we analyze damage grades after a flood disaster. From the vulnerability side, we monitor urbanization over time on an urban footprint level, classify urban structures on an individual building level, assess building stability and quantify the number of probably affected people. The results provide a large database for sustainable development and for developing mitigation strategies, ad-hoc coordination of relief measures and organizing rehabilitation.
Extraterrestrial materials processing
NASA Technical Reports Server (NTRS)
Steurer, W. H.
1982-01-01
The first year results of a multi-year study of processing extraterrestrial materials for use in space are summarized. Theoretically, there are potential major advantages to be derived from the use of such materials for future space endeavors. The types of known or postulated starting raw materials are described, including silicate-rich mixed oxides on the Moon, some asteroids and Mars; free metals in some asteroids and in small quantities in the lunar soil; and probably volatiles like water and CO2 on Mars and some asteroids. Candidate processes for space materials are likely to be significantly different from their terrestrial counterparts, largely because of: absence of atmosphere; lack of readily available working fluids; low- or micro-gravity; no carbon-based fuels; readily available solar energy; and severe constraints on manned intervention. The extraction of metals and oxygen from lunar material by magma electrolysis or by vapor/ion phase separation appears practical.
Scheid, Anika; Nebel, Markus E
2012-07-09
Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we will consider the SCFG based approach in order to perform an analysis on how the quality of generated sample sets and the corresponding prediction accuracy changes when different degrees of disturbances are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst-case - without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.
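The perturbation experiment at the heart of this analysis is straightforward to prototype: disturb each exact sampling probability by an absolute or a relative error, clip, and renormalize before sampling. A minimal sketch with hypothetical production probabilities (not an SCFG implementation):

import numpy as np

rng = np.random.default_rng(1)

def perturb(probs, eps, mode="relative"):
    """Disturb a vector of sampling probabilities and renormalize.
    'relative': p_i * (1 + u_i); 'absolute': p_i + u_i, with
    u_i ~ Uniform(-eps, eps)."""
    p = np.asarray(probs, float)
    u = rng.uniform(-eps, eps, size=p.shape)
    q = p * (1.0 + u) if mode == "relative" else p + u
    q = np.clip(q, 1e-12, None)
    return q / q.sum()

exact = np.array([0.6, 0.3, 0.1])  # hypothetical production probabilities
print(perturb(exact, 0.2, "relative"))
print(perturb(exact, 0.2, "absolute"))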
Hem, Sopheak; Ly, Sowath; Votsi, Irene; Vogt, Florian; Asgari, Nima; Buchy, Philippe; Heng, Seiha; Picardeau, Mathieu; Sok, Touch; Ly, Sovann; Huy, Rekol; Guillard, Bertrand; Cauchemez, Simon; Tarantola, Arnaud
2016-01-01
Background Leptospirosis is an emerging but neglected public health challenge in the Asia/Pacific Region with an annual incidence estimated at 10–100 per 100,000 population. No accurate data, however, are available for at-risk rural Cambodian communities. Method We conducted anonymous, unlinked testing for IgM antibodies to Leptospira spp. on paired sera of Cambodian patients <20 years of age collected between 2007 and 2009 through active, community-based surveillance for febrile illnesses in a convenience sample of 27 rural and semi-rural villages in four districts of Kampong Cham province, Cambodia. Leptospirosis testing was done on randomly selected paired serological samples negative for Dengue, Japanese encephalitis and Chikungunya viruses. Convalescent samples found positive while initial samples were negative were considered as proof of acute infection. We then applied a mathematical model to estimate the risk of fever caused by leptospirosis, dengue or other causes in rural Cambodia. Results A total of 630 samples were randomly selected from 2358 available samples. Among the convalescent serum samples found IgM positive, 100 (15.8%) had been IgM negative on an earlier sample. Seventeen of these 100 seroconversions were confirmed using a Microagglutination Test. We estimated the probability of having a fever due to leptospirosis at 1.03% (95% Credible Interval, CI: 0.95%–1.22%) per semester. In comparison, this probability was 2.61% (95% CI: 2.55%–2.83%) for dengue and 17.65% (95% CI: 17.49%–18.08%) for other causes. Conclusion Our data from febrile cases aged below 20 years suggest that the burden of leptospirosis is high in rural Cambodian communities. This is especially true during the rainy season, even in the absence of identified epidemics. PMID:27043016
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
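The phenomenon is easy to exhibit by simulation: let the second sample's design depend on the first sample's result and the realized type I error probability need not equal the nominal level. The sketch below is a deliberately simple caricature of such dependence, not one of the paper's procedures; both populations are N(0,1), so the null is true.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def type1_adaptive(n1=50, reps=20_000, alpha=0.05):
    """Monte Carlo estimate of the type I error of a two-sample t-test
    when the second sample's size depends on the first sample's mean."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n1)
        n2 = 25 if x.mean() > 0 else 100   # design depends on first sample
        y = rng.normal(size=n2)
        if stats.ttest_ind(x, y).pvalue < alpha:
            rejections += 1
    return rejections / reps

print(type1_adaptive())  # compare against the nominal 0.05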
Comet and asteroid hazard to the terrestrial planets
NASA Astrophysics Data System (ADS)
Ipatov, S. I.; Mather, J. C.
2004-01-01
We estimated the rate of comet and asteroid collisions with the terrestrial planets by calculating the orbits of 13,000 Jupiter-crossing objects (JCOs) and 1300 resonant asteroids and computing the probabilities of collisions based on random-phase approximations and the orbital elements sampled with a 500-year step. The Bulirsch-Stoer and a symplectic orbit integrator gave similar results for orbital evolution, but may give different collision probabilities with the Sun. A small fraction of former JCOs reached orbits with aphelia inside Jupiter's orbit, and some reached Apollo orbits with semi-major axes less than 2 AU, Aten orbits, and inner-Earth orbits (with aphelia less than 0.983 AU) and remained there for millions of years. Though less than 0.1% of the total, these objects were responsible for most of the collision probability of former JCOs with Earth and Venus. We conclude that a significant fraction of near-Earth objects could be extinct comets that came from the trans-Neptunian region, or that most of such comets disintegrated during their motion in near-Earth object orbits.
NASA Astrophysics Data System (ADS)
Reilly, T. J.; Focazio, M. J.; Murdoch, P. S.; Benzel, W. M.; Fisher, S. C.; Griffin, D. W.; Iwanowicz, L. R.; Jones, D. K.; Loftin, K. A.
2014-12-01
Enhanced dispersion and concentration of contaminants such as trace metals and organic pollutants through storm-induced disturbances and sea level rise (SLR) are major factors that could adversely impact the health and resilience of communities and ecosystems in coming years. As part of the response to Hurricane Sandy, the U.S. Geological Survey collected data on the effects of contaminant source disturbance and dispersion. A major limitation of conducting pre- and post-Sandy comparisons was the lack of baseline data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where human and ecological exposures are probable. To address this limitation, a Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy with two operational modes, Resiliency (baseline) and Response (event-based), has been designed by leveraging existing interagency networks and resources. In Resiliency Mode, sites will be identified and sampled using standardized procedures prioritized to develop baseline data and to define sediment-quality based environmental health metrics. In Response Mode, a subset of sites within the network will be evaluated to ensure that adequate pre-event data exist at priority locations. If deficient, pre-event samples will be collected from priority locations. Crews will be deployed post-event to resample these locations allowing direct evaluation of impacts, as well as redefining baseline conditions for these areas. A tiered analytical and data integration strategy has been developed that will identify vulnerable human and environmental receptors, the sediment-bound contaminants present, and the biological activity and potential effects of exposure to characterized sediments. Communication mechanisms are in development to make resulting data available in a timely fashion and in a suitable format for informing event response and recovery efforts.
PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.
Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao
2015-11-06
Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility of hiding the receiver. However, multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and the extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm greatly improves target tracking performance in a cluttered environment, and its computational load is low.
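The EKF component can be summarized by its measurement update step. The sketch below applies it to a hypothetical bearing-only sonar measurement of a 2D position state; the PMHT association machinery is not shown, and all numbers are illustrative.

import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update: x, P are the prior state and covariance;
    z the measurement; h the nonlinear measurement function; H_jac its
    Jacobian evaluated at x; R the measurement noise covariance."""
    H = H_jac(x)
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical bearing-only measurement of a 2D position state.
h = lambda x: np.array([np.arctan2(x[1], x[0])])
H_jac = lambda x: np.array([[-x[1], x[0]]]) / (x[0]**2 + x[1]**2)
x0, P0 = np.array([1000.0, 500.0]), np.diag([1e4, 1e4])
z = np.array([np.arctan2(520.0, 990.0)])
print(ekf_update(x0, P0, z, h, H_jac, np.array([[1e-4]])))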
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jin; Han, Seungbong; Kim, Young Seok, E-mail: ysk@amc.seoul.kr
Purpose: A nomogram is a predictive statistical model that generates the continuous probability of a clinical event such as death or recurrence. The aim of the study was to construct a nomogram to predict 5-year overall survival after postoperative radiation therapy for stage IB to IIA cervical cancer. Methods and Materials: The clinical data from 1702 patients with early-stage cervical cancer, treated at 10 participating hospitals from 1990 to 2011, were reviewed to develop a prediction nomogram based on the Cox proportional hazards model. Demographic, clinical, and pathologic variables were included and analyzed to formulate the nomogram. The discrimination and calibration power of the model was measured using a concordance index (c-index) and calibration curve. Results: The median follow-up period for surviving patients was 75.6 months, and the 5-year overall survival probability was 87.1%. The final model was constructed using the following variables: age, number of positive pelvic lymph nodes, parametrial invasion, lymphovascular invasion, and the use of concurrent chemotherapy. The nomogram predicted the 5-year overall survival with a c-index of 0.69, which was superior to the predictive power of the International Federation of Gynecology and Obstetrics (FIGO) staging system (c-index of 0.54). Conclusions: A survival-predicting nomogram that offers an accurate level of prediction and discrimination was developed based on a large multi-center study. The model may be more useful than the FIGO staging system for counseling individual patients regarding prognosis.
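The c-index reported above has a simple operational definition: among usable patient pairs, the fraction in which the patient with the higher predicted risk fails earlier. A minimal sketch of Harrell's formulation with hypothetical data:

def concordance_index(times, events, risk_scores):
    """Harrell's c-index: among usable pairs (the earlier time is an
    observed event), the fraction where the higher risk score belongs
    to the earlier failure; ties in score count 0.5."""
    n_conc, n_pairs = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] == 1 and times[i] < times[j]:
                n_pairs += 1
                if risk_scores[i] > risk_scores[j]:
                    n_conc += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    n_conc += 0.5
    return n_conc / n_pairs

# Hypothetical follow-up times (months), event indicators, model scores.
print(concordance_index([20, 35, 50, 80], [1, 1, 0, 0], [0.9, 0.6, 0.4, 0.2]))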
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size, with bias decreasing as effort increased. Based on the simulation results, we recommend that the Chao estimator for model Mh be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
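For context, the recommended Chao estimator for model Mh needs only the frequencies of capture: N_hat = S + f1^2/(2 f2), where S is the number of distinct individuals seen, and f1 and f2 are the numbers seen exactly once and twice. A minimal sketch with hypothetical sighting counts:

def chao_mh(capture_counts):
    """Chao estimator for model M_h: N_hat = S + f1^2 / (2 * f2)."""
    S = len(capture_counts)
    f1 = sum(1 for c in capture_counts if c == 1)
    f2 = sum(1 for c in capture_counts if c == 2)
    if f2 == 0:
        raise ValueError("f2 = 0; a bias-corrected variant is needed")
    return S + f1 ** 2 / (2.0 * f2)

# Hypothetical per-bear sighting counts over one season.
counts = [1, 1, 1, 1, 2, 2, 3, 1, 2, 4, 1]
print(chao_mh(counts))  # estimated number of females with cubs-of-the-year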
Nada, Khaled H; Suliman, El Daw A
2010-07-01
To measure the prevalence of HIV/AIDS risk behaviors and related factors in a large, probability-based sample of boys and girls aged 12-17 years living on the streets of Egypt's largest urban centers of Greater Cairo and Alexandria. Time-location sampling (TLS) was used to recruit a cross-sectional sample of street children. Procedures entailed using key informants and field observation to create a sampling frame of locations at predetermined time intervals of the day, where street children congregate in the two cities, selecting a random sample of time-locations from the complete list, and intercepting children in the selected time-locations to assess eligibility and conduct interviews. Interviews gathered basic demographic information, life events on the street (including violence, abuse, forced sex), sexual and drug use behaviors, and HIV/AIDS knowledge. A total of 857 street children were enrolled in the two cities, with an age, sex, and time-location composition matching the sampling frame. The majority of these children had faced harassment or abuse (93%) typically by police and other street children, had used drugs (62%), and, among the older adolescents, were sexually active (67%). Among the sexually active 15-17-year-olds, most reported multiple partners (54%) and never using condoms (52%). Most girls (53% in Greater Cairo and 90% in Alexandria) had experienced sexual abuse. The majority of street children experienced more than one of these risks. Overlaps with populations at highest risk for HIV were substantial, namely men who have sex with men, commercial sex workers, and injection drug users. Our study using a randomized TLS approach produced a rigorous, diverse, probability-based sample of street children and documented very high levels of multiple concurrent risks. Our findings strongly advocate for multiple services including those addressing HIV and STI prevention and care, substance use, shelters, and sensitization of authorities to the plight of street children in Egypt.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thereby provides a way to generate multi-variate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS UNIX Library/Standalone uses the Latin Hypercube Sampling method (LHS) to generate samples. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the n values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
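The stratify-then-randomly-pair scheme described above fits in a dozen lines. A minimal sketch on the unit hypercube (uniform marginals, no correlation restriction), not the library's implementation:

import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Basic LHS on the unit hypercube: split [0,1] into n_samples
    equal-probability intervals per variable, draw one point per
    interval, then shuffle the interval order independently for each
    variable to pair the columns at random."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random((n_samples, n_vars))                 # position within interval
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_vars):                             # random pairing of columns
        rng.shuffle(strata[:, j])
    return strata

samples = latin_hypercube(5, 2, np.random.default_rng(3))
print(samples)  # one sample per stratum for each variable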
Liu, Jing; Li, Yongping; Huang, Guohe; Fu, Haiyan; Zhang, Junlong; Cheng, Guanhui
2017-06-01
In this study, a multi-level-factorial risk-inference-based possibilistic-probabilistic programming (MRPP) method is proposed for supporting water quality management under multiple uncertainties. The MRPP method can handle uncertainties expressed as fuzzy-random-boundary intervals, probability distributions, and interval numbers, and analyze the effects of uncertainties as well as their interactions on modeling outputs. It is applied to plan water quality management in the Xiangxihe watershed. Results reveal that a lower probability of satisfying the objective function (θ) as well as a higher probability of violating environmental constraints (q_i) would correspond to a higher system benefit with an increased risk of violating system feasibility. Chemical plants are the major contributors to biological oxygen demand (BOD) and total phosphorus (TP) discharges; total nitrogen (TN) would be mainly discharged by crop farming. It is also discovered that optimistic decision makers should pay more attention to the interactions between chemical plants and water supply, while decision makers who possess a risk-averse attitude would focus on the interactive effect of q_i and the benefit of water supply. The findings can help enhance the model's applicability and identify a suitable water quality management policy for environmental sustainability according to the practical situations.
Broekhuizen, Henk; IJzerman, Maarten J; Hauber, A Brett; Groothuis-Oudshoorn, Catharina G M
2017-03-01
The need for patient engagement has been recognized by regulatory agencies, but there is no consensus about how to operationalize this. One approach is the formal elicitation and use of patient preferences for weighing clinical outcomes. The aim of this study was to demonstrate how patient preferences can be used to weigh clinical outcomes when both preferences and clinical outcomes are uncertain by applying a probabilistic value-based multi-criteria decision analysis (MCDA) method. Probability distributions were used to model random variation and parameter uncertainty in preferences, and parameter uncertainty in clinical outcomes. The posterior value distributions and rank probabilities for each treatment were obtained using Monte-Carlo simulations. The probability of achieving the first rank is the probability that a treatment represents the highest value to patients. We illustrated our methodology for a simplified case on six HIV treatments. Preferences were modeled with normal distributions and clinical outcomes were modeled with beta distributions. The treatment value distributions showed the rank order of treatments according to patients and illustrate the remaining decision uncertainty. This study demonstrated how patient preference data can be used to weigh clinical evidence using MCDA. The model takes into account uncertainty in preferences and clinical outcomes. The model can support decision makers during the aggregation step of the MCDA process and provides a first step toward preference-based personalized medicine, yet requires further testing regarding its appropriate use in real-world settings.
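A minimal sketch of the probabilistic value-weighting step, following the paper's general setup but with invented numbers: preference weights are drawn from normal distributions and normalized, outcomes from beta distributions, and first-rank probabilities summarize decision uncertainty. The three treatments and two criteria below are hypothetical, not the HIV case data:

import numpy as np

rng = np.random.default_rng(11)

n_draws = 10_000
# Preference weights: normal draws, normalized to sum to 1 per draw.
w = np.abs(rng.normal([0.6, 0.4], [0.1, 0.1], size=(n_draws, 2)))
w /= w.sum(axis=1, keepdims=True)
# Clinical outcomes on [0, 1]: beta draws per treatment and criterion
# (hypothetical success/failure pseudo-counts).
alpha = np.array([[80, 20], [70, 15], [60, 30]])
beta_ = np.array([[20, 80], [30, 85], [40, 70]])
outcomes = rng.beta(alpha, beta_, size=(n_draws, 3, 2))
# Value per draw and treatment: weighted sum over criteria.
values = np.einsum('dk,dtk->dt', w, outcomes)
first_rank = np.bincount(values.argmax(axis=1), minlength=3) / n_draws
print("P(first rank):", first_rank)  # remaining decision uncertainty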
Adaptive Randomization of Neratinib in Early Breast Cancer
Park, John W.; Liu, Minetta C.; Yee, Douglas; Yau, Christina; van 't Veer, Laura J.; Symmans, W. Fraser; Paoloni, Melissa; Perlmutter, Jane; Hylton, Nola M.; Hogarth, Michael; DeMichele, Angela; Buxton, Meredith B.; Chien, A. Jo; Wallace, Anne M.; Boughey, Judy C.; Haddad, Tufia C.; Chui, Stephen Y.; Kemmer, Kathleen A.; Kaplan, Henry G.; Liu, Minetta C.; Isaacs, Claudine; Nanda, Rita; Tripathy, Debasish; Albain, Kathy S.; Edmiston, Kirsten K.; Elias, Anthony D.; Northfelt, Donald W.; Pusztai, Lajos; Moulder, Stacy L.; Lang, Julie E.; Viscusi, Rebecca K.; Euhus, David M.; Haley, Barbara B.; Khan, Qamar J.; Wood, William C.; Melisko, Michelle; Schwab, Richard; Lyandres, Julia; Davis, Sarah E.; Hirst, Gillian L.; Sanil, Ashish; Esserman, Laura J.; Berry, Donald A.
2017-01-01
Background I-SPY2, a standing, multicenter, adaptive phase 2 neoadjuvant trial ongoing in high-risk clinical stage II/III breast cancer, is designed to evaluate multiple novel experimental agents added to standard chemotherapy for their ability to improve the rate of pathologic complete response (pCR). Experimental therapies are compared against a common control arm. We report efficacy for the tyrosine kinase inhibitor neratinib. Methods Eligible women had ≥2.5 cm stage II/III breast cancer, categorized into 8 biomarker subtypes based on HER2 status, hormone-receptor (HR) status, and MammaPrint. Neratinib was evaluated for 10 signatures (prospectively defined subtype combinations), with pCR as the primary endpoint. MR volume changes inform the likelihood of pCR for each patient prior to surgery. Adaptive assignment to experimental arms within disease subtype was based on current Bayesian probabilities of superiority over control. Accrual to an experimental arm stops at any time for futility, or for graduation within a particular signature based on the Bayesian predictive probability of success in a confirmatory trial. The maximum sample size in any experimental arm is 120 patients. Results With 115 patients and 78 concurrently randomized controls, neratinib graduated in the HER2+/HR− signature, with a mean pCR rate of 56% (95% PI: 37 to 73%) vs 33% for controls (11 to 54%). The final predictive probability of success, updated when all pathology data were available, was 79%. Conclusion Adaptive, multi-armed trials can efficiently identify responding tumor subtypes. Neratinib added to standard therapy is highly likely to improve pCR rates in HER2+/HR− breast cancer. Confirmation in I-SPY 3, a phase 3 neoadjuvant registration trial, is planned. PMID:27406346
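The Bayesian comparison that drives adaptive assignment can be illustrated with a toy posterior calculation. The sketch below assumes flat Beta(1, 1) priors and hypothetical pCR counts (the arm sizes match the abstract, but the success counts are invented); it estimates the probability that the experimental pCR rate exceeds control.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pCR counts (arm sizes from the abstract; successes invented).
pcr_exp, n_exp = 64, 115   # experimental arm
pcr_ctl, n_ctl = 26, 78    # concurrent controls

# Beta(1, 1) priors updated with the counts give Beta posteriors.
p_exp = rng.beta(1 + pcr_exp, 1 + n_exp - pcr_exp, size=100_000)
p_ctl = rng.beta(1 + pcr_ctl, 1 + n_ctl - pcr_ctl, size=100_000)

# Posterior probability that the experimental pCR rate beats control.
print("P(p_exp > p_ctl) =", (p_exp > p_ctl).mean())
```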
Wang, Yaping; Nie, Jingxin; Yap, Pew-Thian; Li, Gang; Shi, Feng; Geng, Xiujuan; Guo, Lei; Shen, Dinggang
2014-01-01
Accurate and robust brain extraction is a critical step in most neuroimaging analysis pipelines. In particular, for the large-scale multi-site neuroimaging studies involving a significant number of subjects with diverse age and diagnostic groups, accurate and robust extraction of the brain automatically and consistently is highly desirable. In this paper, we introduce population-specific probability maps to guide the brain extraction of diverse subject groups, including both healthy and diseased adult human populations, both developing and aging human populations, as well as non-human primates. Specifically, the proposed method combines an atlas-based approach, for coarse skull-stripping, with a deformable-surface-based approach that is guided by local intensity information and population-specific prior information learned from a set of real brain images for more localized refinement. Comprehensive quantitative evaluations were performed on the diverse large-scale populations of ADNI dataset with over 800 subjects (55∼90 years of age, multi-site, various diagnosis groups), OASIS dataset with over 400 subjects (18∼96 years of age, wide age range, various diagnosis groups), and NIH pediatrics dataset with 150 subjects (5∼18 years of age, multi-site, wide age range as a complementary age group to the adult dataset). The results demonstrate that our method consistently yields the best overall results across almost the entire human life span, with only a single set of parameters. To demonstrate its capability to work on non-human primates, the proposed method is further evaluated using a rhesus macaque dataset with 20 subjects. Quantitative comparisons with popularly used state-of-the-art methods, including BET, Two-pass BET, BET-B, BSE, HWA, ROBEX and AFNI, demonstrate that the proposed method performs favorably with superior performance on all testing datasets, indicating its robustness and effectiveness. PMID:24489639
Sacco, Ralph L.; Khatri, Minesh; Rundek, Tatjana; Xu, Qiang; Gardener, Hannah; Boden-Albala, Bernadette; Di Tullio, Marco R.; Homma, Shunichi; Elkind, Mitchell SV; Paik, Myunghee C
2010-01-01
Objective To improve global vascular risk prediction with behavioral and anthropometric factors. Background Few cardiovascular risk models are designed to predict the global vascular risk of MI, stroke, or vascular death in multi-ethnic individuals, and existing schemes do not fully include behavioral risk factors. Methods A randomly derived, population-based, prospective cohort of 2737 community participants free of stroke and coronary artery disease was followed annually for a median of 9.0 years in the Northern Manhattan Study (mean age 69 years; 63.2% women; 52.7% Hispanic, 24.9% African-American, 19.9% white). A global vascular risk score (GVRS) predictive of stroke, myocardial infarction, or vascular death was developed by adding variables to the traditional Framingham cardiovascular variables based on the likelihood ratio criterion. Model utility was assessed through receiver operating characteristics, calibration, and effect on reclassification of subjects. Results Variables that significantly added to the traditional Framingham profile included waist circumference, alcohol consumption, and physical activity. Continuous measures for blood pressure and fasting blood sugar were used instead of hypertension and diabetes. Ten-year event-free probabilities were 0.95 for the first quartile of GVRS, 0.89 for the second quartile, 0.79 for the third quartile, and 0.56 for the fourth quartile. The addition of behavioral factors in our model improved prediction of 10-year event rates compared to a model restricted to the traditional variables. Conclusion A global vascular risk score that combines traditional, behavioral, and anthropometric risk factors, uses continuous variables for physiological parameters, and is applicable to non-white subjects could improve primary prevention strategies. PMID:19958966
Framing From Experience: Cognitive Processes and Predictions of Risky Choice.
Gonzalez, Cleotilde; Mehlhorn, Katja
2016-07-01
A framing bias shows risk aversion in problems framed as "gains" and risk seeking in problems framed as "losses," even when these are objectively equivalent and the probabilities and outcome values are explicitly provided. We test this framing bias in situations where decision makers rely on their own experience, sampling the problem's options (safe and risky) and seeing the outcomes before making a choice. In Experiment 1, we replicate the framing bias in description-based decisions and find risk indifference for both gains and losses in experience-based decisions. Predictions of an Instance-Based Learning model suggest that the objective probabilities as well as the number of samples taken are factors that contribute to the lack of a framing effect. We test these two factors in Experiment 2 and find no framing effect when few samples are taken; when large samples are taken, the framing effect appears regardless of the objective probability values. Implications of the behavioral results and cognitive modeling are discussed. Copyright © 2015 Cognitive Science Society, Inc.
Farberg, Aaron S; Winkelmann, Richard R; Tucker, Natalie; White, Richard; Rigel, Darrell S
2017-09-01
BACKGROUND: Early diagnosis of melanoma is critical to survival. New technologies, such as a multi-spectral digital skin lesion analysis (MSDSLA) device [MelaFind, STRATA Skin Sciences, Horsham, Pennsylvania], may be useful to enhance clinician evaluation of concerning pigmented skin lesions. Previous studies evaluated only the effect of the device's binary output. OBJECTIVE: The objective of this study was to determine how the biopsy decisions dermatologists make regarding pigmented lesions are affected by providing both the underlying classifier score (CS) and the associated probability of risk provided by MSDSLA. This outcome was also compared against the improvement reported with the provision of only the binary output. METHODS: Dermatologists attending an educational conference evaluated 50 pigmented lesions (25 melanomas and 25 benign lesions). Participants were asked if they would biopsy the lesion based on clinical images, and were asked this question again after being shown MSDSLA data that included the probability graphs and classifier score. RESULTS: Data were analyzed from a total of 160 United States board-certified dermatologists. Biopsy sensitivity for melanoma improved from 76 percent following clinical evaluation to 92 percent after quantitative MSDSLA information was provided (p < 0.0001). Specificity improved from 52 percent to 79 percent (p < 0.0001). The positive predictive value increased from 61 percent to 81 percent (p < 0.01) when the quantitative data were provided. Negative predictive value also increased (68% vs. 91%, p < 0.01), and overall biopsy accuracy was greater with MSDSLA (64% vs. 86%, p < 0.001). Interrater reliability also improved (intraclass correlation 0.466 before, 0.559 after). CONCLUSION: Incorporating the classifier score and probability data into physician evaluation of pigmented lesions increased both sensitivity and specificity, thereby resulting in more accurate biopsy decisions.
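The reported gains can be checked with simple diagnostic arithmetic. The sketch below defines the standard metrics and evaluates a hypothetical 2x2 reading of 25 melanomas and 25 benign lesions chosen to approximate the post-MSDSLA figures; the counts are illustrative, not the study data.

```python
def diagnostics(tp, fn, tn, fp):
    """Standard diagnostic metrics from a 2x2 decision table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Hypothetical post-MSDSLA counts for 25 melanomas and 25 benign lesions,
# chosen to land near the reported 92% sensitivity and ~80% specificity.
print(diagnostics(tp=23, fn=2, tn=20, fp=5))
```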
Economic evaluation of nebulized magnesium sulphate in acute severe asthma in children.
Petrou, Stavros; Boland, Angela; Khan, Kamran; Powell, Colin; Kolamunnage-Dona, Ruwanthi; Lowe, John; Doull, Iolo; Hood, Kerry; Williamson, Paula
2014-10-01
The aim of this study was to estimate the cost-effectiveness of nebulized magnesium sulphate (MgSO4) in acute asthma in children from the perspective of the UK National Health Service and personal social services. An economic evaluation was conducted based on evidence from a randomized, placebo-controlled, multi-center trial of nebulized MgSO4 in severe acute asthma in children. Participants comprised 508 children aged 2-16 years presenting to an emergency department or a children's assessment unit with severe acute asthma across thirty hospitals in the United Kingdom. Children were randomly allocated to receive nebulized salbutamol and ipratropium bromide mixed with either 2.5 ml of isotonic MgSO4 or 2.5 ml of isotonic saline on three occasions at 20-min intervals. Cost-effectiveness outcomes were constructed around the Yung Asthma Severity Score (ASS) after 60 min of treatment, whilst cost-utility outcomes were constructed around the quality-adjusted life-year (QALY) metric. The nonparametric bootstrap method was used to present cost-effectiveness acceptability curves at alternative cost-effectiveness thresholds for either (i) a unit reduction in ASS or (ii) an additional QALY. MgSO4 had a 75.1 percent probability of being cost-effective at a GBP 1,000 (EUR 1,148) per unit decrement in ASS threshold, an 88.0 percent probability of being more effective (in terms of reducing the ASS), and a 36.6 percent probability of being less costly. MgSO4 also had a 67.6 percent probability of being cost-effective at a GBP 20,000 (EUR 22,957) per QALY gained threshold, an 8.5 percent probability of being more effective (in terms of generating increased QALYs), and a 69.1 percent probability of being less costly. Sensitivity analyses showed that the results of the economic evaluation were particularly sensitive to the methods used for QALY estimation. The probability of cost-effectiveness of nebulized isotonic MgSO4, given as an adjuvant to standard treatment of severe acute asthma in children, is less than 70 percent across accepted cost-effectiveness thresholds for an additional QALY.
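A cost-effectiveness acceptability curve of this kind is typically built by bootstrapping patient-level costs and effects and computing the incremental net monetary benefit at each willingness-to-pay threshold. The sketch below uses simulated stand-in data, not the trial's, to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 254  # patients per arm (hypothetical)

# Simulated stand-ins for patient-level costs (GBP) and effects (QALYs).
cost_tx, eff_tx = rng.gamma(2.0, 600.0, n), rng.normal(0.80, 0.25, n)
cost_ct, eff_ct = rng.gamma(2.0, 650.0, n), rng.normal(0.74, 0.25, n)

def ceac(thresholds, n_boot=2000):
    """P(treatment cost-effective) at each willingness-to-pay threshold."""
    probs = []
    for lam in thresholds:
        inmb = np.empty(n_boot)
        for b in range(n_boot):
            i, j = rng.integers(0, n, n), rng.integers(0, n, n)
            # Incremental net monetary benefit for this bootstrap replicate.
            inmb[b] = (lam * eff_tx[i].mean() - cost_tx[i].mean()) \
                    - (lam * eff_ct[j].mean() - cost_ct[j].mean())
        probs.append((inmb > 0).mean())
    return probs

print(ceac([0, 1_000, 5_000, 20_000]))
```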
Techasrivichien, Teeranee; Darawuttimaprakorn, Niphon; Punpuing, Sureeporn; Musumari, Patou Masika; Lukhele, Bhekumusa Wellington; El-Saaidi, Christina; Suguimoto, S Pilar; Feldman, Mitchell D; Ono-Kihara, Masako; Kihara, Masahiro
2016-02-01
Thailand has undergone rapid modernization with implications for changes in sexual norms. We investigated sexual behavior and attitudes across generations and gender among a probability sample of the general population of Nonthaburi province located near Bangkok in 2012. A tablet-based survey was performed among 2,138 men and women aged 15-59 years identified through a three-stage, stratified, probability proportional to size, clustered sampling. Descriptive statistical analysis was carried out accounting for the effects of multistage sampling. Relationship of age and gender to sexual behavior and attitudes was analyzed by bivariate analysis followed by multivariate logistic regression analysis to adjust for possible confounding. Patterns of sexual behavior and attitudes varied substantially across generations and gender. We found strong evidence for a decline in the age of sexual initiation, a shift in the type of the first sexual partner, and a greater rate of acceptance of adolescent premarital sex among younger generations. The study highlighted profound changes among young women as evidenced by a higher number of lifetime sexual partners as compared to older women. In contrast to the significant gender gap in older generations, sexual profiles of Thai young women have evolved to resemble those of young men with attitudes gradually converging to similar sexual standards. Our data suggest that higher education, being never-married, and an urban lifestyle may have been associated with these changes. Our study found that Thai sexual norms are changing dramatically. It is vital to continue monitoring such changes, considering the potential impact on the HIV/STIs epidemic and unintended pregnancies.
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko
2014-01-01
Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subjects (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters of the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal, and log-logistic distributions was selected based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution, and the relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the most optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at the 5, 25, and 50% risk levels for lower leg fracture. For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
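The survival-analysis machinery described here can be sketched compactly: a Weibull model fitted by maximum likelihood to peak forces, with non-injury tests entering as right-censored observations. The data below are synthetic and the censoring status is assigned at random, which a real analysis would not do; the point is the likelihood structure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
force = 9.0 * rng.weibull(4.0, 60)       # synthetic peak forces (kN)
injured = rng.random(60) < 0.6           # synthetic status; False = censored

def neg_loglik(log_params):
    shape, scale = np.exp(log_params)    # log-parametrization keeps both > 0
    ll = weibull_min.logpdf(force[injured], shape, scale=scale).sum()   # events
    ll += weibull_min.logsf(force[~injured], shape, scale=scale).sum()  # censored
    return -ll

fit = minimize(neg_loglik, x0=np.log([2.0, 8.0]))
shape, scale = np.exp(fit.x)
# Peak force at a 25% injury risk from the fitted distribution.
print(shape, scale, weibull_min.ppf(0.25, shape, scale=scale))
```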
Physical Interpretation of the Correlation Between Multi-Angle Spectral Data and Canopy Height
NASA Technical Reports Server (NTRS)
Schull, M. A.; Ganguly, S.; Samanta, A.; Huang, D.; Shabanov, N. V.; Jenkins, J. P.; Chiu, J. C.; Marshak, A.; Blair, J. B.; Myneni, R. B.;
2007-01-01
Recent empirical studies have shown that multi-angle spectral data can be useful for predicting canopy height, but the physical reason for this correlation was not understood. We follow the concept of canopy spectral invariants, specifically the escape probability, to gain insight into the observed correlation. Airborne Multi-Angle Imaging Spectrometer (AirMISR) and airborne Laser Vegetation Imaging Sensor (LVIS) data acquired during a NASA Terrestrial Ecology Program aircraft campaign underlie our analysis. Two multivariate linear regression models were developed to estimate LVIS height measures, one from 28 AirMISR multi-angle spectral reflectances and one from the spectrally invariant escape probability at 7 AirMISR view angles. Both models achieved nearly the same accuracy, suggesting that canopy spectral invariant theory can explain the observed correlation. We hypothesize that the escape probability is sensitive to the aspect ratio (crown diameter to crown height). The multi-angle spectral data alone therefore may not provide enough information to retrieve canopy height globally.
Gariepy, Aileen M; Creinin, Mitchell D; Smith, Kenneth J; Xu, Xiao
2014-08-01
To compare the expected probability of pregnancy after hysteroscopic versus laparoscopic sterilization based on available data using decision analysis. We developed an evidence-based Markov model to estimate the probability of pregnancy over 10 years after three different female sterilization procedures: hysteroscopic sterilization, laparoscopic silicone rubber band application, and laparoscopic bipolar coagulation. Parameter estimates for procedure success, probability of completing follow-up testing, and risk of pregnancy after the different sterilization procedures were obtained from published sources. In the base case analysis, at all points in time after the sterilization procedure, the initial and cumulative risk of pregnancy is higher in women opting for hysteroscopic sterilization than for either laparoscopic band or bipolar sterilization. The expected pregnancy rates per 1000 women at 1 year are 57, 7, and 3 for hysteroscopic sterilization, laparoscopic silicone rubber band application, and laparoscopic bipolar coagulation, respectively. At 10 years, the cumulative pregnancy rates per 1000 women are 96, 24, and 30, respectively. Sensitivity analyses suggest that the three procedures would have an equivalent pregnancy risk of approximately 80 per 1000 women at 10 years if the probability of successful laparoscopic (band or bipolar) sterilization drops below 90% and successful coil placement on the first hysteroscopic attempt increases to 98%, or if the probability of undergoing a hysterosalpingogram increases to 100%. Based on available data, the expected population risk of pregnancy is higher after hysteroscopic than laparoscopic sterilization. Consistent with existing contraceptive classification, future characterization of hysteroscopic sterilization should distinguish "perfect" and "typical" use failure rates. Pregnancy probability at 1 year and over 10 years is expected to be higher in women having hysteroscopic as compared to laparoscopic sterilization. Copyright © 2014 Elsevier Inc. All rights reserved.
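The cumulative-risk arithmetic behind such a model reduces to compounding annual failure probabilities. The sketch below uses illustrative annual probabilities (a first-year risk near the reported 57 per 1000, then a constant small annual risk), not the published transition parameters.

```python
def cumulative_pregnancies_per_1000(annual_probs):
    """Compound annual pregnancy probabilities into cumulative rates."""
    surviving, cumulative = 1.0, []
    for p in annual_probs:
        surviving *= 1.0 - p
        cumulative.append(round(1000 * (1.0 - surviving)))
    return cumulative

# Illustrative only: year-1 risk of 57/1000, then 4.5/1000 per year.
print(cumulative_pregnancies_per_1000([0.057] + [0.0045] * 9))
```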
A Bayesian Method for Evaluating and Discovering Disease Loci Associations
Jiang, Xia; Barmada, M. Michael; Cooper, Gregory F.; Becich, Michael J.
2011-01-01
Background A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings We introduce the Bayesian network posterior probability (BNPP) method which addresses the difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are found. Conclusions/Significance We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. PMID:21853025
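The final aggregation step of the BNPP, computing a posterior over competing hypotheses from their likelihoods, is a direct application of Bayes' rule. The sketch below uses invented log marginal likelihoods and priors; the max-subtraction guards against numerical underflow.

```python
import numpy as np

# Invented log marginal likelihoods for three competing hypotheses
# (e.g., no association, single-locus, two-locus interaction).
log_lik = np.array([-1042.3, -1038.9, -1040.1])
prior = np.array([0.50, 0.25, 0.25])

log_post = np.log(prior) + log_lik
log_post -= log_post.max()                # stabilize before exponentiating
post = np.exp(log_post) / np.exp(log_post).sum()
print("posterior:", post.round(3))
```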
Le, Quang A; Doctor, Jason N
2011-05-01
As quality-adjusted life years have become the standard metric in health economic evaluations, mapping health-profile or disease-specific measures onto preference-based measures to obtain quality-adjusted life years has become a solution when health utilities are not directly available. However, current mapping methods are limited by their predictive validity, reliability, and/or other methodological issues. We employ probability theory together with a graphical model, called a Bayesian network, to convert health-profile measures into preference-based measures and compare the results to those estimated with current mapping methods. A sample of 19,678 adults who completed both the 12-item Short Form Health Survey (SF-12v2) and EuroQol 5D (EQ-5D) questionnaires from the 2003 Medical Expenditure Panel Survey was split into training and validation sets. Bayesian networks were constructed to explore the probabilistic relationships between each EQ-5D domain and the 12 items of the SF-12v2. The EQ-5D utility scores were estimated on the basis of the predicted probability of each response level of the 5 EQ-5D domains obtained from the Bayesian inference process, using the following methods: Monte Carlo simulation, expected utility, and most-likely probability. Results were then compared with current mapping methods, including multinomial logistic regression, ordinary least squares, and censored least absolute deviations. The Bayesian networks consistently outperformed the other mapping models in the overall sample (mean absolute error = 0.077, mean square error = 0.013, and overall R = 0.802), across age groups, numbers of chronic conditions, and ranges of the EQ-5D index. Bayesian networks provide a new, robust, and natural approach to map health status responses into health utility measures for health economic evaluations.
Wang, Shi-Heng; Lin, I-Chin; Chen, Chuan-Yu; Chen, Duan-Rung; Chan, Ta-Chien; Chen, Wei J
2013-12-01
To examine the association between alcohol in school environments and adolescent alcohol use over the previous 6 months. A multi-level logistic regression analysis was performed on cross-sectional surveys conducted in 2004, 2005, and 2006. A total of 52,214 students aged 11-19 years from 387 middle or high schools were selected through a nationally representative, multi-stage, stratified probability sampling across Taiwan. Information on socio-demographic features and substance use experiences was collected using self-administered questionnaires. Alcohol in the school environment was measured by the availability of convenience stores surrounding the schools. Using geographical information systems, the weighted number of convenience stores within 1 km, a 12-15-minute walk, of a school was calculated. The schools were then categorized into three subgroups via tertiles of nearby convenience stores. Accounting for compositional characteristics, the availability of convenience stores was found to explain 1.5% of the school-level variance in youthful drinking. The odds ratios (95% confidence interval) of alcohol use over the previous 6 months among youth attending schools with medium and high availability were 1.04 (0.96-1.13) and 1.08 (1.00-1.17), respectively, with a P-value of 0.04 in the trend test. The greater availability of convenience stores near a school is associated with an increased risk of alcohol use among adolescents over the previous 6 months. © 2013 Society for the Study of Addiction.
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on a sharp-boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
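The MCMC exploration can be illustrated with a bare-bones Metropolis-Hastings loop. The sketch below replaces the finite-element MT forward model with a toy linear forward operator and samples a single log-resistivity parameter; every numerical value is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)
obs, sigma = 2.3, 0.2                     # synthetic datum and noise level

def log_post(m):
    pred = 0.8 * m + 0.5                  # toy forward model, not MT physics
    prior = -0.5 * (m / 5.0) ** 2         # broad Gaussian prior
    return -0.5 * ((obs - pred) / sigma) ** 2 + prior

chain, x = [], 0.0
for _ in range(20_000):
    prop = x + rng.normal(0.0, 0.5)       # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                          # accept; otherwise keep current state
    chain.append(x)

post = np.array(chain[5_000:])            # discard burn-in
print(post.mean(), post.std())            # posterior mean and uncertainty
```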
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
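For a single birth-death reaction channel, the truncated dCME steady state can be computed directly from the rate matrix, which conveys the flavor of what ACME does at much larger scale. The sketch below assumes illustrative birth and death rates and a generous truncation.

```python
import numpy as np

k_birth, k_death, n_max = 8.0, 1.0, 60    # illustrative rates; generous buffer
Q = np.zeros((n_max + 1, n_max + 1))      # CTMC generator on {0, ..., n_max}
for n in range(n_max + 1):
    if n < n_max:
        Q[n, n + 1] = k_birth             # birth: n -> n+1
    if n > 0:
        Q[n, n - 1] = k_death * n         # death: n -> n-1
    Q[n, n] = -Q[n].sum()                 # rows of a generator sum to zero

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n_max + 1)])
b = np.zeros(n_max + 2); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi.argmax(), pi.max())              # peak near k_birth / k_death = 8
```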
McDonell, James R; Ben-Arieh, Asher; Melton, Gary B
2015-03-01
This article reports the evaluation results from Strong Communities for Children, a multi-year comprehensive community-based initiative to prevent child maltreatment and improve children's safety. The outcome study consisted of a survey of a random sample of caregivers of children under age 10 in the Strong Communities service area and a set of comparison communities matched at the block group level on demography. Survey data were collected in two waves 4 years apart. Data were collected on (a) perceptions of the neighborhood and neighbors (e.g., neighboring, collective efficacy), (b) perceptions of neighbors' parenting practices, (c) parental attitudes and beliefs (e.g., parental stress; parental efficacy), and (d) self-reported parenting practices. The survey data were supplemented by data on substantiated reported rates of child abuse and neglect per 1,000 children and ICD-9 coded child injuries suggesting child abuse and neglect per 1,000 children. Compared to the non-intervention sample across time, the Strong Communities samples showed significant changes in the expected direction for social support, collective efficacy, child safety in the home, observed parenting practices, parental stress, parental efficacy, self-reported parenting practices, rates of officially substantiated child maltreatment, and rates of ICD-9 coded child injuries suggesting child maltreatment. These promising results, obtained through multiple methods of evaluation, confirm that a community mobilization strategy can shift norms of parents' care for their children and neighbors' support for one another, so that young children are safer at home and in the community. Replications should be undertaken and evaluated in other communities under diverse auspices. Copyright © 2014 Elsevier Ltd. All rights reserved.
Performance and applications of GaAs:Cr-based Medipix detector in X-ray CT
NASA Astrophysics Data System (ADS)
Kozhevnikov, D.; Chelkov, G.; Demichev, M.; Gridin, A.; Smolyanskiy, P.; Zhemchugov, A.
2017-01-01
In recent years, the method of single-photon-counting X-ray μ-CT has been actively developed and applied in various fields. Results of our studies carried out using the MARS μ-CT scanner equipped with a GaAs Medipix-based camera are presented. The procedure for mechanical alignment of the scanner is described, including direct and indirect measurements of the spatial resolution. The software chain for data processing and reconstruction has been developed and is reported. We demonstrate the possibility of applying the scanner to research in geology and medicine and provide demo images of geological samples (chrome spinellids, titanium magnetite ore) and medical samples (atherosclerotic plaque, abdominal aortic aneurysm). The first results of multi-energy scans using the GaAs:Cr-based camera are shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasson, J.T.; Ouyang, Xinwei; Wang, Jianmin
1989-03-01
The authors report concentrations of 14 elements in the metal of 38 iron meteorites and a pallasite. The meteorites are classified based on these data and on structural observations. Three samples are paired with previously classified irons; thus, these additional 35 irons raise the number of well-classified, independent iron meteorites to 598. One Yamato iron contains 342 mg/g Ni, the second highest Ni content in an IAB iron after Oktibbeha County. Two small irons from Western Australia appear to be metal nodules from mesosiderites. Several of the new irons are from Antarctica. Of 24 independent irons from Antarctica, 8 are ungrouped. The fraction, 0.333, is much higher than the fraction of 0.161 among all 598 classified irons. Statistical tests show that it is highly improbable (approximately 2.9% probability) that the Antarctic population is a random sample of the larger population. The difference is probably related to the fact that the median mass of Antarctic irons is about two orders of magnitude smaller than that of non-Antarctic irons. It is doubtful that the difference results from fragmentation patterns yielding different size distributions favoring smaller masses among ungrouped irons. More likely is the possibility that smaller meteoroids tend to sample a larger number of asteroidal source regions, perhaps because small meteoroids tend to have higher ejection velocities or because small meteoroids have random-walked a greater increment of orbital semimajor axis away from that of the parent body.
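The quoted probability is easy to sanity-check under a simple binomial model of ungrouped counts, assuming independence among irons (an assumption the authors' actual test may refine):

```python
from scipy.stats import binom

# P(8 or more ungrouped out of 24) when the overall ungrouped rate is 0.161.
p_tail = binom.sf(7, 24, 0.161)
print(f"{p_tail:.3f}")   # about 0.03, consistent with the quoted ~2.9%
```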
Norms governing urban African American adolescents’ sexual and substance-using behavior
Dolcini, M. Margaret; Catania, Joseph A.; Harper, Gary W.; Watson, Susan E.; Ellen, Jonathan M.; Towner, Senna L.
2013-01-01
Using a probability-based neighborhood sample of urban African American youth and a sample of their close friends (N = 202), we conducted a one-year longitudinal study to examine key questions regarding sexual and drug using norms. The results provide validation of social norms governing sexual behavior, condom use, and substance use among friendship groups. These norms had strong to moderate homogeneity; and both normative strength and homogeneity were relatively stable over a one-year period independent of changes in group membership. The data further suggest that sex and substance using norms may operate as a normative set. Similar to studies of adults, we identified three distinct “norm-based” social strata in our sample. Together, our findings suggest that the norms investigated are valid targets for health promotion efforts, and such efforts may benefit from tailoring programs to the normative sets that make up the different social strata in a given adolescent community. PMID:23072891
Multi-view L2-SVM and its multi-view core vector machine.
Huang, Chengquan; Chung, Fu-lai; Wang, Shitong
2016-03-01
In this paper, a novel L2-SVM-based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence, like ν-SVC, offers the flexibility that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into a GCVM version, MvCVM, which enables fast training on large-scale multi-view datasets, with asymptotically linear time complexity in the sample size and space complexity independent of the sample size. Our experimental results demonstrate the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
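The contrast between modal assignment and multiple imputation of class membership can be shown on any fitted mixture. The sketch below uses a toy two-component Gaussian mixture via scikit-learn rather than the infant temperament data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
# Toy data from two latent classes (stand-in for temperament scores).
x = np.concatenate([rng.normal(-1, 1, 150), rng.normal(2, 1, 150)])[:, None]

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
post = gm.predict_proba(x)               # posterior class probabilities

modal = post.argmax(axis=1)              # hard (modal) assignment
# Multiple imputation: draw memberships from the posterior so downstream
# analyses reflect classification uncertainty, as the abstract proposes.
imputations = [np.array([rng.choice(2, p=p) for p in post]) for _ in range(5)]
print(post[:3].round(2), modal[:3], imputations[0][:3])
```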
Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections
Fisher, Jason T.; Heim, Nicole; Code, Sandra; Paczkowski, John
2016-01-01
Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears’ range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error–arising when a visiting bear fails to leave a hair sample–has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation–which form the crux of management plans–require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and conservation actions are based. PMID:27603134
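The cost of missed detections is easy to quantify in the simplest occupancy setting: with per-survey detection probability p and K independent surveys, the probability of a false absence at an occupied site is (1 − p)^K. The sketch below, with assumed values, shows how this deflates a naive occupancy estimate.

```python
def p_false_absence(p_detect, k_surveys):
    """Probability an occupied site yields no detection in k surveys."""
    return (1.0 - p_detect) ** k_surveys

psi_true = 0.6                            # assumed true occupancy
for p in (0.2, 0.5, 0.8):
    miss = p_false_absence(p, 8)          # 8 monthly surveys, as in the study
    naive = psi_true * (1.0 - miss)       # expected naive occupancy estimate
    print(f"p={p}: P(false absence)={miss:.3f}, naive occupancy={naive:.3f}")
```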
NASA Astrophysics Data System (ADS)
Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang
2016-04-01
This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
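For the time-homogeneous special case, the time-dependent state probability distribution of a multi-state element follows from the matrix exponential of the generator; the paper's recursive approach handles the harder non-homogeneous case. The rates below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative generator for a 3-state chain: good -> degraded -> failed.
Q = np.array([[-0.10,  0.10,  0.00],
              [ 0.00, -0.05,  0.05],
              [ 0.00,  0.00,  0.00]])

p0 = np.array([1.0, 0.0, 0.0])            # element starts in the good state
for t in (1.0, 5.0, 20.0):
    print(t, (p0 @ expm(Q * t)).round(4)) # state probabilities at time t
```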
Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction.
Cheng, Jian; Jiang, Tianzi; Deriche, Rachid
2012-01-01
In High Angular Resolution Diffusion Imaging (HARDI), the Orientation Distribution Function (ODF) and Ensemble Average Propagator (EAP) are two important Probability Density Functions (PDFs) which reflect the water diffusion and fiber orientations. Spherical Polar Fourier Imaging (SPFI) is a recent model-free multi-shell HARDI method which estimates both the EAP and ODF from diffusion signals with multiple b values. As physical PDFs, ODFs and EAPs are nonnegative definite in their respective domains S2 and R3. However, existing ODF/EAP estimation methods like SPFI seldom consider this natural constraint. Although some works have considered the nonnegative constraint on given discrete samples of the ODF/EAP, the estimated ODF/EAP is not guaranteed to be nonnegative definite in the whole continuous domain. The Riemannian framework for ODFs and EAPs has been proposed via the square-root parameterization based on ODFs and EAPs pre-estimated by other methods like SPFI. However, there is no work on how to estimate the square root of the ODF/EAP, called the wavefunction, directly from diffusion signals. In this paper, based on the Riemannian framework for ODFs/EAPs and the Spherical Polar Fourier (SPF) basis representation, we propose a unified model-free multi-shell HARDI method, named Square Root Parameterized Estimation (SRPE), to simultaneously estimate both the wavefunction of EAPs and the nonnegative definite ODFs and EAPs from diffusion signals. Experiments on synthetic and real data showed that SRPE is more robust to noise and gives better EAP reconstruction than SPFI, especially for EAP profiles at large radius.
Qureshi, Waqas T; Michos, Erin D; Flueckiger, Peter; Blaha, Michael; Sandfort, Veit; Herrington, David M; Burke, Gregory; Yeboah, Joseph
2016-09-01
The increase in statin eligibility under the new cholesterol guidelines is mostly driven by the Pooled Cohort Equation (PCE) criterion (≥7.5% 10-year PCE risk). The impact of replacing the PCE with either the modified Framingham Risk Score (FRS) or the Systematic Coronary Risk Evaluation (SCORE) on atherosclerotic cardiovascular disease (ASCVD) risk assessment and statin eligibility remains unknown. We assessed the comparative benefits of using the PCE, FRS, and SCORE for ASCVD risk assessment in the Multi-Ethnic Study of Atherosclerosis. Of 6,815 participants, 654 (mean age 61.4 ± 10.3; 47.1% men; 37.1% whites; 27.2% blacks; 22.3% Hispanics; 12.0% Chinese-Americans) were included in the analysis. Area under the curve (AUC) and decision curve analysis were used to compare the risk scores. Decision curve analysis plots net benefit against threshold probabilities, where net benefit = true-positive rate − (false-positive rate × weighting factor) and the weighting factor = threshold probability / (1 − threshold probability). After a median of 8.6 years, 342 (6.0%) ASCVD events (myocardial infarction, coronary heart disease death, fatal or nonfatal stroke) occurred. All 4 risk scores had acceptable discriminative ability for incident ASCVD events (AUC [95% CI]: PCE, 0.737 [0.713 to 0.762]; FRS, 0.717 [0.691 to 0.743]; SCORE (high risk), 0.722 [0.696 to 0.747]; and SCORE (low risk), 0.721 [0.696 to 0.746]). At the ASCVD risk threshold recommended for statin eligibility for primary prevention (≥7.5%), the PCE provides the best net benefit. Replacing the PCE with the SCORE (high), SCORE (low), and FRS results in a 2.9%, 8.9%, and 17.1% further increase in statin eligibility, respectively. The PCE has the best discrimination and net benefit for primary ASCVD risk assessment in a US-based multiethnic cohort compared with the SCORE or the FRS. Copyright © 2016 Elsevier Inc. All rights reserved.
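The decision-curve computation defined above takes only a few lines. The sketch below applies it to simulated predicted risks and outcomes, not MESA data, at thresholds bracketing the 7.5% statin-eligibility cutoff.

```python
import numpy as np

rng = np.random.default_rng(5)
risk = rng.beta(1.2, 12.0, 5_000)         # simulated predicted 10-yr risks
event = rng.random(5_000) < risk          # simulated observed events

def net_benefit(threshold):
    treat = risk >= threshold
    tp = np.mean(treat & event)           # true-positive rate (per subject)
    fp = np.mean(treat & ~event)          # false-positive rate (per subject)
    w = threshold / (1.0 - threshold)     # weighting factor
    return tp - fp * w

for t in (0.05, 0.075, 0.10):
    print(t, round(net_benefit(t), 4))
```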
Köke, Albère; Hitters, Minou; Rijnders, Nieke; Pont, Menno
2017-01-01
Background A multi-centre RCT has shown that multidisciplinary rehabilitation treatment (MRT) is more effective than cognitive behavioural therapy (CBT) in reducing fatigue over the long term for patients with chronic fatigue syndrome (CFS), but evidence on its cost-effectiveness is lacking. Aim To compare the cost-effectiveness of MRT versus CBT for patients with CFS from a societal perspective. Methods A multi-centre randomized controlled trial comparing MRT with CBT was conducted among 122 patients with CFS, diagnosed using the 1994 criteria of the Centers for Disease Control and Prevention and aged between 18 and 60 years. Societal costs (healthcare costs, patient and family costs, and costs of productivity losses), fatigue severity, quality of life, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs) were measured over a follow-up period of one year. The main outcome of the cost-effectiveness analysis was fatigue measured by the Checklist Individual Strength (CIS). The main outcome of the cost-utility analysis was the QALY based on EuroQol-5D-3L utilities. Sensitivity analyses were performed, and uncertainty was assessed using cost-effectiveness acceptability curves and cost-effectiveness planes. Results The data of 109 patients (57 MRT and 52 CBT) were analyzed. MRT was significantly more effective in reducing fatigue at 52 weeks. The mean difference in QALYs between the treatments was not significant (0.09, 95% CI: -0.02 to 0.19). Total societal costs were significantly higher for patients allocated to MRT (a difference of €5,389, 95% CI: 2,488 to 8,091). MRT has a high probability of being the most cost-effective option when fatigue is the primary outcome; the ICER is €856 per unit of the CIS fatigue subscale. The results of the cost-utility analysis, using the QALY, indicate that CBT has a higher likelihood of being more cost-effective. Conclusions The probability of being more cost-effective is higher for MRT when fatigue is used as the primary outcome variable. Using the QALY as the primary outcome, CBT has the highest probability of being more cost-effective. Trial registration ISRCTN77567702. PMID:28574985
NASA Astrophysics Data System (ADS)
Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.
2018-05-01
A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each user, coupled with the low probability of predicting the source of a qubit transmitted in the channel, offers a strong security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design and ease of modifying the number of users are further advantages of the code over the Optical Orthogonal Codes (OOC) previously implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.
Sample Size Determination for Rasch Model Tests
ERIC Educational Resources Information Center
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that, in addition to the probability of an error of the first kind (Type I probability), the probability of an error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on an appropriate number of observations.…
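In the simplest normal-approximation setting (generic, not Rasch-specific), fixing both error probabilities determines the sample size. The sketch below assumes a two-sided test, an effect of 0.25 standard deviations, and conventional α and β; all values are illustrative.

```python
from scipy.stats import norm

alpha, beta, effect = 0.05, 0.20, 0.25    # assumed error levels, effect in SDs
n = ((norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)) / effect) ** 2
print(round(n))                           # observations needed (about 126)
```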
NASA Astrophysics Data System (ADS)
Xiong, Pei-Ying; Yu, Xu-Tao; Zhang, Zai-Chen; Zhan, Hai-Tao; Hua, Jing-Yu
2017-08-01
Quantum multi-hop teleportation is important in the field of quantum communication. In this study, we propose a quantum multi-hop communication model and a quantum routing protocol with multi-hop teleportation for wireless mesh backbone networks. Based on an analysis of quantum multi-hop protocols, a partially entangled Greenberger-Horne-Zeilinger (GHZ) state is selected as the quantum channel for the proposed protocol. Both quantum and classical wireless channels exist between two neighboring nodes along the route. With the proposed routing protocol, quantum information can be transmitted hop by hop from the source node to the destination node. Based on multi-hop teleportation over the partially entangled GHZ state, a quantum route is established with the minimum number of hops. The difference between our routing protocol and the classical one is that in the former, the processes used to find a quantum route and establish quantum channel entanglement occur simultaneously. The Bell state measurement results of each hop are piggybacked onto the quantum route finding information. This method reduces the total number of packets and the magnitude of air interface delay. The derivation of the establishment of a quantum channel between source and destination is also presented. The final success probability of quantum multi-hop teleportation in wireless mesh backbone networks was simulated and analyzed. Our research shows that quantum multi-hop teleportation in wireless mesh backbone networks through a partially entangled GHZ state is feasible.
Li, Wangzhe; Zhang, Xia; Yao, Jianping
2013-08-26
We report, to the best of our knowledge, the first realization of a multi-wavelength distributed feedback (DFB) semiconductor laser array with an equivalent chirped grating profile based on equivalent chirp technology. All the lasers in the laser array have an identical grating period with an equivalent chirped grating structure, which are realized by nonuniform sampling of the gratings. Different wavelengths are achieved by changing the sampling functions. A multi-wavelength DFB semiconductor laser array is fabricated and the lasing performance is evaluated. The results show that the equivalent chirp technology is an effective solution for monolithic integration of a multi-wavelength laser array with potential for large volume fabrication.
Sampling designs matching species biology produce accurate and affordable abundance indices
Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff
2013-01-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption that all individuals have an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some fail to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it is imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams, which provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically, or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample, and whether bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates and probabilities of capture and recapture. One grid (49 km2 cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%) but imprecise (CV 21.2%), and used the most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%) but most precise (CV 12.3%), with the least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher, than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design for indexing abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290
Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.
2005-01-01
Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks, limitations of extant breeding waterfowl survey programs in North America and logistical challenges and costs of conducting surveys in northern breeding regions, high winter area philopatry in some species and potential conservation implications, and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically-rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a 2-phase adaptive stratified strip transect sampling plan to estimate wintering population size of scoters, long-tailed ducks (Clangua hyemalis), and other sea ducks and provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase I transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation. We discuss advantages and limitations of this sampling design for estimating wintering sea duck abundance and mapping distribution and suggest improvements for future surveys.
NASA Astrophysics Data System (ADS)
Masud, M. B.; Khaliq, M. N.; Wheater, H. S.
2017-04-01
This study assesses projected changes to drought characteristics in Alberta, Saskatchewan and Manitoba, the prairie provinces of Canada, using a multi-regional climate model (RCM) ensemble available through the North American Regional Climate Change Assessment Program. Simulations considered include those performed with six RCMs driven by National Center for Environmental Prediction reanalysis II for the 1981-2003 period and those driven by four Atmosphere-Ocean General Circulation Models for the 1970-1999 and 2041-2070 periods (i.e. eleven current and the same number of corresponding future period simulations). Drought characteristics are extracted using two drought indices, namely the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI). Regional frequency analysis is used to project changes to selected 20- and 50-year regional return levels of drought characteristics for fifteen homogeneous regions, covering the study area. In addition, multivariate analyses of drought characteristics, derived on the basis of 6-month SPI and SPEI values, are developed using the copula approach for each region. Analysis of multi-RCM ensemble-averaged projected changes to mean and selected return levels of drought characteristics show increases over the southern and south-western parts of the study area. Based on bi- and trivariate joint occurrence probabilities of drought characteristics, the southern regions along with the central regions are found highly drought vulnerable, followed by the southwestern and southeastern regions. Compared to the SPI-based analysis, the results based on SPEI suggest drier conditions over many regions in the future, indicating potential effects of rising temperatures on drought risks. These projections will be useful in the development of appropriate adaptation strategies for the water and agricultural sectors, which play an important role in the economy of the study area.
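The SPI underlying these analyses maps an accumulated precipitation series through a fitted gamma CDF onto the standard normal. The sketch below computes a 6-month SPI on synthetic data; a production implementation would fit the gamma separately by calendar month and handle zero-precipitation cases.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
precip = rng.gamma(2.0, 30.0, 360)        # synthetic monthly precipitation

# 6-month accumulations, matching the SPI/SPEI time scale used above.
acc = np.convolve(precip, np.ones(6), mode="valid")

shape, loc, scale = stats.gamma.fit(acc, floc=0)
spi6 = stats.norm.ppf(stats.gamma.cdf(acc, shape, loc=loc, scale=scale))
print(spi6[:5].round(2))                  # values below zero indicate drought
```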
Rivera-Hidalgo, F; Shulman, J D; Beach, M M
2004-11-01
To determine point and annual prevalence of recurrent aphthous stomatitis (RAS). Reported prevalence of RAS in textbooks and much of the literature varies according to study location, patient selection and whether point prevalence (presence of lesions at examination) or period prevalence (history of lesions during a specified period) is reported. Many studies are based on non-probability samples, and this may contribute to significant variation in reported prevalence and in factors presumed to be associated with RAS. We analyzed data from the Third National Health and Nutrition Examination Survey, 1988-1994, a large United States probability sample, for RAS and covariates suggested in the literature using bivariate and multivariate logistic regression. Oral mucosal examinations were performed on 17 235 adults aged 17 years and older. Of these, 146 (0.89%) had at least one clinically apparent aphthous lesion. For annual (reported) prevalence, Whites (20.87%) and Mexican-Americans (12.88%) had a severalfold higher prevalence of RAS than Blacks (4.96%). Adults younger than 40 years of age had almost twice the prevalence (22.54%) of those older than 40 years (13.42%). Annual prevalence was significantly higher in Whites and Mexican-Americans (compared with Blacks), individuals 17-39 years of age, cigarette non-smokers, and those with a history of recurrent herpes labialis, while it was lower in males. Point prevalence was significantly higher in Whites, Mexican-Americans, individuals 17-39 years of age, cigarette non-smokers, and males.
The Dynamics of Internalizing and Externalizing Comorbidity Across the Early School Years
Willner, Cynthia J.; Gatzke-Kopp, Lisa M.; Bray, Bethany C.
2017-01-01
High rates of comorbidity are observed between internalizing and externalizing problems, yet the developmental dynamics of comorbid symptom presentations are not yet well understood. This study explored the developmental course of latent profiles of internalizing and externalizing symptoms across kindergarten, 1st, and 2nd grade. The sample consisted of 336 children from an urban, low-income community, selected based on relatively high (61%) or low (39%) aggressive/oppositional behavior problems at school entry (64% male; 70% African American, 20% Hispanic). Teachers reported on children’s symptoms in each year. An exploratory latent profile analysis of children’s scores on aggression/oppositionality, hyperactivity/inattention, anxiety, and social withdrawal symptom factors revealed 4 latent symptom profiles: comorbid (48% of the sample in each year), internalizing (19–23%), externalizing (21–22%), and well-adjusted (7–11%). The developmental course of these symptom profiles was examined using a latent transition analysis, which revealed remarkably high continuity in the comorbid symptom profile (89% from one year to the next) and moderately high continuity in both the internalizing and externalizing profiles (80% and 71%, respectively). Internalizing children had a 20% probability of remitting to the well-adjusted profile by the following year, whereas externalizing children had a 25% probability of transitioning to the comorbid profile. These results are consistent with the hypothesis that a common vulnerability factor contributes to developmentally stable internalizing-externalizing comorbidity, while also suggesting that some children with externalizing symptoms are at risk for subsequently accumulating internalizing symptoms. PMID:27739391
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While most nonuniform sampling schemes draw points fully at random from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
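The flavor of a deterministic gap schedule can be sketched as follows: instead of drawing each gap from a Poisson distribution whose rate grows sinusoidally along the grid (as in Poisson-gap sampling), use the expected gap directly, tuning a single scale parameter so the requested number of points fits the grid. This is an illustrative reconstruction, not the authors' published gap equation.

```python
import numpy as np

def deterministic_gap_sampling(grid, n):
    """Deterministic gap schedule in the spirit of Poisson-gap sampling: the
    gap after the i-th point equals the *average* Poisson gap, which grows
    sinusoidally toward the end of the grid where NMR signal has decayed.
    `lam` is shrunk iteratively until all n points fit inside the grid."""
    lam = (grid / n - 1) * np.pi / 2   # initial guess for the gap scale
    for _ in range(1000):
        idx, k = [], 0.0
        for i in range(n):
            idx.append(int(round(k)))
            k += 1 + lam * np.sin((i + 0.5) / n * np.pi / 2)
        if idx[-1] < grid:             # schedule fits the grid: done
            break
        lam *= 0.98                    # too long: shrink gaps and retry
    return np.array(idx)

print(deterministic_gap_sampling(grid=256, n=64)[:10])  # dense early, sparse late
```

Since every gap increment is at least 1, the rounded indices are strictly increasing, so the schedule never duplicates a grid point.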
Optimal Control via Self-Generated Stochasticity
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
The problem of finding global maxima of functionals has been examined. The mathematical roots of local maxima are the same as those of a much simpler problem: finding the global maximum of a multi-dimensional function. The second problem is instability: even if an optimal trajectory is found, there is no guarantee that it is stable. As a result, a fundamentally new approach to optimal control is introduced, based upon two new ideas. The first idea is to represent the functional to be maximized as the limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ordinary differential equations (ODEs) then become stochastic, and the sample of the solution with the largest value has the highest probability of appearing in the ODE simulation. The main advantages of the stochastic approach are that it is not sensitive to local maxima, the function to be maximized need only be integrable (not necessarily differentiable), and global equality and inequality constraints do not cause any significant obstacles. The second idea is to remove possible instability of the optimal solution by equipping the control system with a self-stabilizing device. Applications of the proposed methodology include optimizing the performance of NASA spacecraft as well as robots.
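A loosely analogous, much simpler construction (not the Liouville-equation formulation of the abstract) is a stochastic sampler whose stationary density is proportional to exp(beta * f(x)): as beta grows, probability mass concentrates on the global maximum, and f never needs to be differentiated. The sketch below uses an annealed Metropolis walk with entirely illustrative settings.

```python
import numpy as np

def anneal_max(f, x0, n_steps=20000, step=0.5, beta0=0.1, beta1=20.0):
    """Sample a density ~ exp(beta * f(x)) with slowly increasing beta; the
    best sample approaches the global maximum without using gradients."""
    rng = np.random.default_rng(1)
    x = np.array(x0, float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for t in range(n_steps):
        beta = beta0 + (beta1 - beta0) * t / n_steps
        y = x + rng.normal(0, step, x.shape)
        fy = f(y)
        if np.log(rng.random()) < beta * (fy - fx):   # Metropolis acceptance
            x, fx = y, fy
            if fx > best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Multimodal test function whose global maximum sits at the origin.
f = lambda x: np.cos(3 * x).sum() - 0.1 * (x ** 2).sum()
print(anneal_max(f, x0=[4.0, -3.0]))   # ends near (0, 0) despite local maxima
```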
NASA Astrophysics Data System (ADS)
Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo
2017-10-01
This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class being present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class, so the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to standard hashing methods.
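One simple way a multi-hash-code-matching scheme can work (the paper's exact scoring rule may differ; this is an illustrative variant) is to match each primitive-class code of the query to its nearest code of each archive image in Hamming distance and rank images by the average best-match distance:

```python
import numpy as np

def hamming(code_a, code_b):
    """Hamming distance between two binary codes (0/1 integer arrays)."""
    return int(np.count_nonzero(code_a != code_b))

def multi_code_distance(query_codes, image_codes):
    """Match every query primitive code to its closest code of the archive
    image; the mean of these best-match distances ranks the image."""
    d = [min(hamming(q, c) for c in image_codes) for q in query_codes]
    return sum(d) / len(d)

rng = np.random.default_rng(3)
query = [rng.integers(0, 2, 32) for _ in range(2)]     # 2 primitives, 32-bit codes
archive = {f"img{i}": [rng.integers(0, 2, 32) for _ in range(3)] for i in range(5)}
ranked = sorted(archive, key=lambda k: multi_code_distance(query, archive[k]))
print(ranked)   # archive images ordered from most to least similar
```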
Consensus for second-order multi-agent systems with position sampled data
NASA Astrophysics Data System (ADS)
Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng
2016-10-01
In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. Full-order and reduced-order observers with position sampled data are proposed, by which two kinds of sampled-data-based consensus protocols are constructed. With the proposed sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some necessary and sufficient consensus conditions based on the coupling parameters, the spectrum of the Laplacian matrix and the sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions degenerate to those of the continuous-time protocol, consistent with existing results for the continuous-time case. Finally, the effectiveness of the established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
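The key transformation, from a continuous-time system under zero-order-hold sampled control to an exactly equivalent discrete-time system, can be sketched as follows. This toy version assumes velocities are directly measured (whereas the paper reconstructs them with observers from position samples only), and the graph, gains, and sampling period are illustrative.

```python
import numpy as np

# Directed ring of 4 agents: each listens to its predecessor (contains a
# directed spanning tree, as required for consensus).
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A                    # graph Laplacian

h, alpha, beta = 0.1, 1.0, 2.0               # sampling period and gains (assumed)
x = np.array([3.0, -1.0, 0.5, 4.0])          # initial positions
v = np.array([0.2, -0.3, 0.0, 0.1])          # initial velocities

for k in range(400):
    u = -alpha * (L @ x) - beta * v          # control computed from data at t_k
    # Exact discretization of xdot = v, vdot = u with u held over one period:
    x = x + h * v + 0.5 * h ** 2 * u
    v = v + h * u

print(x.round(3), v.round(4))                # positions agree, velocities -> 0
```

Because the control is held constant between sampling instants, the sampled closed loop is exactly a linear discrete-time system, which is what allows consensus conditions to be stated in terms of the Laplacian spectrum and the sampling period h.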
Olea, Ricardo A.; Luppens, James A.
2012-01-01
There are multiple ways to characterize uncertainty in the assessment of coal resources, but not all of them are equally satisfactory. Increasingly, the tendency is toward borrowing from the statistical tools developed in the last 50 years for the quantitative assessment of other mineral commodities. Here, we briefly review the most recent of such methods and formulate a procedure for the systematic assessment of multi-seam coal deposits taking into account several geological factors, such as fluctuations in thickness, erosion, oxidation, and bed boundaries. A lignite deposit explored in three stages is used for validating models based on comparing a first set of drill holes against data from infill and development drilling. Results were fully consistent with reality, providing a variety of maps, histograms, and scatterplots characterizing the deposit and associated uncertainty in the assessments. The geostatistical approach was particularly informative in providing a probability distribution modeling deposit wide uncertainty about total resources and a cumulative distribution of coal tonnage as a function of local uncertainty.
IMPACT OF LEAD ACID BATTERIES AND CADMIUM STABILIZERS ON INCINERATOR EMISSIONS
The Waste Analysis Sampling, Testing and Evaluation (WASTE) Program is a multi-year, multi-disciplinary program designed to elucidate the source and fate of environmentally significant trace materials as a solid waste progresses through management processes. As part of the WASTE Prog...
Gwadz, Marya; Cleland, Charles M; Jenness, Samuel M; Silverman, Elizabeth; Hagan, Holly; Ritchie, Amanda S; Leonard, Noelle R; McCright-Gill, Talaya; Martinez, Belkis; Swain, Quentin; Kutnick, Alexandra; Sherpa, Dawa
2016-02-01
Annual HIV testing is recommended for high-risk populations in the United States, to identify HIV infections early and provide timely linkage to treatment. However, heterosexuals at high risk for HIV, due to their residence in urban areas of high poverty and elevated HIV prevalence, test for HIV less frequently than other risk groups, and late diagnosis of HIV is common. Yet the factors impeding HIV testing in this group, which is predominantly African American/Black and Latino/Hispanic, are poorly understood. The present study addresses this gap. Using a systematic community-based sampling method, venue-based sampling (VBS), we estimate rates of lifetime and recent (past year) HIV testing among high-risk heterosexuals (HRH), and explore a set of putative multi-level barriers to and facilitators of recent testing, by gender. Participants were 338 HRH African American/Black and Latino/Hispanic adults recruited using VBS, who completed a computerized structured assessment battery guided by the Theory of Triadic Influence, comprised of reliable/valid measures on socio-demographic characteristics, HIV testing history, and multi-level barriers to HIV testing. Logistic regression analysis was used to identify factors associated with HIV testing within the past year. Most HRH had tested at least once (94%), and more than half had tested within the past year (58%), but only 37% tested annually. In both men and women, the odds of recent testing were similar and associated with structural factors (better access to testing) and sexually transmitted infection (STI) testing and diagnosis. Thus VBS identified serious gaps in rates of annual HIV testing among HRH. Improvements in access to high-quality HIV testing and leveraging of STI testing are needed to increase the proportion of HRH testing annually for HIV. Such improvements could increase early detection of HIV, improve the long-term health of individuals, and reduce HIV transmission by increasing rates of viral suppression.
Sampling guidelines for oral fluid-based surveys of group-housed animals.
Rotolo, Marisa L; Sun, Yaxuan; Wang, Chong; Giménez-Lirola, Luis; Baum, David H; Gauger, Phillip C; Harmon, Karen M; Hoogland, Marlin; Main, Rodger; Zimmerman, Jeffrey J
2017-09-01
Formulas and software for calculating sample size for surveys based on individual animal samples are readily available. However, sample size formulas are not available for oral fluids and other aggregate samples that are increasingly used in production settings. Therefore, the objective of this study was to develop sampling guidelines for oral fluid-based porcine reproductive and respiratory syndrome virus (PRRSV) surveys in commercial swine farms. Oral fluid samples were collected in 9 weekly samplings from all pens in 3 barns on one production site beginning shortly after placement of weaned pigs. Samples (n=972) were tested by real-time reverse-transcription PCR (RT-rtPCR) and the binary results analyzed using a piecewise exponential survival model for interval-censored, time-to-event data with misclassification. Thereafter, simulation studies were used to study the barn-level probability of PRRSV detection as a function of sample size, sample allocation (simple random sampling vs fixed spatial sampling), assay diagnostic sensitivity and specificity, and pen-level prevalence. These studies provided estimates of the probability of detection by sample size and within-barn prevalence. Detection using fixed spatial sampling was as good as, or better than, simple random sampling. Sampling multiple barns on a site increased the probability of detection with the number of barns sampled. These results are relevant to PRRSV control or elimination projects at the herd, regional, or national levels, but the results are also broadly applicable to contagious pathogens of swine for which oral fluid tests of equivalent performance are available. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
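The core of the simulation study, the barn-level probability of detecting at least one positive oral fluid as a function of sample size, prevalence, and assay performance, can be sketched with a simple Monte Carlo loop. Pen counts, prevalence, and sensitivity/specificity below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_prob(n_pens, n_sampled, prevalence, se=0.95, sp=1.0, n_rep=10000):
    """Simulated probability that at least one sampled pen's oral fluid tests
    positive, given pen-level prevalence and assay sensitivity/specificity."""
    hits = 0
    for _ in range(n_rep):
        status = rng.random(n_pens) < prevalence             # truly positive pens
        pens = rng.choice(n_pens, n_sampled, replace=False)  # simple random sample
        p_pos = np.where(status[pens], se, 1 - sp)           # per-sample P(test +)
        hits += (rng.random(n_sampled) < p_pos).any()
    return hits / n_rep

for n in (2, 4, 6, 8):   # detection rises steeply with samples per barn
    print(n, round(detection_prob(48, n, prevalence=0.10), 3))
```

The same loop extends naturally to fixed spatial sampling (replace the random pen draw with evenly spaced pens) and to multiple barns per site, mirroring the comparisons described in the abstract.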
Wang, Peilong; Wang, Xiao; Zhang, Wei; Su, Xiaoou
2014-02-01
A novel and efficient method for the determination of multi-class compounds, including β-agonists, sedatives, nitroimidazoles and aflatoxins, in porcine formula feed, based on a fast "one-pot" extraction/multifunction impurity adsorption (MFIA) clean-up procedure, has been developed. 23 target analytes belonging to four different compound classes could be determined simultaneously in a single run. Conditions for the "one-pot" extraction were studied in detail. Under the optimized conditions, the multi-class compounds in porcine formula feed samples were extracted and purified in one step with methanol containing ammonia and adsorbents. The compounds in the extracts were purified using multiple types of adsorbent based on MFIA in one pot. Multi-walled carbon nanotubes were employed to improve clean-up efficiency. A Shield BEH C18 column was used to separate the 23 target analytes, followed by tandem mass spectrometry (MS/MS) detection using an electrospray ionization source in positive mode. Recovery studies were done at three fortification levels. Overall average recoveries of target compounds in porcine formula feed at each level were >51.6% based on matrix-fortified calibration, with coefficients of variation from 2.7% to 13.2% (n=6). The limit of determination (LOD) of these compounds in the porcine formula feed sample matrix was <5.0 μg/kg. This method was successfully applied in screening and confirmation of target drugs in >30 porcine formula feed samples. It was demonstrated that the integration of the MFIA protocol with the MS/MS instrument could serve as a valuable strategy for rapid screening and reliable confirmatory analysis of multi-class compounds in real samples. Copyright © 2013 Elsevier B.V. All rights reserved.
Lang-Halter, Evi; Schober, Steffen; Scherer, Siegfried
2016-09-01
During a 1-year longitudinal study, water, sediment and water plants from two creeks and one pond were sampled monthly and analyzed for the presence of Listeria species. A total of 90 % of 30 sediment samples, 84 % of 31 water plant samples and 67 % of 36 water samples tested positive. Generally, most-probable-number counts ranged between 1 and 40 g⁻¹; counts >110 cfu g⁻¹ were detected only occasionally. Species differentiation based on FT-IR spectroscopy and multiplex PCR of a total of 1220 isolates revealed L. innocua (46 %), L. seeligeri (27 %), L. monocytogenes (25 %) and L. ivanovii (2 %). Titers and species compositions were similar during all seasons. While the species distributions in sediments and associated Ranunculus fluitans plants appeared to be similar in both creeks, RAPD typing did not provide conclusive evidence that the populations of these environments were connected. It is concluded that (i) fresh-water sediments and water plants are populated by Listeria year-round, (ii) no clear preference for growth in habitats as different as sediments and water plants was found and (iii) the RAPD-based intraspecific biodiversity is high compared with the low population density.
NASA Astrophysics Data System (ADS)
Zhou, Peng; Chen, Xiang; Shang, Zhicai
2009-03-01
In this article, the concept of multi-conformation-based quantitative structure-activity relationship (MCB-QSAR) is proposed, and based upon it, we describe a new approach called side-chain conformational space analysis (SCSA) to model and predict protein-peptide binding affinities. In SCSA, multiple conformations (rather than the traditional single conformation) receive attention, and the statistical average information on the multiple conformations of side chains is determined using self-consistent mean field theory based upon a side-chain rotamer library. Thereby, enthalpy contributions (including electrostatic, steric and hydrophobic interactions and hydrogen bonding) and conformational entropy effects on binding are investigated in terms of the occurrence probabilities of residue rotamers. SCSA was then applied to a dataset of 419 HLA-A*0201 binding peptides, and the nonbonding contributions of each position in the peptide ligands were well determined. For the peptides, the hydrogen-bond and electrostatic interactions at the two ends are essential to binding specificity, van der Waals and hydrophobic interactions at all positions ensure strong binding affinity, and the loss of conformational entropy at anchor positions partially counteracts the other favorable nonbonding effects.
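The self-consistent mean-field step can be sketched on a toy system: each site's rotamer probabilities are Boltzmann weights of the self energy plus the probability-averaged pair energy with every other site, iterated to convergence. Energies, temperature, and system size below are illustrative only.

```python
import numpy as np

def scmf(E_self, E_pair, kT=0.6, n_iter=200):
    """Self-consistent mean-field rotamer probabilities: iterate Boltzmann
    updates p_s ~ exp(-(E_self + sum_t E_pair[s,t] @ p_t) / kT)."""
    n_sites = len(E_self)
    p = [np.full(len(e), 1.0 / len(e)) for e in E_self]   # uniform start
    for _ in range(n_iter):
        for s in range(n_sites):
            mean_field = E_self[s].copy()
            for t in range(n_sites):
                if t != s:
                    mean_field += E_pair[(s, t)] @ p[t]   # average over t's rotamers
            w = np.exp(-(mean_field - mean_field.min()) / kT)
            p[s] = w / w.sum()
    return p

# Toy system: 2 sites with 3 rotamers each (energies purely illustrative).
E_self = [np.array([0.0, 0.5, 1.0]), np.array([0.2, 0.0, 0.8])]
pair = np.array([[0.0, 1.0, 0.3],
                 [1.0, 0.0, 0.2],
                 [0.3, 0.2, 0.0]])
E_pair = {(0, 1): pair, (1, 0): pair.T}
probs = scmf(E_self, E_pair)
print([q.round(3) for q in probs])
# Conformational entropy per site follows directly: -sum(p * log(p)).
```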
NASA Astrophysics Data System (ADS)
Taner, M. U.; Ray, P.; Brown, C.
2016-12-01
Hydroclimatic nonstationarity due to climate change poses challenges for long-term water infrastructure planning in river basin systems. While designing strategies that are flexible or adaptive holds intuitive appeal, the development of well-performing strategies requires rigorous quantitative analysis that addresses uncertainties directly while making the best use of scientific information on the expected evolution of future climate. Multi-stage robust optimization (RO) offers a potentially effective and efficient technique for addressing the problem of staged basin-level planning under climate change; however, the necessity of assigning probabilities to future climate states or scenarios is an obstacle to implementation, given that methods to reliably assign such probabilities are not well developed. We present a method that overcomes this challenge by creating a bottom-up RO-based framework that decreases the dependency on probability distributions of future climate and instead employs them after optimization to aid selection among competing alternatives. The iterative process yields a vector of 'optimal' decision pathways, each under an associated set of probabilistic assumptions. In the final phase, the vector of optimal decision pathways is evaluated to identify the solutions that are least sensitive to the scenario probabilities and most likely conditional on the climate information. The framework is illustrated for the planning of new dam and hydro-agricultural expansion projects in the Niger River Basin over a 45-year planning period from 2015 to 2060.
Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo
2018-04-25
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal with existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second- and higher-harmonic frequencies, tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of the "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.
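The SSB step alone, shifting spectral content down so that a large characteristic frequency becomes a small parameter, can be sketched with the analytic-signal trick below. This is only the frequency-translation ingredient, under assumed test frequencies; the full FERSR method additionally filters individual components and runs them through a re-scaled SR system.

```python
import numpy as np

def ssb_shift(signal, f_shift, fs):
    """Shift every spectral component of a real signal down by f_shift (Hz)
    via the analytic (Hilbert) signal: z(t) * exp(-j*2*pi*f_shift*t)."""
    n = len(signal)
    spec = np.fft.fft(signal)
    spec[n // 2 + 1:] = 0.0            # drop negative frequencies
    spec[1:n // 2] *= 2.0              # double positive frequencies
    analytic = np.fft.ifft(spec)       # analytic signal
    t = np.arange(n) / fs
    return np.real(analytic * np.exp(-2j * np.pi * f_shift * t))

fs = 2000.0
t = np.arange(4000) / fs
x = np.sin(2 * np.pi * 180 * t) + 0.5 * np.sin(2 * np.pi * 360 * t)
y = ssb_shift(x, f_shift=170.0, fs=fs)         # 180/360 Hz -> 10/190 Hz

freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[np.argsort(np.abs(np.fft.rfft(y)))[-2:]])   # ~ [190., 10.]
```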
Wolkenstein, P; Machovcová, A; Szepietowski, J C; Tennstedt, D; Veraldi, S; Delarue, A
2018-02-01
Although acne vulgaris is a common skin disorder, limited epidemiological data exist specifically for European populations. To determine the prevalence of self-reported acne among young people in Europe and evaluate the effect of lifestyle on acne. We conducted a cross-sectional population-based online survey in representative samples of individuals aged 15-24 years in Belgium, Czech and Slovak Republics, France, Italy, Poland and Spain (n = 10 521), identified by a quota sampling method based on age, geographic location and socio-professional category. The overall adjusted prevalence of self-reported acne was 57.8% (95% confidence interval 56.9% to 58.7%). The rates per country ranged from 42.2% in Poland to 73.5% in the Czech and Slovak Republics. The prevalence of acne was highest at age 15-17 years and decreased with age. On multivariate analysis, a history of maternal or paternal acne was associated with an increased probability of having acne (odds ratio 3.077, 95% CI 2.743 to 3.451, and 2.700, 95% CI 2.391 to 3.049, respectively; both P < 0.0001), as was the consumption of chocolate (OR 1.276, 95% CI 1.094 to 1.488, for quartile 4 vs. quartile 1). Increasing age (OR 0.728, 95% CI 0.639 to 0.830 for age 21-24 years vs. 15-17 years) and smoking tobacco (OR 0.705, 95% CI 0.616 to 0.807) were associated with a reduced probability of acne. The overall prevalence of self-reported acne was high in adolescents/young adults in the European countries investigated. Heredity was the main risk factor for developing acne. © 2017 European Academy of Dermatology and Venereology.
Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis
NASA Astrophysics Data System (ADS)
Li, Y.
2013-05-01
The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made in the last few years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. In order to address bare-ground extraction from LIDAR point clouds of complex landscapes, a novel morphological filtering algorithm based on multi-gradient analysis is proposed in terms of the characteristics of LIDAR data distribution. Firstly, the point clouds are organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method, and objects are removed gradually by iteratively selecting points on which to perform an improved opening operation constrained by the multi-gradient. Fifteen sample datasets provided by ISPRS Working Group III/3 were employed to test the proposed filtering algorithm. These samples include environments that may make filtering difficult. Experimental results show that the proposed algorithm adapts well to various scenes, including urban and rural areas. Omission error, commission error and total error can be simultaneously controlled within a relatively small interval. The algorithm efficiently removes object points while preserving ground points to a great degree.
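For orientation, a classical morphological ground filter (the baseline the paper improves upon, not its multi-gradient algorithm) opens a gridded surface with a flat structuring element and flags points that rise above the opened surface by more than a slope-scaled threshold. Window size, slope threshold, and the synthetic scene below are illustrative.

```python
import numpy as np
from scipy import ndimage

def morphological_ground_filter(z, window=5, slope_thresh=0.5, cell=1.0):
    """Simplified morphological filter: grey opening removes objects narrower
    than the window; points left high above the opened surface are non-ground."""
    opened = ndimage.grey_opening(z, size=(window, window))
    height_thresh = slope_thresh * (window // 2) * cell
    return (z - opened) <= height_thresh        # True = ground candidate

rng = np.random.default_rng(0)
ground = np.add.outer(np.linspace(0, 5, 100), np.linspace(0, 3, 100))  # ramp
z = ground + rng.normal(0, 0.05, (100, 100))
z[40:44, 40:44] += 8.0                          # a small building block
mask = morphological_ground_filter(z)
print(mask[41, 41], mask[10, 10])               # False (object), True (ground)
```

The weakness motivating multi-gradient analysis is visible in the threshold line: a single window and slope threshold cannot simultaneously suit steep terrain and large flat roofs, which is why adaptive, gradient-aware constraints help in high-relief or hybrid scenes.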
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
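The diagnostic quantity here, the multi-order structure function S_p(l) = <|b(x + l) - b(x)|^p> and its scaling exponents zeta_p, is straightforward to compute; a monofractal field gives zeta_p proportional to p, while multifractality bends the zeta_p curve. The sketch below uses a synthetic 1-D monofractal field (spectral synthesis with Hurst exponent H = 1/3), not MHD simulation output.

```python
import numpy as np

def structure_functions(b, lags, orders):
    """Multi-order structure functions S_p(l) for a 1-D field; the scaling
    exponent zeta_p is the slope of log S_p versus log l."""
    return {p: [np.mean(np.abs(b[l:] - b[:-l]) ** p) for l in lags]
            for p in orders}

# Synthetic fractional-Brownian-like field via random-phase spectral synthesis.
rng = np.random.default_rng(5)
n, H = 2 ** 14, 1.0 / 3.0
k = np.fft.rfftfreq(n)[1:]
spec = k ** (-(H + 0.5)) * np.exp(2j * np.pi * rng.random(len(k)))
b = np.fft.irfft(np.concatenate(([0], spec)), n)

lags, orders = [2, 4, 8, 16, 32, 64], [1, 2, 3, 4]
S = structure_functions(b, lags, orders)
zeta = {p: np.polyfit(np.log(lags), np.log(S[p]), 1)[0] for p in orders}
print({p: round(z, 2) for p, z in zeta.items()})   # ~ p*H for a monofractal
```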
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi
Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with visual inspection by the clinician. The developed system can meet the customized requirements of hospitals and institutions.
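The conditional-probability decision can be sketched in its simplest form: per-stage parameter densities learned from clinician-scored epochs act as likelihoods, and the stage maximizing P(stage | feature) is chosen. Stage names, the single feature, and all densities below are assumed for illustration; the actual system uses multiple parameters and a richer knowledge database.

```python
import numpy as np
from scipy import stats

# Hypothetical expert knowledge database: per-stage Gaussian densities over a
# single feature (e.g., delta-band power) plus stage prior probabilities.
stages = {"Wake": (0.5, 0.2), "REM": (1.0, 0.3), "NREM": (2.0, 0.5)}
priors = {"Wake": 0.3, "REM": 0.2, "NREM": 0.5}

def classify(feature):
    """Pick the stage maximizing P(stage | feature) ~ P(feature | stage) P(stage)."""
    post = {s: stats.norm.pdf(feature, m, sd) * priors[s]
            for s, (m, sd) in stages.items()}
    z = sum(post.values())
    post = {s: p / z for s, p in post.items()}   # normalized posteriors
    return max(post, key=post.get), post

print(classify(1.8))   # -> ('NREM', {posterior probabilities per stage})
```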
Kern, Andrea K.; Harzhauser, Mathias; Soliman, Ali; Piller, Werner E.; Mandic, Oleg
2013-01-01
A high-resolution multi-proxy analysis was conducted on a 1.5-m-long core of Tortonian age (~ 10.5 Ma; Late Miocene) from Austria (Europe). The lake sediments were studied with a 1-cm resolution to detect all small-scale variations based on palynomorphs (pollen and dinoflagellate cysts), ostracod abundance, geochemistry (carbon and sulfur) and geophysics (magnetic susceptibility and natural gamma radiation). Based on an already established age model for a longer interval of the same core, this sequence can be limited to approx. two millennia of Late Miocene time with a resolution of ~ 13.7 years per sample. The previous study documented the presence of solar forcing, which was verified within various proxies on this 1.5-m core by a combination of REDFIT spectra and Gaussian filters. Significant repetitive signals ranged in two discrete intervals corresponding roughly to 55–82 and 110–123 years, fitting well within the lower and upper Gleissberg cycle ranges. Based on these results, the environmental changes along the 2000-year Late Miocene sequence are discussed. No major ecological turnovers are expected in this very short interval. Nonetheless, even within this brief time span, dinoflagellates document rapid changes between oligotrophic and eutrophic conditions, which are frequently coupled with lake stratification and dysoxic bottom waters. These phases prevented ostracods and molluscs from settling and promoted the activity of sulfur bacteria. The pollen record indicates rather stable wetland vegetation with a forested hinterland. Shifts in the pollen spectra can be mainly attributed to variations in transport mechanisms. These are represented by a few phases of fluvial input but mainly by changes in wind intensity and probably also wind direction. Such influence is most likely caused by solar cycles, leading to a change in source area for the input into the lake. Furthermore, these solar-induced variations seem to be modulated by longer solar cycles. The filtered data display comparable patterns and modulations, which seem to be forced by the 1000-year and 1500-year cycles. The 1000-year cycle modulated especially the lake surface proxies, whereas the 1500-year cycle is mainly reflected in hinterland proxies, indicating strong influence on transport mechanisms. PMID:23407808
Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H
2016-09-01
To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. We compared 148 MSM aged 18-64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010-2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%-95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Achenbach, Thomas M; Ivanova, Masha Y; Rescorla, Leslie A
2017-11-01
Originating in the 1960s, the Achenbach System of Empirically Based Assessment (ASEBA) comprises a family of instruments for assessing problems and strengths for ages 1½-90+ years. To provide an overview of the ASEBA, related research, and future directions for empirically based assessment and taxonomy. Standardized, multi-informant ratings of transdiagnostic dimensions of behavioral, emotional, social, and thought problems are hierarchically scored on narrow-spectrum syndrome scales, broad-spectrum internalizing and externalizing scales, and a total problems (general psychopathology) scale. DSM-oriented and strengths scales are also scored. The instruments and scales have been iteratively developed from assessments of clinical and population samples of hundreds of thousands of individuals. Items, instruments, scales, and norms are tailored to different kinds of informants for ages 1½-5, 6-18, 18-59, and 60-90+ years. To take account of differences between informants' ratings, parallel instruments are completed by parents, teachers, youths, adult probands, and adult collaterals. Syndromes and Internalizing/Externalizing scales derived from factor analyses of each instrument capture variations in patterns of problems that reflect different informants' perspectives. Confirmatory factor analyses have supported the syndrome structures in dozens of societies. Software displays scale scores in relation to user-selected multicultural norms for the age and gender of the person being assessed, according to ratings by each type of informant. Multicultural norms are derived from population samples in 57 societies on every inhabited continent. Ongoing and future research includes multicultural assessment of elders; advancing transdiagnostic progress and outcomes assessment; and testing higher order structures of psychopathology. Copyright © 2017 Elsevier Inc. All rights reserved.
Lake Superior Phytoplankton Characterization from the 2006 Probability Based Survey
We conducted a late summer probability-based survey of Lake Superior in 2006 which consisted of 52 sites stratified across 3 depth zones. As part of this effort, we collected composite phytoplankton samples from the epilimnion and the fluorescence maxima (Fmax) at 29 of the site...
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and gives the dominations for invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
Andrews, Derek S.; Gudbrandsen, Christina M.; Marquand, Andre F.; Ginestet, Cedric E.; Daly, Eileen M.; Murphy, Clodagh M.; Lai, Meng-Chuan; Lombardo, Michael V.; Ruigrok, Amber N. V.; Bullmore, Edward T.; Suckling, John; Williams, Steven C. R.; Baron-Cohen, Simon; Craig, Michael C.; Murphy, Declan G. M.
2017-01-01
Importance Autism spectrum disorder (ASD) is 2 to 5 times more common in male individuals than in female individuals. While the male preponderant prevalence of ASD might partially be explained by sex differences in clinical symptoms, etiological models suggest that the biological male phenotype carries a higher intrinsic risk for ASD than the female phenotype. To our knowledge, this hypothesis has never been tested directly, and the neurobiological mechanisms that modulate ASD risk in male individuals and female individuals remain elusive. Objectives To examine the probability of ASD as a function of normative sex-related phenotypic diversity in brain structure and to identify the patterns of sex-related neuroanatomical variability associated with low or high probability of ASD. Design, Setting, and Participants This study examined a cross-sectional sample of 98 right-handed, high-functioning adults with ASD and 98 matched neurotypical control individuals aged 18 to 42 years. A multivariate probabilistic classification approach was used to develop a predictive model of biological sex based on cortical thickness measures assessed via magnetic resonance imaging in neurotypical controls. This normative model was subsequently applied to individuals with ASD. The study dates were June 2005 to October 2009, and this analysis was conducted between June 2015 and July 2016. Main Outcomes and Measures Sample and population ASD probability estimates as a function of normative sex-related diversity in brain structure, as well as neuroanatomical patterns associated with low or high ASD probability in male individuals and female individuals. Results Among the 98 individuals with ASD, 49 were male and 49 female, with a mean (SD) age of 26.88 (7.18) years. Among the 98 controls, 51 were male and 47 female, with a mean (SD) age of 27.39 (6.44) years. The sample probability of ASD increased significantly with predictive probabilities for the male neuroanatomical brain phenotype. For example, biological female individuals with a more male-typic pattern of brain anatomy were significantly (ie, 3 times) more likely to have ASD than biological female individuals with a characteristically female brain phenotype (P = .72 vs .24, respectively; χ²(1) = 20.26; P < .001; difference in P values, 0.48; 95% CI, 0.29-0.68). This finding translates to an estimated variability in population prevalence from 0.2% to 1.3%, respectively. Moreover, the patterns of neuroanatomical variability carrying low or high ASD probability were sex specific (eg, in inferior temporal regions, where ASD has different neurobiological underpinnings in male individuals and female individuals). Conclusions and Relevance These findings highlight the need for considering normative sex-related phenotypic diversity when determining an individual’s risk for ASD and provide important novel insights into the neurobiological mechanisms mediating sex differences in ASD prevalence. PMID:28196230
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.
2016-05-03
Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical). IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated broad applicability and robustness, which enabled comprehensive proteomics, metabolomics, and lipidomics analyses from the same sample.
NASA Astrophysics Data System (ADS)
Desai, A. R.; Reed, D. E.; Dugan, H. A.; Loken, L. C.; Schramm, P.; Golub, M.; Huerd, H.; Baldocchi, A. K.; Roberts, R.; Taebel, Z.; Hart, J.; Hanson, P. C.; Stanley, E. H.; Cartwright, E.
2017-12-01
Freshwater ecosystems are hotspots of regional to global carbon cycling. However, significant sample biases limit our ability to quantify and predict these fluxes. For lakes, scaled flux estimates suffer from sampling biased toward 1) low-nutrient pristine lakes, 2) infrequent temporal sampling, 3) field campaigns limited to the growing season, and 4) replicates limited to near the center of the lake. While these biases partly reflect the realities of ecological sampling, there is a need to extend observations towards the large fraction of freshwater systems worldwide that are impaired by human activities and those facing significant interannual variability owing to climatic change. Also, for seasonally ice-covered lakes, much of the annual budget of carbon fluxes is thought to be explained by variation in the shoulder seasons of spring ice melt and fall turnover. Recent advances in automated, continuous multi-year temporal sampling coupled with rapid methods for spatial mapping of CO2 fluxes have strong potential to rectify these sampling biases. Here, we demonstrate these advances in a eutrophic seasonally ice-covered lake with an urban shoreline and agricultural watershed. Multiple years of half-hourly eddy covariance flux tower observations from two locations are coupled with frequent spatial samples of these fluxes and their drivers by speedboat, floating chamber fluxes, automated buoy-based monitoring of lake nutrient and physical profiles, and an ensemble of physical-ecosystem models. High primary productivity in the water column leads to an average net carbon sink during the growing season in much of the lake, but annual net carbon fluxes show the lake can act as an annual source or sink of carbon depending on the timing of spring and fall turnover. Trophic interactions and internal waves drive shorter-term variation while nutrients and biology drive seasonal variation. However, discrepancies remain among methods to quantify fluxes, requiring further investigation.
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).
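Two quantities underlying such designs are easy to illustrate: the cumulative detection probability p* = 1 - (1 - p)^k across k occasions, and the downward bias of naive occupancy (proportion of sites with detections) when detection is imperfect. The sketch below uses assumed values of occupancy psi and per-occasion detection p.

```python
import numpy as np

def p_star(p, k):
    """Probability of detecting a species at least once at an occupied site
    across k occasions, given per-occasion detection probability p."""
    return 1 - (1 - p) ** k

def naive_occupancy(psi=0.6, p=0.3, n_sites=100, k=3, n_rep=5000):
    """Simulate the naive occupancy estimate (sites with >=1 detection)."""
    rng = np.random.default_rng(11)
    est = []
    for _ in range(n_rep):
        occupied = rng.random(n_sites) < psi
        detected = occupied & (rng.random((n_sites, k)) < p).any(axis=1)
        est.append(detected.mean())
    return float(np.mean(est))

print(p_star(0.3, 3))      # ~0.66: three passes at p = 0.3
print(naive_occupancy())   # ~0.39, well below the true psi of 0.6
```

The gap between 0.39 and 0.6 is exactly the bias that occupancy models correct by estimating p jointly with psi, and adding a method with higher p for benthic species (electrofishing) raises p* fastest for the species that are hardest to see.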
The DEIMOS 10K Spectroscopic Survey Catalog of the COSMOS Field
NASA Astrophysics Data System (ADS)
Hasinger, G.; Capak, P.; Salvato, M.; Barger, A. J.; Cowie, L. L.; Faisst, A.; Hemmati, S.; Kakazu, Y.; Kartaltepe, J.; Masters, D.; Mobasher, B.; Nayyeri, H.; Sanders, D.; Scoville, N. Z.; Suh, H.; Steinhardt, C.; Yang, Fengwei
2018-05-01
We present a catalog of 10,718 objects in the COSMOS field, observed through multi-slit spectroscopy with the Deep Imaging Multi-Object Spectrograph (DEIMOS) on the Keck II telescope in the wavelength range ∼5500–9800 Å. The catalog contains 6617 objects with high-quality spectra (two or more spectral features), and 1798 objects with a single spectroscopic feature confirmed by the photometric redshift. For 2024 typically faint objects, we could not obtain reliable redshifts. The objects have been selected from a variety of input catalogs based on multi-wavelength observations in the field, and thus have a diverse selection function, which enables the study of the diversity in the galaxy population. The magnitude distribution of our objects is peaked at I_AB ∼ 23 and K_AB ∼ 21, with a secondary peak at K_AB ∼ 24. We sample a broad redshift distribution in the range 0 < z < 6, with one peak at z ∼ 1, and another one around z ∼ 4. We have identified 13 redshift spikes at z > 0.65 with chance probabilities < 4 × 10⁻⁴, some of which are clearly related to protocluster structures of sizes >10 Mpc. An object-to-object comparison with a multitude of other spectroscopic samples in the same field shows that our DEIMOS sample is among the best in terms of fraction of spectroscopic failures and relative redshift accuracy. We have determined the fraction of spectroscopic blends to be about 0.8% in our sample. This is likely a lower limit and at any rate well below the most pessimistic expectations. Interestingly, we find evidence for strong lensing of Lyα background emitters within the slits of 12 of our target galaxies, increasing their apparent density by about a factor of 4. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.
2016-01-01
Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerable dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with increasing number of dose levels. PMID:27435150
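The accuracy index can be sketched from the description above: it down-weights designs whose selections (or allocations) fall on doses whose true toxicity is far from the target. One common form, after Cheung's accuracy index, is shown below; the exact weighting used in this study may differ, and the dose-toxicity curve is invented for illustration.

```python
import numpy as np

def accuracy_index(true_tox, target, weights):
    """Accuracy index ~ 1 - K * sum(|p_d - target| * w_d) / sum(|p_d - target|),
    where w_d is the selection (or allocation) proportion at dose d; the index
    is 1 for perfect concentration at the true MTD and 0 for uniform weights."""
    true_tox, weights = np.asarray(true_tox), np.asarray(weights)
    d = np.abs(true_tox - target)
    return 1 - len(d) * np.sum(d * weights) / np.sum(d)

true_tox = [0.05, 0.12, 0.25, 0.40, 0.55]     # true curve; MTD is dose 3
good = [0.02, 0.13, 0.70, 0.13, 0.02]         # selections concentrated at MTD
poor = [0.20, 0.20, 0.20, 0.20, 0.20]         # uninformative selections
print(accuracy_index(true_tox, 0.25, good))   # ~0.70, close to 1
print(accuracy_index(true_tox, 0.25, poor))   # 0 by construction
```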
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity- and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
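In the Gaussian special case, combining the two conditional estimates via Bayes' rule reduces to precision-weighted averaging; the sketch below shows that core step with invented numbers (the paper's actual conditional PDFs are nonparametric and voxel-specific).

```python
def fuse_estimates(mu_int, var_int, mu_atlas, var_atlas):
    """Precision-weighted fusion of an intensity-based and an atlas-based
    estimate of electron density: the Gaussian case of multiplying the two
    conditional PDFs and normalizing."""
    w_int, w_atlas = 1.0 / var_int, 1.0 / var_atlas
    mu = (w_int * mu_int + w_atlas * mu_atlas) / (w_int + w_atlas)
    var = 1.0 / (w_int + w_atlas)
    return mu, var

# A voxel where intensity is ambiguous (air and bone are both dark on MRI)
# but the registered atlas strongly suggests bone; HU values illustrative.
print(fuse_estimates(mu_int=200.0, var_int=250000.0,
                     mu_atlas=700.0, var_atlas=40000.0))
```

The fused mean is pulled toward whichever source is more certain, which is exactly how the spatial prior resolves the air/bone ambiguity that defeats intensity-only approaches.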
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. We use here, and tailor to remotely-sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
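A common formulation of the MEVD (assuming, as is frequent in this literature, Weibull-distributed ordinary daily rainfall) averages each year's ordinary-event CDF raised to that year's number of wet days: F(x) = (1/T) * sum_j F_j(x)^(n_j). The sketch below applies it to synthetic data; the paper's tailoring to TRMM involves additional steps.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def mevd_cdf(x, yearly_params, yearly_counts):
    """Metastatistical extreme value CDF: average over years of the Weibull
    ordinary-event CDF raised to that year's number of wet days."""
    total = 0.0
    for (c, scale), n in zip(yearly_params, yearly_counts):
        total += stats.weibull_min.cdf(x, c, scale=scale) ** n
    return total / len(yearly_counts)

rng = np.random.default_rng(2)
params, counts = [], []
for _ in range(20):                               # 20 years of synthetic rain
    n = int(rng.integers(80, 140))                # wet days that year
    sample = rng.weibull(0.8, n) * 12.0           # ordinary daily totals (mm)
    c, _, scale = stats.weibull_min.fit(sample, floc=0)
    params.append((c, scale))
    counts.append(n)

# 50-year daily-rainfall quantile: solve F(x) = 1 - 1/50 for x.
x50 = brentq(lambda x: mevd_cdf(x, params, counts) - (1 - 1 / 50), 1, 1000)
print(round(x50, 1), "mm")
```

Because every wet day contributes to the yearly fits, the MEVD extracts far more information from a short record than the single annual maximum used by AM-based GEV fitting, which is what makes it attractive for short satellite archives.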
Estimation and simulation of multi-beam sonar noise.
Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag
2016-02-01
Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.
Pearson, Kristen Nicole; Kendall, William L.; Winkelman, Dana L.; Persons, William R.
2015-01-01
Our findings reveal evidence for skipped spawning in a potamodromous cyprinid, the humpback chub (HBC; Gila cypha). Using closed robust design mark-recapture models, we found that, on average, spawning HBC transition to the skipped spawning state with a probability of 0.45 (95% CRI (i.e. credible interval): 0.10, 0.80) and skipped spawners remain in the skipped spawning state with a probability of 0.60 (95% CRI: 0.26, 0.83), yielding an average spawning cycle of every 2.12 years, conditional on survival. As a result, migratory skipped spawners are unavailable for detection during annual sampling events. If availability is unaccounted for, survival and detection probability estimates will be biased. Therefore, we estimated annual adult survival probability (S), while accounting for skipped spawning, and found S remained reasonably stable throughout the study period, with an average of 0.75 (95% CRI: 0.66, 0.82; process variance σ² = 0.005), while skipped spawning probability was highly dynamic (σ² = 0.306). By improving understanding of HBC spawning strategies, conservation decisions can be based on less biased estimates of survival and a more informed population model structure.
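The reported 2.12-year average spawning cycle follows directly from the two transition probabilities if spawning status is treated as a two-state Markov chain: the mean recurrence time of the spawning state is the reciprocal of its stationary probability. A minimal check (state ordering assumed):

```python
import numpy as np

# Two-state chain, states = (spawn, skip), rows = current state:
# a spawner skips next year with probability 0.45; a skipper keeps skipping
# with probability 0.60 (posterior means from the abstract).
P = np.array([[0.55, 0.45],
              [0.40, 0.60]])

# Stationary distribution = left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(round(1 / pi[0], 2))   # mean recurrence time of spawning: 2.12 years
```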
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and of the evaluation of the probability of each type of event, matrices of input parameters for numerical simulation have been compiled. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. Probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of ca. 10 km around the volcanoes, including Naples. The results provide constraints for emergency plans in the Neapolitan area.
Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli
2017-09-12
Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies has become increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and strict sampling-speed requirements, communication systems capable of recognizing signals other than their own type are extremely rare. This work shows that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distributions is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
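A toy sketch of the distribution-based recognition idea: use the empirical RSSI histogram as a feature vector and train an off-the-shelf classifier. The traces, class set, and classifier choice are illustrative assumptions, not the paper's sample algorithm.

```python
# A minimal sketch of distinguishing technologies from RSSI distributions,
# using synthetic traces (class names and parameters are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def rssi_trace(tech, n=2000):
    # Crude stand-ins: a bursty (non-streaming) and a continuous (streaming) source.
    if tech == "wifi":      # idle floor with packet bursts -> bimodal RSSI
        on = rng.random(n) < 0.3
        return np.where(on, rng.normal(-55, 2, n), rng.normal(-95, 2, n))
    else:                   # "dvbt": continuous transmission -> unimodal RSSI
        return rng.normal(-70, 3, n)

def features(trace):
    # Use the empirical RSSI distribution (histogram) as the feature vector.
    hist, _ = np.histogram(trace, bins=30, range=(-110, -40), density=True)
    return hist

X = np.array([features(rssi_trace(t)) for t in ["wifi", "dvbt"] * 200])
y = np.array([0, 1] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```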
McGowan, C.P.; Millspaugh, J.J.; Ryan, M.R.; Kruse, C.D.; Pavelka, G.
2009-01-01
Estimating reproductive success for birds with precocial young can be difficult because chicks leave nests soon after hatching and individuals or broods can be difficult to track. Researchers often turn to estimating survival during the prefledging period and, though effective, mark-recapture based approaches are not always feasible due to cost, time, and animal welfare concerns. Using a threatened population of Piping Plovers (Charadrius melodus) that breeds along the Missouri River, we present an approach for estimating chick survival during the prefledging period using long-term (1993-2005), count-based, age-class data. We used a modified catch-curve analysis and data collected during three 5-day sampling periods near the middle of the breeding season. The approach has several ecological and statistical assumptions, and our analyses were designed to minimize the probability of violating those assumptions. For example, limiting the sampling periods to only 5 days gave reasonable assurance that population size was stable during the sampling period. Annual daily survival estimates ranged from 0.825 (SD = 0.03) to 0.931 (SD = 0.02) depending on year and sampling period, with these estimates assuming constant survival during the prefledging period and no change in the age structure of the population. The average probability of survival to fledging ranged from 0.126 to 0.188. Our results are similar to other published estimates for this species in similar habitats. This method of estimating chick survival may be useful for a variety of precocial bird species when mark-recapture methods are not feasible and only count-based age-class data are available. © 2009 Association of Field Ornithologists.
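A minimal sketch of the catch-curve logic described above, on synthetic counts: under constant survival and a stable age structure, regressing log counts on age recovers the daily survival rate (numbers are invented, not the study's data).

```python
# A minimal sketch of a catch-curve estimate of daily chick survival from
# count-based age-class data (synthetic numbers; not the study's data).
import numpy as np

rng = np.random.default_rng(3)
true_phi = 0.9                       # daily survival used to simulate counts
ages = np.arange(20)                 # chick age in days at the survey
expected = 500 * true_phi ** ages    # stable population, constant survival
counts = rng.poisson(expected)

# Regress log(counts) on age: the slope estimates log(daily survival).
slope, intercept = np.polyfit(ages, np.log(counts), 1)
phi_hat = np.exp(slope)
print(f"estimated daily survival: {phi_hat:.3f}")
print(f"survival to fledging (25 d): {phi_hat ** 25:.3f}")
```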
A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area
NASA Astrophysics Data System (ADS)
Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto
2015-04-01
In this study we present the results of a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Conventional PHA for tephra dispersal requires the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations of different volcanological and meteorological conditions through weights associated with their specific probabilities of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with a specific Probability Density Function, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows hazard to be quantified without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of tephra fall PHA and of its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions. The results show that PHA accounting for the whole natural variability is consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but is of smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
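A minimal sketch of the stratified sampling of volcanological inputs described above: draw one value per equal-probability stratum of each assumed marginal PDF (the distributions, ranges, and names below are illustrative assumptions, not the study's).

```python
# A minimal sketch of stratified (Latin-hypercube-style) sampling of
# volcanological inputs within a size class (assumed marginals, for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100  # simulations per size class

def stratified_uniform(n):
    # One sample per equal-probability stratum, randomly placed within it.
    return (np.arange(n) + rng.random(n)) / n

# Map stratified uniforms through assumed marginal PDFs, shuffling to decorrelate.
mass = stats.lognorm.ppf(rng.permutation(stratified_uniform(n)), s=1.0, scale=1e11)     # kg
column_h = stats.uniform.ppf(rng.permutation(stratified_uniform(n)), loc=5, scale=25)   # km
duration = stats.gamma.ppf(rng.permutation(stratified_uniform(n)), a=2.0, scale=2.0)    # h

for m, h, d in list(zip(mass, column_h, duration))[:3]:
    print(f"mass={m:.2e} kg, column={h:.1f} km, duration={d:.1f} h")
```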
Campollo, Octavio; Sheikhattari, Payam; Alvarez, Cesar; Toro-Guerrero, Jaime; Sanchez Avila, Hector; Wagner, Fernando A
2018-01-01
AIM To determine the prevalence of drug and substance abuse among high school students in Jalisco and its association with the severity of health, behavior and psychosocial problems in order to provide evidence for possible prevention and treatment needs. METHODS A multi-stage random sample of Jalisco high school students was given a paper-and-pencil survey based upon an adapted version of the drug use screening inventory (DUSI) (n = 24699; n = 2832). The DUSI showed adequate psychometric characteristics in this population. The statistical analyses accommodated the complex survey design with attention to unequal probability of selection and clustering of participants within schools and regions. RESULTS An estimated 44% of the students had smoked tobacco, one in five students was a current smoker, and one in four students used to smoke but had not smoked for one year or more. By contrast, 6.8% of the students reported having used marijuana, cocaine, or both. Behavioral problems, deviant peer affiliation, and troubled families were independently associated with drug use. One in two students who used tobacco or alcohol had used these drugs in the past year (46% and 54%, respectively), and one in four students who used marijuana or cocaine in their lifetime had used those drugs in the past year (28% in both cases). CONCLUSION The rates of cocaine use as well as the proportion of current users were higher than expected among high school students and indicate changing patterns of drug use in Mexico. These results corroborate that the general trend of drug use by youth in Mexico is increasing. Results from this study help us better understand the needs of at-risk youth and the need for new treatment and prevention strategies. PMID:29568730
An operational system of fire danger rating over Mediterranean Europe
NASA Astrophysics Data System (ADS)
Pinto, Miguel M.; DaCamara, Carlos C.; Trigo, Isabel F.; Trigo, Ricardo M.
2017-04-01
A methodology is presented to assess fire danger based on the probability of exceedance of prescribed thresholds of daily released energy. The procedure is developed and tested over Mediterranean Europe, defined by the latitude circles of 35 and 45°N and the meridians of 10°W and 27.5°E, for the period 2010-2016. The procedure involves estimating so-called static and daily probabilities of exceedance. For a given point, the static probability is estimated by the ratio of the number of daily fire occurrences releasing energy above a given threshold to the total number of occurrences inside a cell centred at the point. The daily probability of exceedance, which takes meteorological factors into account by means of the Canadian Fire Weather Index (FWI), is in turn estimated based on a Generalized Pareto distribution with the static probability and FWI as covariates of the scale parameter. The rationale of the procedure is that small fires, assessed by the static probability, have a weak dependence on weather, whereas larger fires strongly depend on concurrent meteorological conditions. It is shown that observed frequencies of exceedance over the study area for the period 2010-2016 match the probabilities estimated with the developed models for static and daily exceedance. Some (small) variability is, however, found between different years, suggesting that refinements can be made in future work by using a larger sample to further increase the robustness of the method. The developed methodology has the advantage of evaluating fire danger with the same criteria over the whole study area, making it a good basis for harmonizing fire danger forecasts and forest management studies. Research was performed within the framework of the EUMETSAT Satellite Application Facility for Land Surface Analysis (LSA SAF). Part of the methods developed and results obtained form the basis of a platform supported by The Navigator Company, which currently provides information on meteorological fire danger in Portugal to a wide range of users.
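A rough sketch of the daily exceedance computation described above, with the scale of a Generalized Pareto tail driven by FWI and the static probability; the link function and coefficients are invented for illustration.

```python
# A minimal sketch of a daily probability of exceedance from a Generalized
# Pareto tail whose scale grows with FWI (coefficients are assumed, not fitted).
import numpy as np
from scipy import stats

def daily_exceedance(fwi, p_static, energy_threshold,
                     shape=0.2, b0=1.0, b1=0.05):
    # Scale parameter with the static probability and FWI as covariates.
    scale = np.exp(b0 + b1 * fwi + np.log1p(p_static))
    # P(released energy > threshold) under the fitted GPD tail.
    return stats.genpareto.sf(energy_threshold, c=shape, scale=scale)

for fwi in (10, 30, 50):
    print(f"FWI={fwi}: P(exceed) = {daily_exceedance(fwi, 0.02, 20.0):.3f}")
```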
An Exemplar-Based Multi-View Domain Generalization Framework for Visual Recognition.
Niu, Li; Li, Wen; Xu, Dong; Cai, Jianfei
2018-02-01
In this paper, we propose a new exemplar-based multi-view domain generalization (EMVDG) framework for visual recognition that learns robust classifiers able to generalize well to an arbitrary target domain based on training samples with multiple types of features (i.e., multi-view features). In this framework, we aim to address two issues simultaneously. First, the distribution of training samples (i.e., the source domain) is often considerably different from that of testing samples (i.e., the target domain), so the performance of classifiers learnt on the source domain may drop significantly on the target domain. Moreover, the testing data are often unseen during the training procedure. Second, when the training data are associated with multi-view features, the recognition performance can be further improved by exploiting the relations among multiple types of features. To address the first issue, considering that fusing multiple SVM classifiers has been shown to enhance domain generalization ability, we build our EMVDG framework upon exemplar SVMs (ESVMs), in which a set of ESVM classifiers is learnt, each trained on one positive training sample and all the negative training samples. When the source domain contains multiple latent domains, the learnt ESVM classifiers are expected to group into multiple clusters. To address the second issue, we propose two approaches under the EMVDG framework based on the consensus principle and the complementary principle, respectively. Specifically, we propose an EMVDG_CO method that adds a co-regularizer to enforce consistency between the cluster structures of ESVM classifiers on different views, following the consensus principle. Inspired by multiple kernel learning, we also propose an EMVDG_MK method that fuses the ESVM classifiers from different views following the complementary principle. In addition, we extend our EMVDG framework to an exemplar-based multi-view domain adaptation (EMVDA) framework for the case when unlabeled target domain data are available during the training procedure. The effectiveness of our EMVDG and EMVDA frameworks for visual recognition is clearly demonstrated by comprehensive experiments on three benchmark data sets.
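A minimal sketch of the exemplar-SVM building block (one linear SVM per positive sample against all negatives, then score fusion); it omits the paper's multi-view clustering, co-regularization, and kernel fusion, and all data and parameters are illustrative.

```python
# A minimal sketch of the exemplar-SVM idea: one linear SVM per positive
# training sample against all negatives, then score fusion (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
pos = rng.normal(2.0, 1.0, size=(20, 10))   # positive training samples
neg = rng.normal(0.0, 1.0, size=(200, 10))  # negative training samples

esvms = []
for exemplar in pos:
    X = np.vstack([exemplar[None, :], neg])
    y = np.array([1] + [0] * len(neg))
    # Heavily weight the single positive so it is not swamped by negatives.
    clf = LinearSVC(C=1.0, class_weight={1: len(neg), 0: 1}).fit(X, y)
    esvms.append(clf)

test = rng.normal(2.0, 1.0, size=(5, 10))
# Fuse the exemplar scores by averaging the decision values.
scores = np.mean([clf.decision_function(test) for clf in esvms], axis=0)
print(np.round(scores, 2))
```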
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the probability that a response is missing depends on its own value. In the statistical literature, unlike the ignorable missing data problem, few papers on non-ignorable missing data are available beyond fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of real AIDS trial data shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on observed data only is biased.
Esmaeilzadeh, Safooreh; Allahverdipour, Hamid; Fathi, Behrouz; Shirzadi, Shayesteh
2016-01-01
Background: In contrast with developed countries, there is a progressive trend in HIV/AIDS and its modes of transmission in low socio-economic societies. The aim of this study was to explain youths' behavior in adopting HIV/AIDS-related preventive behaviors in a sample of Iranian university students, emphasizing fear appeal approaches and examining the role of the self-control trait in explaining adoption of danger or fear control processes based on the Extended Parallel Process Model (EPPM). Methods: A sample of 156 randomly selected university students in Jolfa, Iran was recruited into a predictive cross-sectional study using a researcher-designed questionnaire administered in a self-report manner. Sexual high-risk behaviors, the EPPM variables, the self-control trait, and general self-efficacy were measured within this theoretical framework. Results: Findings indicated that 31.3% of participants were in the fear control process versus 68.7% in the danger control process regarding HIV/AIDS, and also indicated the presence of multiple sexual partners and amphetamine consumption among the participants. A low self-control trait and low perceived susceptibility were significantly related to having a history of multiple sexual partners, while a high level of self-efficacy significantly increased the probability of condom use. Conclusion: The findings are indicative of the protective role of high levels of self-control, perceived susceptibility and self-efficacy on youths' high-risk behaviors and their preventive skills as well. PMID:26573026
Cooperative Localization for Multi-AUVs Based on GM-PHD Filters and Information Entropy Theory
Zhang, Lichuan; Wang, Tonghao; Xu, Demin
2017-01-01
Cooperative localization (CL) is considered a promising method for underwater localization with multiple autonomous underwater vehicles (multi-AUVs). In this paper, we propose a CL algorithm based on information entropy theory and the probability hypothesis density (PHD) filter, aiming to enhance the global localization accuracy of the follower. In the proposed framework, the follower carries lower-cost navigation systems, whereas the leaders carry better ones. Meanwhile, the leaders acquire the followers' observations, including both measurements and clutter. Then, the PHD filters are run on the leaders and the results are communicated to the followers. The followers then perform a weighted summation based on all received messages and obtain a final positioning result. Based on information entropy theory and the PHD filter, the follower is able to acquire precise knowledge of its position. PMID:28991191
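A simplified stand-in for the entropy-based weighting described above: weight each leader's position estimate by the inverse of an entropy-like term computed from its covariance (the weighting rule and numbers are assumptions for illustration).

```python
# A minimal sketch of entropy-weighted fusion of several leaders' position
# estimates: lower-entropy (more certain) estimates get larger weights.
import numpy as np

# Each leader reports a position estimate and a covariance for the follower.
estimates = np.array([[10.2, 5.1], [9.8, 5.3], [10.5, 4.8]])
covs = [np.diag([0.5, 0.5]), np.diag([0.2, 0.3]), np.diag([1.0, 1.2])]

# The differential entropy of a 2-D Gaussian grows with the log-determinant of
# its covariance; weight each estimate by the inverse of this entropy term.
ent = np.array([0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(C))
                for C in covs])
w = (1.0 / ent) / np.sum(1.0 / ent)

fused = w @ estimates
print(f"weights: {np.round(w, 3)}, fused position: {np.round(fused, 2)}")
```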
Multi-Sensory Intervention Observational Research
ERIC Educational Resources Information Center
Thompson, Carla J.
2011-01-01
An observational research study based on sensory integration theory was conducted to examine the observed impact of student selected multi-sensory experiences within a multi-sensory intervention center relative to the sustained focus levels of students with special needs. A stratified random sample of 50 students with severe developmental…
Social networks and health-related quality of life: a population based study among older adults.
Gallegos-Carrillo, Katia; Mudgal, Jyoti; Sánchez-García, Sergio; Wagner, Fernando A; Gallo, Joseph J; Salmerón, Jorge; García-Peña, Carmen
2009-01-01
To examine the relationship between components of social networks and health-related quality of life (HRQL) in older adults with and without depressive symptoms. Comparative cross-sectional study with data from the cohort study 'Integral Study of Depression', carried out in Mexico City during 2004. The sample was selected through a multi-stage probability design. HRQL was measured with the SF-36. Geriatric Depression Scale (GDS) and the Short Anxiety Screening Test (SAST) determined depressive symptoms and anxiety. T-test and multiple linear regressions were conducted. Older adults with depressive symptoms had the lowest scores in all HRQL scales. A larger network of close relatives and friends was associated with better HRQL on several scales. Living alone did not significantly affect HRQL level, in either the study or comparison group. A positive association between some components of social networks and good HRQL exists even in older adults with depressive symptoms.
ERIC Educational Resources Information Center
Herek, Gregory M.
2009-01-01
Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or…
Prevalence and Determinants of Suboptimal Vitamin D Levels in a Multiethnic Asian Population.
Man, Ryan Eyn Kidd; Li, Ling-Jun; Cheng, Ching-Yu; Wong, Tien Yin; Lamoureux, Ecosse; Sabanayagam, Charumathi
2017-03-22
This population-based cross-sectional study examined the prevalence and risk factors of suboptimal vitamin D levels (assessed using circulating 25-hydroxycholecalciferol (25(OH)D)) in a multi-ethnic sample of Asian adults. Plasma 25(OH)D concentrations of 1139 Chinese, Malay and Indian adults (40-80 years) were stratified into normal (≥30 ng/mL) and suboptimal (including insufficiency and deficiency, <30 ng/mL) based on the 2011 Endocrine Society Clinical Practice Guidelines. Logistic regression models were used to assess the associations of demographic, lifestyle and clinical risk factors with the outcome. Of the 1139 participants, 25(OH)D concentration was suboptimal in 76.1%. In multivariable models, age ≤65 years (compared to age >65 years), Malay and Indian ethnicities (compared to Chinese ethnicity), and higher body mass index, HbA1c, education and income levels were associated with suboptimal 25(OH)D concentration (p < 0.05). In a population-based sample of Asian adults, approximately 75% had suboptimal 25(OH)D concentration. Targeted interventions and stricter reinforcement of existing guidelines for vitamin D supplementation are needed for groups at risk of vitamin D insufficiency/deficiency.
The geologic setting of the Luna 16 landing site
McCauley, J.F.; Scott, D.H.
1972-01-01
The Luna 16 landing site is similar in its geologic setting to those of Apollos 11 and 12. All three sites are located on basaltic mare fill which occurs mostly within multi-ring basins formed by impact earlier in the Moon's history. A regolith developed by impact bombardment is present at each of these sites. The regolith is composed mostly of locally derived volcanic material, but also contains exotic fine fragments that have been ballistically transported into the landing sites by large impact events which formed craters such as Langrenus and Copernicus. These exotic fragments probably consist mostly of earlier reworked multi-ring basin debris and, although not directly traceable to individual sources, they do represent a good statistical sample of the composition of most of the premare terra regions. © 1972.
Multi-MHz FDML OCT: snapshot retinal imaging at 6.7 million axial-scans per second
NASA Astrophysics Data System (ADS)
Klein, Thomas; Wieser, Wolfgang; André, Raphael; Pfeiffer, Tom; Eigenwillig, Christoph M.; Huber, Robert
2012-01-01
We demonstrate the acquisition of densely sampled wide-field 3D OCT datasets of the human retina in 0.3 s. This performance is achieved with a multi-MHz Fourier domain mode-locked (FDML) laser source operating at 1050 nm. A two-beam setup doubles the 3.35 MHz laser sweep rate to 6.7 MHz, which is 16× faster than results achieved with any non-FDML source used for retinal OCT. We discuss two main benefits of these high line rates: First, large datasets over an ultra-wide field of view can be acquired with a low probability of distortions. Second, even if eye movements occur, the scan rate is now high enough to directly correct even the fastest saccades without loss of information.
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
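A minimal sketch of using a 0-1 knapsack to select base classifiers, maximizing summed validation accuracy under a 'redundancy budget'; the value/weight definitions are illustrative assumptions, not the paper's tailored formulation.

```python
# A minimal sketch of 0-1 knapsack classifier selection: maximize summed
# accuracy subject to a redundancy budget, so that highly correlated
# classifiers are not all chosen (values/weights are illustrative).
def knapsack(values, weights, capacity):
    # Classic DP over items and integer capacities; returns chosen indices.
    n = len(values)
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                cand = best[i - 1][c - weights[i - 1]] + values[i - 1]
                if cand > best[i][c]:
                    best[i][c] = cand
    chosen, c = [], capacity
    for i in range(n, 0, -1):        # backtrack to recover the chosen items
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return chosen[::-1]

accuracy = [0.71, 0.69, 0.74, 0.66, 0.72]   # value: validation accuracy
redundancy = [3, 2, 4, 1, 3]                # weight: similarity to the pool
print(knapsack(accuracy, redundancy, capacity=7))
```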
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-01
... a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-01
..., medium, or high probability of retiring early. The determination is based on the year a participant would... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
ERIC Educational Resources Information Center
Kustos, Paul Nicholas
2010-01-01
Student difficulty in the study of probability arises in intuitively-based misconceptions derived from heuristics. One such heuristic, the one of note for this research study, is that of representativeness, in which an individual informally assesses the probability of an event based on the degree to which the event is similar to the sample from…
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calyam, Prasad
2014-09-15
The next-generation high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure and analyze performance, and thereby provide "network awareness" to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.
Age of majority assessment in Dutch individuals based on Cameriere's third molar maturity index.
Boyacıoğlu Doğru, Hatice; Gulsahi, Ayşe; Çehreli, Sevi Burçak; Galić, Ivan; van der Stelt, Paul; Cameriere, Roberto
2018-01-01
Radiological examination of the third molar is used in living individuals for estimation of chronological age, especially in late adolescence. The aim of this study was to assess the application of Cameriere's third molar maturity index (I3M) to determine whether an individual is 18 years or older (adult) or younger than 18 years (minor) in a sample of Dutch individuals. The sample consisted of panoramic images of 360 individuals aged between 14 and 22 years. Three observers performed the measurements. Gender was not statistically significant in discriminating adults and minors. The highest value of the Youden index of the receiver operating characteristic curve analysis was obtained for I3M < 0.08 in discriminating individuals as minor or adult. The specificity (Sp) and sensitivity (Se) for females were 96.3% and 72.7%, respectively. The Sp and Se for males were 95.0% and 84.0%, respectively. The probabilities of correctly classified individuals were 83.3% and 88.9%, and the Bayes post-test probability was 96.3% and 95.7% in females and males, respectively. The results showed that the specific cut-off point of I3M < 0.08 may be a useful and reliable method for adult age assessment in a Dutch population. Copyright © 2017 Elsevier B.V. All rights reserved.
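The Bayes post-test probability can be reproduced from the reported sensitivity and specificity once a prior proportion of adults is fixed; the priors below are illustrative, since the exact test prevalence is not restated here.

```python
# A quick sketch of the Bayes post-test probability from sensitivity and
# specificity, using the reported female values and assumed priors.
def post_test_probability(se, sp, prior):
    # P(adult | I3M < 0.08) by Bayes' rule.
    return se * prior / (se * prior + (1 - sp) * (1 - prior))

se, sp = 0.727, 0.963        # reported sensitivity/specificity for females
for prior in (0.4, 0.5, 0.6):
    print(f"prior={prior:.1f}: post-test P(adult) = "
          f"{post_test_probability(se, sp, prior):.3f}")
```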
40 CFR 51.1009 - Reasonable further progress (RFP) requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... between the base year and the attainment year. (e) For a multi-State nonattainment area, the RFP plans for each State represented in the nonattainment area must demonstrate RFP on the basis of common multi... State in a multi-State nonattainment area must ensure that the sources within its boundaries comply with...
Migration confers winter survival benefits in a partially migratory songbird
Zúñiga, Daniel; Gager, Yann; Kokko, Hanna; Fudickar, Adam Michael; Schmidt, Andreas; Naef-Daenzer, Beat; Wikelski, Martin
2017-01-01
To evolve and to be maintained, seasonal migration, despite its risks, has to yield fitness benefits compared with year-round residency. Empirical data supporting this prediction have remained elusive in the bird literature. To test the fitness-related benefits of migration, we studied a partially migratory population of European blackbirds (Turdus merula) over 7 years. Using a combination of capture-mark-recapture and radio telemetry, we compared survival probabilities between migrants and residents estimated by multi-event survival models, showing that migrant blackbirds had a 16% higher probability of surviving the winter than residents. A subsequent modelling exercise revealed that residents would need 61.25% higher breeding success than migrants to outweigh the survival costs of residency. Our results support theoretical predictions that migration must confer survival benefits to evolve, and thus provide empirical evidence to help understand the evolution and maintenance of migration. PMID:29157357
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of traffic heterogeneity with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). The SLZ has an important effect in heterogeneous traffic, particularly in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region, Pac increases with an increasing fraction of fast vehicles (Ff). In the non-deterministic case, the SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs on the road decreases the risk of collision in the congested phase.
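A minimal sketch of a Nagel-Schreckenberg ring road with a single speed limit zone, to make the model setup concrete; parameters are illustrative and the sketch omits the accident-probability bookkeeping of the cited study.

```python
# A minimal Nagel-Schreckenberg ring road with one speed limit zone (SLZ);
# parameters are illustrative, not those of the cited study.
import numpy as np

rng = np.random.default_rng(6)
L, N, VMAX, V_SLZ, P_BRAKE, STEPS = 200, 40, 5, 2, 0.2, 500
SLZ = range(80, 120)                         # cells where the speed limit applies

pos = np.sort(rng.choice(L, size=N, replace=False))
vel = rng.integers(0, VMAX + 1, size=N)

for _ in range(STEPS):
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L  # empty cells to the car ahead
    vel = np.minimum(vel + 1, VMAX)          # acceleration
    limit = np.where(np.isin(pos, SLZ), V_SLZ, VMAX)
    vel = np.minimum(vel, limit)             # speed limit zone
    vel = np.minimum(vel, gaps)              # collision avoidance
    brake = rng.random(N) < P_BRAKE
    vel = np.maximum(vel - brake, 0)         # random braking
    pos = (pos + vel) % L

print(f"mean speed after {STEPS} steps: {vel.mean():.2f} cells/step")
```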
Veenstra, David L; Guzauskas, Gregory F; Villa, Kathleen F; Boudreau, Denise M
2017-05-01
A Phase-3 study of defibrotide compared with historical controls demonstrated a 23% improvement in 100-day survival post-hematopoietic stem cell transplantation (HSCT) among patients with veno-occlusive disease with multi-organ dysfunction (VOD with MOD). The objective was to estimate the budget impact and cost-effectiveness of introducing defibrotide to a transplant center. The authors developed a budget impact model from the perspective of a bone-marrow transplant center. It was estimated that 2.3% of adults and 4.2% of children would develop VOD with MOD following HSCT based on a retrospective hospital database analysis, and the effect that treating patients with defibrotide would have on costs was estimated for adult and pediatric centers. A cost-utility analysis (CUA) was also developed to capture the long-term cost-effectiveness of defibrotide. Projected life expectancies in the two groups were estimated based on trial data, transplant registry data, studies of long-term survival among HSCT patients, and US population life tables. There was an estimated 3% increase ($330,706) per year in total adult transplantation center costs associated with adopting defibrotide, and a <1% increase ($106,385) for pediatric transplant centers, assuming 100 transplants per year. In the CUA, the lifetime increase in cost per patient was $106,928, life expectancy increased by 3.74 years, and quality-adjusted life-years (QALYs) increased by 2.24. The incremental cost-effectiveness ratio (ICER) was $47,736 per QALY gained, with an 88% probability that defibrotide was cost-effective at a $100,000/QALY threshold. The budget impact of defibrotide for a transplant center is relatively modest compared to the overall cost of transplantation. Defibrotide provides an important survival advantage for VOD with MOD patients, and the life-years gained make defibrotide highly cost-effective.
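The reported ICER follows directly from the abstract's incremental cost and QALY figures:

```python
# A quick check of the reported incremental cost-effectiveness ratio (ICER)
# from the abstract's lifetime cost and QALY increments.
delta_cost = 106_928     # incremental lifetime cost per patient (USD)
delta_qaly = 2.24        # incremental quality-adjusted life-years

icer = delta_cost / delta_qaly
print(f"ICER = ${icer:,.0f} per QALY gained")   # ~ $47,736, as reported
```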
A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews
NASA Astrophysics Data System (ADS)
Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi
2018-03-01
Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and multi-output model based on a bi-directional recurrent neural network, which considers both the semantic and the lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. Lexical information is then incorporated via attention over the softmax output of the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels, as features, with the hidden layer to capture crucial correlations between output labels. Experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.
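A loose PyTorch sketch of one possible reading of this architecture: a Bi-GRU sentence encoder and an auxiliary-label head whose softmax probabilities are fed back as features to the multi-label output head. Dimensions, pooling, and names are assumptions; the paper's attention mechanism over part-of-speech representations is not reproduced.

```python
# A minimal PyTorch sketch of a Bi-GRU encoder with a multi-label head and
# auxiliary-label probabilities fed back as features (assumed reading).
import torch
import torch.nn as nn

class BiGRUMultiLabel(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64, n_aux=6, n_labels=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bigru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.aux_head = nn.Linear(2 * hidden, n_aux)       # auxiliary labels
        # The final head sees the sentence encoding plus auxiliary probabilities.
        self.out_head = nn.Linear(2 * hidden + n_aux, n_labels)

    def forward(self, tokens):
        h, _ = self.bigru(self.embed(tokens))
        sent = h.mean(dim=1)                               # mean-pooled encoding
        aux_prob = torch.softmax(self.aux_head(sent), dim=-1)
        logits = self.out_head(torch.cat([sent, aux_prob], dim=-1))
        return torch.sigmoid(logits), aux_prob             # multi-label probs

model = BiGRUMultiLabel()
probs, aux = model(torch.randint(0, 5000, (8, 20)))       # batch of 8 reviews
print(probs.shape, aux.shape)
```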
Probabilities and Predictions: Modeling the Development of Scientific Problem-Solving Skills
ERIC Educational Resources Information Center
Stevens, Ron; Johnson, David F.; Soller, Amy
2005-01-01
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations, the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative…
We conducted a probability-based sampling of Lake Superior in 2006 and compared the zooplankton biomass estimate with laser optical plankton counter (LOPC) predictions. The net survey consisted of 52 sites stratified across three depth zones (0-30, 30-150, >150 m). The LOPC tow...
Probabilistic flood damage modelling at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied to 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand against official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties in damage estimation remain high. The significant advantage of the probabilistic flood loss estimation model BT-FLEMO is thus that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
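A minimal sketch of the bagging-decision-trees idea behind BT-FLEMO: the spread of per-tree predictions provides a probability distribution of estimated damage rather than a point estimate (features and data are invented stand-ins).

```python
# A minimal sketch of a bagging-decision-tree loss model that yields a
# predictive distribution (features are invented stand-ins for water depth etc.).
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([rng.uniform(0, 3, n),      # water depth (m)
                     rng.integers(1, 4, n)])    # building type (coded)
loss = 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 5, n)   # synthetic damage (%)

# BaggingRegressor uses a decision tree as its default base estimator.
model = BaggingRegressor(n_estimators=200, random_state=0).fit(X, loss)

# Per-tree predictions approximate the probability distribution of the estimate.
x_new = np.array([[1.5, 2]])
per_tree = np.array([t.predict(x_new)[0] for t in model.estimators_])
print(f"mean={per_tree.mean():.1f}%, 5-95% range="
      f"[{np.percentile(per_tree, 5):.1f}, {np.percentile(per_tree, 95):.1f}]%")
```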
Using climate model simulations to assess the current climate risk to maize production
NASA Astrophysics Data System (ADS)
Kent, Chris; Pope, Edward; Thompson, Vikki; Lewis, Kirsty; Scaife, Adam A.; Dunstone, Nick
2017-05-01
The relationship between the climate and agricultural production is of considerable importance to global food security. However, there has been relatively little exploration of climate-variability related yield shocks. The short observational yield record does not adequately sample natural inter-annual variability thereby limiting the accuracy of probability assessments. Focusing on the United States and China, we present an innovative use of initialised ensemble climate simulations and a new agro-climatic indicator, to calculate the risk of severe water stress. Combined, these regions provide 60% of the world’s maize, and therefore, are crucial to global food security. To probe a greater range of inter-annual variability, the indicator is applied to 1400 simulations of the present day climate. The probability of severe water stress in the major maize producing regions is quantified, and in many regions an increased risk is found compared to calculations from observed historical data. Analysis suggests that the present day climate is also capable of producing unprecedented severe water stress conditions. Therefore, adaptation plans and policies based solely on observed events from the recent past may considerably under-estimate the true risk of climate-related maize shocks. The probability of a major impact event occurring simultaneously across both regions—a multi-breadbasket failure—is estimated to be up to 6% per decade and arises from a physically plausible climate state. This novel approach highlights the significance of climate impacts on crop production shocks and provides a platform for considerably improving food security assessments, in the present day or under a changing climate, as well as development of new risk based climate services.
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flowshop scheduling problem with multiple objectives. Flowshop scheduling represents the situation in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flowshop scheduling models continually evolve to represent real production systems more accurately. Since flowshop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flowshop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multiple objectives, we use Pareto optimality (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set yields a higher probability of finding the optimal solution and lies closer to the optimal solution.
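A minimal sketch of a discrete PSO for permutations, where particles move toward personal and global bests via probabilistic swaps; this is one common discretization, not the paper's probability-transition-matrix mechanism, and the single toy objective stands in for the multi-objective Pareto handling.

```python
# A minimal discrete PSO over job permutations (illustrative discretization).
import random

random.seed(0)
JOBS = list(range(8))
proc = [4, 2, 7, 3, 5, 1, 6, 2]          # toy processing times

def cost(perm):
    t, total = 0, 0
    for j in perm:
        t += proc[j]
        total += t                        # total completion time (toy objective)
    return total

def move_toward(perm, target, p=0.3):
    # With probability p per position, swap so the particle matches the target.
    perm = perm[:]
    for i in range(len(perm)):
        if perm[i] != target[i] and random.random() < p:
            j = perm.index(target[i])
            perm[i], perm[j] = perm[j], perm[i]
    return perm

swarm = [random.sample(JOBS, len(JOBS)) for _ in range(20)]
pbest = swarm[:]
gbest = min(swarm, key=cost)
for _ in range(100):
    for k, particle in enumerate(swarm):
        particle = move_toward(particle, pbest[k])
        particle = move_toward(particle, gbest)
        swarm[k] = particle
        if cost(particle) < cost(pbest[k]):
            pbest[k] = particle
    gbest = min(pbest, key=cost)
print(gbest, cost(gbest))
```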
Making Sense of 'Big Data' in Provenance Studies
NASA Astrophysics Data System (ADS)
Vermeesch, P.
2014-12-01
Huge online databases can be 'mined' to reveal previously hidden trends and relationships in society. One could argue that sedimentary geology has entered a similar era of 'Big Data', as modern provenance studies routinely apply multiple proxies to dozens of samples. Just like the Internet, sedimentary geology now requires specialised statistical tools to interpret such large datasets. These can be organised on three levels of progressively higher order:
A single sample: The most effective way to reveal the provenance information contained in a representative sample of detrital zircon U-Pb ages is with probability density estimators such as histograms and kernel density estimates. The widely popular 'probability density plots' implemented in IsoPlot and AgeDisplay compound analytical uncertainty with geological scatter and are therefore invalid.
Several samples: Multi-panel diagrams comprising many detrital age distributions or compositional pie charts quickly become unwieldy and uninterpretable. For example, if there are N samples in a study, then the number of pairwise comparisons between samples increases quadratically as N(N-1)/2. This is simply too much information for the human eye to process. To solve this problem, it is necessary to (a) express the 'distance' between two samples as a simple scalar and (b) combine all N(N-1)/2 such values in a single two-dimensional 'map', grouping similar and pulling apart dissimilar samples. This can be easily achieved using simple statistics-based dissimilarity measures and a standard statistical method called Multidimensional Scaling (MDS).
Several methods: Suppose that we use four provenance proxies: bulk petrography, chemistry, heavy minerals and detrital geochronology. This will result in four MDS maps, each of which will likely show slightly different trends and patterns. To deal with such cases, it may be useful to use a related technique called 'three-way multidimensional scaling'. This results in two graphical outputs: an MDS map, and a map of 'weights' showing to what extent the different provenance proxies influence the horizontal and vertical axes of the MDS map. Thus, detrital data can not only inform the user about the provenance of sediments, but also about the causal relationships between mineralogy, geochronology and chemistry.
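A minimal sketch of the 'several samples' step: build a pairwise dissimilarity matrix (here Kolmogorov-Smirnov distances between synthetic age spectra) and project it to a two-dimensional MDS map; the data and names are illustrative.

```python
# A minimal sketch of an MDS 'map' of samples from pairwise dissimilarities
# (Kolmogorov-Smirnov distances between synthetic detrital age spectra).
import numpy as np
from scipy.stats import ks_2samp
from sklearn.manifold import MDS

rng = np.random.default_rng(8)
# Synthetic detrital zircon U-Pb age spectra for 6 samples, 2 source regions.
samples = [rng.normal(1000, 80, 120) for _ in range(3)] + \
          [rng.normal(1400, 120, 120) for _ in range(3)]

n = len(samples)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = ks_2samp(samples[i], samples[j]).statistic

coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)
print(np.round(coords, 2))   # similar samples plot close together
```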
Mohammadkhani, Parvaneh; Khanipour, Hamid; Azadmehr, Hedieh; Mobramm, Ardeshir; Naseri, Esmaeil
2015-01-01
The aim of this study was to evaluate suicide probability in Iranian males with substance abuse or dependence disorders and to investigate predictors of suicide probability based on trait mindfulness, reasons for living, and severity of general psychiatric symptoms. Participants were 324 individuals with substance abuse or dependence in an outpatient setting and a prison. The Reasons for Living questionnaire, the Mindful Attention Awareness Scale, and the Suicide Probability Scale were used as instruments. The sample was selected by a convenience sampling method. Data were analyzed using SPSS and AMOS. The lifetime prevalence of suicide attempts was 35% in the outpatient setting and 42% in the prison setting. Suicide probability in the prison setting was significantly higher than in the outpatient setting (p<0.001). The severity of general symptoms strongly correlated with suicide probability. Trait mindfulness, not reasons-for-living beliefs, had a mediating effect in the relationship between the severity of general symptoms and suicide probability. Fear of social disapproval, survival and coping beliefs, and child-related concerns significantly predicted suicide probability (p<0.001). It could be suggested that trait mindfulness was more effective in reducing suicide probability than beliefs about reasons for living in individuals with substance abuse or dependence disorders. The severity of general symptoms should be regarded as an important risk factor for suicide probability.
Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.
2009-01-01
Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.
Big data sharing and analysis to advance research in post-traumatic epilepsy.
Duncan, Dominique; Vespa, Paul; Pitkanen, Asla; Braimah, Adebayo; Lapinlampi, Nina; Toga, Arthur W
2018-06-01
We describe the infrastructure and functionality of a centralized preclinical and clinical data repository and analytic platform that supports importing heterogeneous multi-modal data, automatically and manually linking data across modalities and sites, and searching content. We have developed and applied innovative image and electrophysiology processing methods to identify candidate biomarkers from MRI, EEG, and multi-modal data. Based on heterogeneous biomarkers, we present novel analytic tools designed to study epileptogenesis in animal models and humans, with the goal of tracking the probability of developing epilepsy over time. Copyright © 2017. Published by Elsevier Inc.
Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)
NASA Astrophysics Data System (ADS)
Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.
2013-12-01
We combine multi-scale imaging and computer simulation of multi-phase flow and reactive transport in rock samples to enhance our fundamental understanding of long-term CO2 storage in rock formations. The imaging techniques include Confocal Laser Scanning Microscopy (CLSM), micro-CT and medical CT scanning, with spatial resolutions ranging from sub-micron to mm, respectively. First, we report a new sample preparation technique to study micro-porosity in carbonates using CLSM in 3 dimensions. Second, we use micro-CT scanning to generate high-resolution 3D pore space images of carbonate and cap rock samples. In addition, we employ micro-CT to image the processes of evaporation in fractures and cap rock degradation due to exposure to CO2 flow. Third, we use medical CT scanning to image spontaneous imbibition in carbonate rock samples. Our imaging studies are complemented by computer simulations of multi-phase flow and transport, using the 3D pore space images obtained from the scanning experiments. We have developed a massively parallel lattice-Boltzmann (LB) code to calculate the single-phase flow field in these pore space images. The resulting flow fields are then used to calculate hydrodynamic dispersion, using a novel scheme to predict probability distributions for molecular displacements with the LB method and a streamline algorithm modified for optimal solid boundary conditions. We calculate solute transport on pore-space images of rock cores with an increasing degree of heterogeneity: a bead pack, Bentheimer sandstone and Portland carbonate. We observe that for homogeneous rock samples, such as bead packs, the displacement distribution remains Gaussian as time increases. In the more heterogeneous rocks, on the other hand, the displacement distribution develops a stagnant part. We observe that the fraction of trapped solute increases from the bead pack (0%) to Bentheimer sandstone (1.5%) to Portland carbonate (8.1%), in excellent agreement with PFG-NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. We also calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. [Figure: pore space images (left) and velocity distributions (right); Yang and Boek, 2013]
Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko
2015-01-01
Objective: Derive optimal injury probability curves describing human tolerance of the lower leg using parametric survival analysis. Methods: The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. Injuries were identified by pre- and post-test radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pilon), representing severities at Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariate. Censoring statuses depended on experimental outcomes. Parameters of the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal and log-logistic distributions was chosen based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimal injury probability distribution, and the relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results: The mean age, stature, and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m, and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the best-fitting of the three distributions. A majority of quality indices were in the good category for this optimal distribution when results were extracted for the 25-, 45-, and 65-year-old age groups at 5, 25, and 50% risk levels for lower leg fracture. For ages 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions: This study derived axial-loading-induced injury risk curves based on survival analysis using peak force and specimen age, adopting different censoring schemes, considering overly influential samples in the analysis, and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the present optimal human injury probability distributions can be used at all risk levels with greater confidence in future crashworthiness applications in automotive and other disciplines. PMID:25307381
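A minimal sketch of the censored-data Weibull fit described above, treating injury tests as left-censored and non-injury tests as right-censored; the data are synthetic and the age covariate is omitted for brevity.

```python
# A minimal sketch of a parametric (Weibull) injury-risk fit from censored
# peak-force data (synthetic; the age covariate is omitted).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(9)
true = weibull_min(c=4.0, scale=9.0)            # "true" tolerance (kN)
force = rng.uniform(4, 12, 60)                  # applied peak forces
injury = rng.random(60) < true.cdf(force)       # injury if tolerance exceeded

def neg_log_lik(theta):
    c, scale = np.exp(theta)                    # keep parameters positive
    F = weibull_min.cdf(force, c, scale=scale)
    # Left-censored (injury): contributes log F; right-censored: log(1 - F).
    eps = 1e-12
    return -(np.sum(np.log(F[injury] + eps)) +
             np.sum(np.log(1 - F[~injury] + eps)))

fit = minimize(neg_log_lik, x0=np.log([2.0, 8.0]), method='Nelder-Mead')
c_hat, scale_hat = np.exp(fit.x)
print(f"5% risk force: {weibull_min.ppf(0.05, c_hat, scale=scale_hat):.1f} kN")
```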
Accurate, multi-kb reads resolve complex populations and detect rare microorganisms.
Sharon, Itai; Kertesz, Michael; Hug, Laura A; Pushkarev, Dmitry; Blauwkamp, Timothy A; Castelle, Cindy J; Amirebrahimi, Mojgan; Thomas, Brian C; Burstein, David; Tringe, Susannah G; Williams, Kenneth H; Banfield, Jillian F
2015-04-01
Accurate evaluation of microbial communities is essential for understanding global biogeochemical processes and can guide bioremediation and medical treatments. Metagenomics is most commonly used to analyze microbial diversity and metabolic potential, but assemblies of the short reads generated by current sequencing platforms may fail to recover heterogeneous strain populations and rare organisms. Here we used short (150-bp) and long (multi-kb) synthetic reads to evaluate strain heterogeneity and study microorganisms at low abundance in complex microbial communities from terrestrial sediments. The long-read data revealed multiple (probably dozens of) closely related species and strains from previously undescribed Deltaproteobacteria and Aminicenantes (candidate phylum OP8). Notably, these are the most abundant organisms in the communities, yet short-read assemblies achieved only partial genome coverage, mostly in the form of short scaffolds (N50 = ∼ 2200 bp). Genome architecture and metabolic potential for these lineages were reconstructed using a new synteny-based method. Analysis of long-read data also revealed thousands of species whose abundances were <0.1% in all samples. Most of the organisms in this "long tail" of rare organisms belong to phyla that are also represented by abundant organisms. Genes encoding glycosyl hydrolases are significantly more abundant than expected in rare genomes, suggesting that rare species may augment the capability for carbon turnover and confer resilience to changing environmental conditions. Overall, the study showed that a diversity of closely related strains and rare organisms account for a major portion of the communities. These are probably common features of many microbial communities and can be effectively studied using a combination of long and short reads. © 2015 Sharon et al.; Published by Cold Spring Harbor Laboratory Press.
Determinants of body weight status in Malaysia: an ethnic comparison.
Tan, Andrew K G; Yen, Steven T; Feisul, Mustapha I
2012-04-01
To investigate the roles of sociodemographic and health lifestyle factors in affecting body mass index (BMI) across ethnic groups in Malaysia. Data are obtained from 2,436 observations from the Malaysia Non-Communicable Disease Surveillance-1. The multi-ethnic sample is segmented into Malay, Chinese, and Indian/other ethnicities. Ordered probit analysis is conducted and marginal effects of sociodemographic and health lifestyle variables on BMI calculated. Malays between 41 and 58 years are more likely to be overweight or obese than their 31-40 years counterparts, while the opposite is true among Chinese. Retirees of Chinese and Indian/other ethnicities are less likely to be obese and more likely to have normal BMI than those between 31 and 40 years. Primary educated Chinese are more likely to be overweight or obese, while tertiary-educated Malays are less likely to suffer from similar weight issues as compared to those with only junior high school education. Affluent Malays and Chinese are more likely to be overweight than their low-middle income cohorts. Family illness history is likely to cause overweightness or obesity, irrespective of ethnicity. Malay cigarette smokers have lower overweight and obesity probabilities than non-cigarette smokers. There exists a need for flexible policies to address cross-ethnic differences in the sociodemographic and health-lifestyle covariates of BMI.
Influence of age on androgen deprivation therapy-associated Alzheimer’s disease
NASA Astrophysics Data System (ADS)
Nead, Kevin T.; Gaskin, Greg; Chester, Cariad; Swisher-McClure, Samuel; Dudley, Joel T.; Leeper, Nicholas J.; Shah, Nigam H.
2016-10-01
We recently found an association between androgen deprivation therapy (ADT) and Alzheimer’s disease. As Alzheimer’s disease is a disease of advanced age, we hypothesize that older individuals on ADT may be at greatest risk. We conducted a retrospective multi-institutional analysis among 16,888 individuals with prostate cancer using an informatics approach. We tested the effect of ADT on Alzheimer’s disease using Kaplan-Meier age-stratified analyses in a propensity score-matched cohort. We found a lower cumulative probability of remaining Alzheimer’s disease-free among non-ADT users age ≥70 versus those age <70 years (p < 0.001) and among ADT versus non-ADT users age ≥70 years (p = 0.034). The 5-year probability of developing Alzheimer’s disease was 2.9%, 1.9% and 0.5% among ADT users ≥70, non-ADT users ≥70 and individuals <70 years, respectively. Compared to younger individuals, older men on ADT may have the greatest absolute Alzheimer’s disease risk. Future work should investigate the ADT-Alzheimer’s disease association in advanced-age populations given the greater potential clinical impact.
A predictive model of hospitalization risk among disabled medicaid enrollees.
McAna, John F; Crawford, Albert G; Novinger, Benjamin W; Sidorov, Jaan; Din, Franklin M; Maio, Vittorio; Louis, Daniel Z; Goldfarb, Neil I
2013-05-01
To identify Medicaid patients, based on 1 year of administrative data, who were at high risk of admission to a hospital in the next year, and who were most likely to benefit from outreach and targeted interventions. Observational cohort study for predictive modeling. Claims, enrollment, and eligibility data for 2007 from a state Medicaid program were used to provide the independent variables for a logistic regression model to predict inpatient stays in 2008 for fully covered, continuously enrolled, disabled members. The model was developed using a 50% random sample from the state and was validated against the other 50%. Further validation was carried out by applying the parameters from the model to data from a second state's disabled Medicaid population. The strongest predictors in the model developed from the first 50% sample were over age 65 years, inpatient stay(s) in 2007, and higher Charlson Comorbidity Index scores. The areas under the receiver operating characteristic curve for the model based on the 50% state sample and its application to the 2 other samples ranged from 0.79 to 0.81. Models developed independently for all 3 samples were as high as 0.86. The results show a consistent trend of more accurate prediction of hospitalization with increasing risk score. This is a fairly robust method for targeting Medicaid members with a high probability of future avoidable hospitalizations for possible case management or other interventions. Comparison with a second state's Medicaid program provides additional evidence for the usefulness of the model.
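A minimal sketch of the modeling pipeline on synthetic stand-in data; the feature set, coefficients, and 50/50 develop/validate split mirror the description above, but nothing here reproduces the actual Medicaid claims data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical claims-derived features: over-65 flag, prior inpatient stay,
# Charlson comorbidity index; outcome = inpatient stay in the next year.
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),        # over age 65
    rng.integers(0, 2, n),        # inpatient stay in prior year
    rng.poisson(1.5, n),          # Charlson index
])
logit = -2.5 + 0.8 * X[:, 0] + 1.1 * X[:, 1] + 0.4 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# 50/50 split mirrors the develop-on-half, validate-on-half design.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC ~ {auc:.2f}")
```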
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts’ performance (AUC of 0.90 ± 0.01) shows that the Bayesian network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
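A toy illustration of the flagging logic, with invented treatment sites, prescriptions, and conditional probabilities standing in for the tables learned from the clinical database:

```python
# Score how probable each planned parameter is under learned conditional
# probability tables, and flag low-probability combinations for review.
# All sites, doses, and probabilities below are invented for illustration.
cpt_dose_given_site = {
    "lung":   {"60Gy/30fx": 0.55, "45Gy/15fx": 0.35, "20Gy/5fx": 0.10},
    "brain":  {"60Gy/30fx": 0.50, "30Gy/10fx": 0.45, "20Gy/5fx": 0.05},
    "breast": {"50Gy/25fx": 0.70, "42.5Gy/16fx": 0.28, "20Gy/5fx": 0.02},
}

def check_plan(site: str, prescription: str, threshold: float = 0.05) -> bool:
    """Return True if the prescription should be flagged for investigation."""
    p = cpt_dose_given_site[site].get(prescription, 0.0)
    return p < threshold

print(check_plan("breast", "20Gy/5fx"))   # True: improbable combination, flag it
print(check_plan("lung", "60Gy/30fx"))    # False: common combination, passes
```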
Survey design for lakes and reservoirs in the United States to assess contaminants in fish tissue.
Olsen, Anthony R; Snyder, Blaine D; Stahl, Leanne L; Pitt, Jennifer L
2009-03-01
The National Lake Fish Tissue Study (NLFTS) was the first survey of fish contamination in lakes and reservoirs in the 48 conterminous states based on a probability survey design. This study included the largest set (268) of persistent, bioaccumulative, and toxic (PBT) chemicals ever studied in predator and bottom-dwelling fish species. The U.S. Environmental Protection Agency (USEPA) implemented the study in cooperation with states, tribal nations, and other federal agencies, with field collection occurring at 500 lakes and reservoirs over a four-year period (2000-2003). The sampled lakes and reservoirs were selected using a spatially balanced unequal probability survey design from 270,761 lake objects in USEPA's River Reach File Version 3 (RF3). The survey design selected 900 lake objects, with a reserve sample of 900, equally distributed across six lake area categories. A total of 1,001 lake objects were evaluated to identify 500 lake objects that met the study's definition of a lake and could be accessed for sampling. Based on the 1,001 evaluated lakes, it was estimated that a target population of 147,343 (+/-7% with 95% confidence) lakes and reservoirs met the NLFTS definition of a lake. Of the estimated 147,343 target lakes, 47% were estimated not to be sampleable either due to landowner access denial (35%) or due to physical barriers (12%). It was estimated that a sampled population of 78,664 (+/-12% with 95% confidence) lakes met the NLFTS lake definition, had either predator or bottom-dwelling fish present, and could be sampled.
Ryan, D; Shephard, S; Kelly, F L
2016-09-01
This study investigates temporal stability in the scale microchemistry of brown trout Salmo trutta in feeder streams of a large heterogeneous lake catchment and rates of change after migration into the lake. Laser-ablation inductively coupled plasma mass spectrometry was used to quantify the elemental concentrations of Na, Mg, Mn, Cu, Zn, Ba and Sr in archived (1997-2002) scales of juvenile S. trutta collected from six major feeder streams of Lough Mask, County Mayo, Ireland. Water element:Ca ratios within these streams were determined for the fish sampling period and for a later period (2013-2015). Salmo trutta scale Sr and Ba concentrations were significantly (P < 0·05) correlated with stream water sample Sr:Ca and Ba:Ca ratios respectively from both periods, indicating multi-annual stability in scale and water-elemental signatures. Discriminant analysis of scale chemistries correctly classified 91% of sampled juvenile S. trutta to their stream of origin using a cross-validated classification model. This model was used to test whether assumed post-depositional change in scale element concentrations reduced correct natal stream classification of S. trutta in successive years after migration into Lough Mask. Fish residing in the lake for 1-3 years could be reliably classified to their most likely natal stream, but the probability of correct classification diminished strongly with longer lake residence. Use of scale chemistry to identify natal streams of lake S. trutta should focus on recent migrants, but may not require contemporary water chemistry data. © 2016 The Fisheries Society of the British Isles.
Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A
2015-10-01
Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
NASA Astrophysics Data System (ADS)
D'Isanto, A.; Polsterer, K. L.
2018-01-01
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances on experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. The prediction performance is better than that of both reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
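A small sketch of how the PIT and CRPS criteria can be evaluated for one Gaussian-mixture PDF; the mixture parameters and true redshift below are invented, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical mixture-density-network output for one source: weights,
# means, and sigmas of a 3-component Gaussian mixture over redshift.
w  = np.array([0.6, 0.3, 0.1])
mu = np.array([0.42, 0.48, 0.60])
sd = np.array([0.02, 0.04, 0.08])
z_true = 0.45

def gmm_cdf(z):
    return np.sum(w * norm.cdf((z - mu) / sd))

# PIT: the predicted CDF evaluated at the true redshift; over many sources
# the PIT values should be uniform if the PDFs are well calibrated.
pit = gmm_cdf(z_true)

# CRPS by numerical integration of (F(z) - 1{z >= z_true})^2.
zs = np.linspace(0.0, 1.5, 3000)
F = np.array([gmm_cdf(z) for z in zs])
crps = np.trapz((F - (zs >= z_true)) ** 2, zs)
print(f"PIT = {pit:.3f}, CRPS = {crps:.4f}")
```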
Bohnert, Amy S B; German, Danielle; Knowlton, Amy R; Latkin, Carl A
2010-03-01
Social support is a multi-dimensional construct that is important to drug use cessation. The present study identified types of supportive friends among the social network members in a community-based sample and examined the relationship of supporter-type classes with supporter, recipient, and supporter-recipient relationship characteristics. We hypothesized that the most supportive network members and their support recipients would be less likely to be current heroin/cocaine users. Participants (n=1453) were recruited from low-income neighborhoods with a high prevalence of drug use. Participants identified their friends via a network inventory, and all nominated friends were included in a latent class analysis and grouped based on their probability of providing seven types of support. These latent classes were included as the dependent variable in a multi-level regression of supporter drug use, recipient drug use, and other characteristics. The best-fitting latent class model identified five support patterns: friends who provided Little/No Support, Low/Moderate Support, High Support, Socialization Support, and Financial Support. In bivariate models, friends in the High, Low/Moderate, and Financial Support classes were less likely to use heroin or cocaine, had less conflict with the support recipient, and were more trusted by the recipient than friends in the Little/No Support class. Individuals with supporters in those same support classes, compared to the Little/No Support class, were less likely to use heroin or cocaine, or to be homeless or female. Multivariable models suggested similar trends. Those with current heroin/cocaine use were less likely to provide or receive comprehensive support from friends. Published by Elsevier Ireland Ltd.
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning, incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. To this end, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace traditional graph Laplacian regularization with the new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, visual video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk
2008-11-01
Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
Characteristics of the First Child Predict the Parents' Probability of Having Another Child
ERIC Educational Resources Information Center
Jokela, Markus
2010-01-01
In a sample of 7,695 families in the prospective, nationally representative British Millennium Cohort Study, this study examined whether characteristics of the 1st-born child predicted parents' timing and probability of having another child within 5 years after the 1st child's birth. Infant temperament was assessed with the Carey Infant…
Average probability that a "cold hit" in a DNA database search results in an erroneous attribution.
Song, Yun S; Patil, Anand; Murphy, Erin E; Slatkin, Montgomery
2009-01-01
We consider a hypothetical series of cases in which the DNA profile of a crime-scene sample is found to match a known profile in a DNA database (i.e., a "cold hit"), resulting in the identification of a suspect based only on genetic evidence. We show that the average probability that there is another person in the population whose profile matches the crime-scene sample but who is not in the database is approximately 2(N - d)p_A, where N is the number of individuals in the population, d is the number of profiles in the database, and p_A is the average match probability (AMP) for the population. The AMP is estimated by computing the average of the probabilities that two individuals in the population have the same profile. We show further that if a priori each individual in the population is equally likely to have left the crime-scene sample, then the average probability that the database search attributes the crime-scene sample to a wrong person is (N - d)p_A.
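A worked example of the two approximations with illustrative values of N, d, and p_A (none taken from the paper):

```python
# Population of N = 6 million, database of d = 1 million profiles, and an
# average match probability p_A = 1e-9; all three values are hypothetical.
N, d, p_A = 6_000_000, 1_000_000, 1e-9

# Average probability that someone outside the database also matches.
p_outside_match = 2 * (N - d) * p_A

# If everyone is a priori equally likely to have left the sample, the
# average probability that the cold hit names the wrong person.
p_wrong_attribution = (N - d) * p_A

print(f"P(unseen matching person) ~ {p_outside_match:.2e}")    # ~1.0e-02
print(f"P(erroneous attribution) ~ {p_wrong_attribution:.2e}") # ~5.0e-03
```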
NASA Astrophysics Data System (ADS)
Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen
2018-05-01
To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated with the proposed method, considering fluid-structure interaction. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that the DCFRM reshapes the probabilistic analysis of multi-failure structures and improves computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-30
... participant has a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-02
... has a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...
We evaluated a pilot aquatic invasive species (AIS) early detection monitoring program in Lake Superior that was designed to detect newly-introduced fishes. We established survey protocols for three major ports (Duluth-Superior, Sault Ste. Marie, Thunder Bay), and designed an ada...
Childhood personality types: vulnerability and adaptation over time.
De Clercq, Barbara; Rettew, David; Althoff, Robert R; De Bolle, Marleen
2012-06-01
Substantial evidence suggests that a Five-Factor Model personality assessment generates a valid description of childhood individual differences and relates to a range of psychological outcomes. Less is known, however, about naturally occurring profiles of personality and their links to psychopathology. The current study explores whether childhood personality characteristics tend to cluster in particular personality profiles that show unique associations with psychopathology and quality of life across time. Latent class analysis was conducted on maternal rated general personality of a Flemish childhood community sample (N = 477; mean age 10.6 years). The associations of latent class membership probability with psychopathology and quality of life 2 years later were examined, using a multi-informant perspective. Four distinguishable latent classes were found, representing a Moderate, a Protected, an Undercontrolled and a Vulnerable childhood personality type. Each of these types showed unique associations with childhood outcomes across raters. Four different personality types can be delineated at young age and have a significant value in understanding vulnerability and adaptation over time. © 2011 The Authors. Journal of Child Psychology and Psychiatry © 2011 Association for Child and Adolescent Mental Health.
The Performance Analysis Based on SAR Sample Covariance Matrix
Erten, Esra
2012-01-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas concerned with the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has frequently been used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
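A brief simulation of the quantity in question: draw n looks of p-channel zero-mean circular complex Gaussian data, form the sample covariance matrix (complex-Wishart distributed), and collect its maximum eigenvalue. The true covariance and look count are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(11)

# Empirical distribution of the maximum eigenvalue of the sample covariance
# of p-channel circular complex Gaussian data estimated from n looks.
p, n, trials = 3, 16, 5000
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.4],
                  [0.2, 0.4, 1.0]], dtype=complex)
L = np.linalg.cholesky(Sigma)

max_eigs = np.empty(trials)
for t in range(trials):
    z = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
    x = L @ z                                # correlated complex samples
    C = x @ x.conj().T / n                   # sample covariance (Wishart-distributed)
    max_eigs[t] = np.linalg.eigvalsh(C).max()

print(f"mean max eigenvalue ~ {max_eigs.mean():.2f} "
      f"(true max = {np.linalg.eigvalsh(Sigma).max():.2f})")
```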
Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin
2014-01-01
To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with Akaike information criterion was used to generate a score that then was converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the relationship: estimated probability = e^score/(1 + e^score), where e is the base of the natural exponential. The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
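A minimal sketch of the score-to-probability conversion; the scores fed in below are arbitrary, since the abstract does not give the fitted coefficients of the AS OCT parameters:

```python
import math

def angle_closure_probability(score: float) -> float:
    """Convert the AS OCT-derived score to an estimated probability via
    the logistic relationship p = e^score / (1 + e^score)."""
    return math.exp(score) / (1.0 + math.exp(score))

# Illustrative scores only; the real score is a fitted linear combination
# of AS OCT parameters whose coefficients the abstract does not report.
for s in (-2.0, 0.0, 1.5):
    print(f"score {s:+.1f} -> P(angle closure) = {angle_closure_probability(s):.2f}")
```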
Leng, Yonggang; Fan, Shengbo
2018-01-01
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal through existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second and higher harmonic frequencies, tend to be large parameters. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of the "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577
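A sketch of the frequency-exchange step using the analytic signal (an SSB-style shift); the sampling rate, fault frequency, and shift frequency are hypothetical, and the re-scaling SR stage that would follow is omitted:

```python
import numpy as np
from scipy.signal import hilbert

# Shift a high-frequency fault component down to a small-parameter band.
fs = 20_000.0                       # sampling frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
f_fault = 1200.0                    # fault characteristic frequency (Hz)
x = np.sin(2 * np.pi * f_fault * t) + 0.5 * np.sin(2 * np.pi * 2 * f_fault * t)

f_shift = 1150.0                    # bring 1200 Hz down to 50 Hz
analytic = hilbert(x)               # x + j*Hilbert(x): one-sided spectrum
shifted = np.real(analytic * np.exp(-2j * np.pi * f_shift * t))

# After the shift the components sit at 50 Hz and 1250 Hz; a low-pass
# stage would isolate the 50 Hz line before re-scaled SR processing.
spec = np.abs(np.fft.rfft(shifted)) / len(shifted)
freqs = np.fft.rfftfreq(len(shifted), 1.0 / fs)
print(freqs[np.argmax(spec)])       # ~50.0
```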
Evaluation for Bearing Wear States Based on Online Oil Multi-Parameters Monitoring.
Wang, Si-Yuan; Yang, Ding-Xin; Hu, Hai-Feng
2018-04-05
As bearings are critical components of a mechanical system, it is important to characterize their wear states and evaluate health conditions. In this paper, a novel approach for analyzing the relationship between online oil multi-parameter monitoring samples and bearing wear states has been proposed based on an improved gray k-means clustering model (G-KCM). First, an online monitoring system with multiple sensors for bearings is established, obtaining oil multi-parameter data and vibration signals for bearings through the whole lifetime. Secondly, a gray correlation degree distance matrix is generated using a gray correlation model (GCM) to express the relationship of oil monitoring samples at different times and then a KCM is applied to cluster the matrix. Analysis and experimental results show that there is an obvious correspondence that state changing coincides basically in time between the lubricants' multi-parameters and the bearings' wear states. It also has shown that online oil samples with multi-parameters have early wear failure prediction ability for bearings superior to vibration signals. It is expected to realize online oil monitoring and evaluation for bearing health condition and to provide a novel approach for early identification of bearing-related failure modes.
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
A multi-scale convolutional neural network for phenotyping high-content cellular images.
Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian
2017-07-01
Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
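A toy multi-scale CNN in the spirit of the description, with parallel convolution branches at three kernel sizes whose features are concatenated before classification; the layer sizes and phenotype count are illustrative, not the published M-CNN architecture (sketch in PyTorch):

```python
import torch
import torch.nn as nn

class MultiScaleCNN(nn.Module):
    """Parallel convolutions at several kernel sizes capture structure at
    different spatial scales; feature maps are pooled, concatenated, and
    mapped to per-phenotype probabilities."""
    def __init__(self, in_channels: int = 3, n_phenotypes: int = 5):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, 16, k, padding=k // 2),
                          nn.ReLU(),
                          nn.AdaptiveAvgPool2d(8))
            for k in (3, 7, 15)             # three spatial scales
        ])
        self.classifier = nn.Linear(3 * 16 * 8 * 8, n_phenotypes)

    def forward(self, x):
        feats = [b(x).flatten(1) for b in self.branches]
        logits = self.classifier(torch.cat(feats, dim=1))
        return logits.softmax(dim=1)        # per-phenotype probabilities

probs = MultiScaleCNN()(torch.randn(4, 3, 128, 128))
print(probs.shape)                          # torch.Size([4, 5])
```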
Survival estimates for Florida manatees from the photo-identification of individuals
Langtimm, C.A.; Beck, C.A.; Edwards, H.H.; Fick-Child, K. J.; Ackerman, B.B.; Barton, S.L.; Hartley, W.C.
2004-01-01
We estimated adult survival probabilities for the endangered Florida manatee (Trichechus manatus latirostris) in four regional populations using photo-identification data and open-population capture-recapture statistical models. The mean annual adult survival probability over the most recent 10-yr period of available estimates was as follows: Northwest - 0.956 (SE 0.007), Upper St. Johns River - 0.960 (0.011), Atlantic Coast - 0.937 (0.008), and Southwest - 0.908 (0.019). Estimates of temporal variance independent of sampling error, calculated from the survival estimates, indicated constant survival in the Upper St. Johns River, true temporal variability in the Northwest and Atlantic Coast, and large sampling variability obscuring estimates for the Southwest. Calf and subadult survival probabilities were estimated for the Upper St. Johns River from the only available data for known-aged individuals: 0.810 (95% CI 0.727-0.873) for 1st year calves, 0.915 (0.827-0.960) for 2nd year calves, and 0.969 (0.946-0.982) for manatees 3 yr or older. These estimates of survival probabilities and temporal variance, in conjunction with estimates of reproduction probabilities from photo-identification data, can be used to model manatee population dynamics, estimate population growth rates, and provide an integrated measure of regional status.
Population variability complicates the accurate detection of climate change responses.
McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin
2016-06-01
The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. © 2016 John Wiley & Sons Ltd.
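A small simulation of the resurvey problem: a population with no true trend but high interannual variability is sampled once before and once after a nominal change point, and apparent declines or increases are tallied. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# No true trend, but high population variability between years.
mean_abundance = 100.0
cv = 0.6                      # high interannual variability (log-scale sigma)
n_sims = 10_000

pre = rng.lognormal(np.log(mean_abundance), cv, n_sims)
post = rng.lognormal(np.log(mean_abundance), cv, n_sims)

declared_decline = np.mean(post < 0.8 * pre)    # apparent >20% decline
declared_increase = np.mean(post > 1.2 * pre)   # apparent >20% increase
declared_stable = 1.0 - declared_decline - declared_increase

print(f"false 'decline':  {declared_decline:.2f}")
print(f"false 'increase': {declared_increase:.2f}")
print(f"'no change':      {declared_stable:.2f}")  # small despite no true trend
```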
Ferret, Yann; Caillault, Aurélie; Sebda, Shéhérazade; Duez, Marc; Grardel, Nathalie; Duployez, Nicolas; Villenet, Céline; Figeac, Martin; Preudhomme, Claude; Salson, Mikaël; Giraud, Mathieu
2016-05-01
High-throughput sequencing (HTS) is considered a technical revolution that has improved our knowledge of lymphoid and autoimmune diseases, changing our approach to leukaemia both at diagnosis and during follow-up. As part of an immunoglobulin/T cell receptor-based minimal residual disease (MRD) assessment of acute lymphoblastic leukaemia patients, we assessed the performance and feasibility of the replacement of the first steps of the approach based on DNA isolation and Sanger sequencing, using a HTS protocol combined with bioinformatics analysis and visualization using the Vidjil software. We prospectively analysed the diagnostic and relapse samples of 34 paediatric patients, thus identifying 125 leukaemic clones with recombinations on multiple loci (TRG, TRD, IGH and IGK), including Dd2/Dd3 and Intron/KDE rearrangements. Sequencing failures were halved (14% vs. 34%, P = 0.0007), enabling more patients to be monitored. Furthermore, more markers per patient could be monitored, reducing the probability of false negative MRD results. The whole analysis, from sample receipt to clinical validation, was shorter than our current diagnostic protocol, with equal resources. V(D)J recombination was successfully assigned by the software, even for unusual recombinations. This study emphasizes the progress that HTS with adapted bioinformatics tools can bring to the diagnosis of leukaemia patients. © 2016 John Wiley & Sons Ltd.
Ha, Jung-Hwa; Yoon, Hyunsook; Lim, Yeon Ok; Heo, Sun-Young
2016-03-01
Although previous research based on data from the U.S. suggests that parents' widowhood is associated with increased emotional support from children, little is known about the impact of late-life widowhood on intergenerational relationships in other cultures. Using data of Korean older adults, this paper examined: (1) the effect of widowhood on both positive and negative aspects of parent-child relationships and (2) whether these effects are moderated by older adults' expectations about children's filial responsibilities and the geographic proximity to their children. Analyses are based on data from the Hallym Aging Study, a stratified multi-stage probability sample of older adults living in the cities of Seoul and Chuncheon in Korea. Compared to married older adults, widowed persons in this sample reported higher levels of ambivalence, lower levels of positive interactions, and higher levels of negative interactions with their children. Parents' notion about filial responsibilities did not have a significant moderating effect, whereas geographic proximity to children was a significant moderator. Findings suggest that widowhood is associated with greater strain in intergenerational relationships in Korea. Helping widowed older adults forge constructive relationships with their children may enhance both bereaved older adults' and their children's well-being in this cultural milieu.
A comprehensive risk assessment framework for offsite transportation of inflammable hazardous waste.
Das, Arup; Gupta, A K; Mazumder, T N
2012-08-15
A framework for risk assessment due to offsite transportation of hazardous wastes is designed based on the type of event that can be triggered by an accident of a hazardous waste carrier. The objective of this study is to design a framework for computing the risk to population associated with offsite transportation of inflammable and volatile wastes. The framework is based on the traditional definition of risk and is designed for conditions where accident databases are not available. The probability-based variable in the risk assessment framework is substituted by a composite accident index proposed in this study. The framework computes the impacts due to a vapour cloud explosion based on the TNO Multi-Energy model. The methodology also estimates the vulnerable population in terms of disability-adjusted life years (DALY), which takes into consideration the demographic profile of the population and the degree of injury in terms of mortality and morbidity sustained. The methodology is illustrated using a case study of a pharmaceutical industry in the Kolkata metropolitan area. Copyright © 2012 Elsevier B.V. All rights reserved.
Charvat, Hadrien; Sasazuki, Shizuka; Inoue, Manami; Iwasaki, Motoki; Sawada, Norie; Shimazu, Taichi; Yamaji, Taiki; Tsugane, Shoichiro
2013-11-01
The present work aims to provide 10-year estimates of the probability of cancer occurrence in the Japanese population based on age, sex, and the pattern of adherence to five healthy lifestyle habits. The study population consisted of 74,935 participants in the Japan Public Health Center-Based Prospective Study (aged 45 to 74 years) who answered a 5-year follow-up questionnaire about various lifestyle habits between 1995 and 1999. The relationship between five previously identified healthy lifestyle habits (never smoking, moderate or no alcohol consumption, adequate physical activity, moderate salt intake, and appropriate body mass index) and cancer occurrence was assessed using a sex-specific parametric survival model. Compared to individuals not adhering to any of the five habits, never-smoking men had a nearly 30% reduction in the 10-year probability of cancer occurrence (e.g., 20.5% vs. 28.7% at age 70), and never-smoking women had a 16% reduction (e.g., 10.5% vs. 12.5% at age 70). Adherence to all five habits was estimated to reduce the 10-year probability of cancer occurrence by 1/2 in men and 1/3 in women. By quantifying the impact of lifestyle habits on the probability of cancer occurrence, this study emphasizes the importance of lifestyle improvement. © 2013.
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
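A worked example of the proposed decomposition with hypothetical component probabilities:

```python
# Proposed decomposition for auditory counts:
# P(detect) = P(bird vocalizes during the count) * P(observer hears it).
# Both component probabilities below are hypothetical.
p_vocalize = 0.7          # bird sings at least once during the count
p_heard = 0.8             # observer detects a given singing bird
p_detect = p_vocalize * p_heard

count = 12                # birds tallied at a point within a fixed radius
estimated_present = count / p_detect
print(f"P(detect) = {p_detect:.2f}; ~{estimated_present:.0f} birds present")
```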
Santos, Nuno; Santos, Catarina; Valente, Teresa; Gortázar, Christian; Almeida, Virgílio; Correia-Neves, Margarida
2015-01-01
Environmental contamination with Mycobacterium tuberculosis complex (MTC) has been considered crucial for bovine tuberculosis persistence in multi-host-pathogen systems. However, MTC contamination has been difficult to detect due to methodological issues. In an attempt to overcome this limitation we developed an improved protocol for the detection of MTC DNA. MTC DNA concentration was estimated by the Most Probable Number (MPN) method. Making use of this protocol we showed that MTC contamination is widespread in different types of environmental samples from the Iberian Peninsula, which supports indirect transmission as a contributing mechanism for the maintenance of bovine tuberculosis in this multi-host-pathogen system. The proportion of MTC DNA positive samples was higher in the bovine tuberculosis-infected than in presumed negative area (0.32 and 0.18, respectively). Detection varied with the type of environmental sample and was more frequent in sediment from dams and less frequent in water also from dams (0.22 and 0.05, respectively). The proportion of MTC-positive samples was significantly higher in spring (p<0.001), but MTC DNA concentration per sample was higher in autumn and lower in summer. The average MTC DNA concentration in positive samples was 0.82 MPN/g (CI95 0.70–0.98 MPN/g). We were further able to amplify a DNA sequence specific of Mycobacterium bovis/caprae in 4 environmental samples from the bTB-infected area. PMID:26561038
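A sketch of the MPN estimation step: presence/absence results from replicate aliquots at serial dilutions are combined under a Poisson likelihood, and the concentration maximizing that likelihood is reported. The dilution series and outcomes below are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Most Probable Number: lambda (copies per gram) is the value maximizing
# the Poisson presence/absence likelihood across the dilution series.
grams = np.array([1.0, 0.1, 0.01])     # sample mass per aliquot
n_tubes = np.array([3, 3, 3])          # aliquots per dilution
n_pos = np.array([3, 1, 0])            # positives observed

def neg_log_lik(lam):
    p_pos = 1.0 - np.exp(-lam * grams)  # P(aliquot positive | lam)
    p_pos = np.clip(p_pos, 1e-12, 1 - 1e-12)
    ll = n_pos * np.log(p_pos) + (n_tubes - n_pos) * np.log(1.0 - p_pos)
    return -ll.sum()

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print(f"MPN ~ {res.x:.2f} per gram")
```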
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexey
2016-01-01
Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, one question remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous number of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, when a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest in fractions of a second on high-performance multi-core platforms. But even with such codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results and demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
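A compact sketch of a gradient-informed M-H variant (a Langevin-type proposal) on a toy 2-D Gaussian target; the real scheme additionally exploits Hessian information (inverse, square root via Lanczos) obtained through adjoint machinery, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy log-posterior: a 2-D zero-mean Gaussian with known precision matrix.
Sigma_inv = np.array([[2.0, 0.6], [0.6, 1.0]])

def log_post(m):
    return -0.5 * m @ Sigma_inv @ m

def grad_log_post(m):
    return -Sigma_inv @ m

eps = 0.3                                   # proposal step size
m = np.zeros(2)
samples, accepted = [], 0
for _ in range(5000):
    # Proposal drifts along the gradient of the log-posterior (MALA-style).
    drift = 0.5 * eps**2 * grad_log_post(m)
    prop = m + drift + eps * rng.standard_normal(2)

    def log_q(a, b):                        # log q(a | b), up to a constant
        mu = b + 0.5 * eps**2 * grad_log_post(b)
        return -np.sum((a - mu) ** 2) / (2 * eps**2)

    # Acceptance ratio includes the asymmetric proposal densities.
    log_alpha = (log_post(prop) + log_q(m, prop)
                 - log_post(m) - log_q(prop, m))
    if np.log(rng.random()) < log_alpha:
        m, accepted = prop, accepted + 1
    samples.append(m)

print(f"acceptance rate ~ {accepted / 5000:.2f}")
```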
2008-01-01
From Appendix A (Air Quality), Final EA for the Construction of a Three-Bay Multi-Aircraft Hangar, Tinker Air Force Base, Oklahoma, January 2008. Years 2005 through 2009: VOCE = 0.016 * Trips; NOxE = 0.015 * Trips; PM10E = 0.0022 * Trips; COE = 0.262 * Trips. Years 2010 and beyond: VOCE = 0.012 * Trips; NOxE = 0.013 * Trips; PM10E = 0.0022 * Trips; COE = 0.262 * Trips. To convert from pounds per day to tons per year: VOC (tons/year) = VOCE ...
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and have poor generalisation when fault samples are few. Support vector machines (SVM) is a new general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority, which is simple, involves little repeated training, and expedites both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
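A sketch of the 'one to others' binary-tree scheme on synthetic data: classifier i separates fault class i from all remaining classes, in a given priority order. The class labels, priority order, and data are invented:

```python
import numpy as np
from sklearn.svm import SVC

class OneToOthersSVM:
    """Binary-tree multi-class SVM: one two-class SVM per class (except the
    last), applied in fault-priority order."""
    def __init__(self, priority):
        self.priority = priority
        self.models = {}

    def fit(self, X, y):
        remaining = np.ones(len(y), dtype=bool)
        for c in self.priority[:-1]:        # last class needs no classifier
            clf = SVC(kernel="rbf").fit(X[remaining], (y[remaining] == c))
            self.models[c] = clf
            remaining &= (y != c)           # later stages ignore class c
        return self

    def predict(self, X):
        out = np.full(len(X), self.priority[-1])
        undecided = np.ones(len(X), dtype=bool)
        for c in self.priority[:-1]:
            if not undecided.any():
                break
            hit = undecided.copy()
            hit[undecided] = self.models[c].predict(X[undecided])
            out[hit] = c                    # claimed by classifier c
            undecided &= ~hit
        return out

# Synthetic 3-class demo, classes ordered by an assumed fault priority.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 50)
print((OneToOthersSVM([0, 1, 2]).fit(X, y).predict(X) == y).mean())
```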
Cowley, Laura E; Maguire, Sabine; Farewell, Daniel M; Quinn-Scoggins, Harriet D; Flynn, Matthew O; Kemp, Alison M
2018-05-09
The validated Predicting Abusive Head Trauma (PredAHT) tool estimates the probability of abusive head trauma (AHT) based on combinations of six clinical features: head/neck bruising; apnea; seizures; rib fractures; long-bone fractures; retinal hemorrhages. We aimed to determine the acceptability of PredAHT to child protection professionals. We conducted qualitative semi-structured interviews with 56 participants: clinicians (25), child protection social workers (10), legal practitioners (9, including 4 judges), police officers (8), and pathologists (4), purposively sampled across southwest United Kingdom. Interviews were recorded, transcribed and imported into NVivo for thematic analysis (38% double-coded). We explored participants' evaluations of PredAHT, their opinions about the optimal way to present the calculated probabilities, and their interpretation of probabilities in the context of suspected AHT. Clinicians, child protection social workers and police thought PredAHT would be beneficial as an objective adjunct to their professional judgment, giving them greater confidence in their decisions. Lawyers and pathologists appreciated its value for prompting multidisciplinary investigations, but were uncertain of its usefulness in court. Perceived disadvantages included possible over-reliance and false reassurance from a low score. Interpretations regarding which percentages equate to 'low', 'medium' or 'high' likelihood of AHT varied; participants preferred a precise % probability over these general terms. Participants would use PredAHT with provisos: if they received multi-agency training to define accepted risk thresholds for consistent interpretation; with knowledge of its development; and if it was accepted by colleagues. PredAHT may therefore increase professionals' confidence in their decision-making when investigating suspected AHT, but may be of less value in court. Copyright © 2018 Elsevier Ltd. All rights reserved.
From large-eddy simulation to multi-UAVs sampling of shallow cumulus clouds
NASA Astrophysics Data System (ADS)
Lamraoui, Fayçal; Roberts, Greg; Burnet, Frédéric
2016-04-01
In-situ sampling of clouds that can provide simultaneous measurements at satisfactory spatio-temporal resolutions to capture 3D small-scale physical processes continues to present challenges. This project (SKYSCANNER) aims to develop cloud sampling strategies for a swarm of unmanned aerial vehicles (UAVs) guided by large-eddy simulation (LES). Multi-UAV field campaigns with a personalized sampling strategy for individual clouds and cloud fields will significantly improve the understanding of unresolved cloud physical processes. An extensive set of LES experiments for case studies from the ARM-SGP site has been performed using the MesoNH model at high resolutions down to 10 m. The simulations carried out led to establishing a macroscopic model that quantifies the interrelationship between the micro- and macrophysical properties of shallow convective clouds. Both the geometry and the evolution of individual clouds are critical to multi-UAV cloud sampling and path planning. The preliminary findings of the current project reveal several linear relationships that associate many cloud geometric parameters with cloud-related meteorological variables. In addition, the horizontal wind speed has a proportional impact on cloud number concentration as well as on triggering and prolonging the occurrence of cumulus clouds. In the framework of the joint collaboration that involves a multidisciplinary team (including institutes specializing in aviation, robotics and atmospheric science), this model will be a reference point for multi-UAV sampling strategies and path planning.
Use of Internet panels to conduct surveys.
Hays, Ron D; Liu, Honghu; Kapteyn, Arie
2015-09-01
The use of Internet panels to collect survey data is increasing because it is cost-effective, enables quick access to large and diverse samples, takes less time than traditional methods to obtain data for analysis, and standardizes the data collection process, making studies easy to replicate. A variety of probability-based panels have been created, including Telepanel/CentERpanel, Knowledge Networks (now GfK KnowledgePanel), the American Life Panel, the Longitudinal Internet Studies for the Social Sciences panel, and the Understanding America Study panel. Despite the advantage of having a known denominator (sampling frame), probability-based Internet panels often have low recruitment participation rates, and some have argued that there is little practical difference between opting out of a probability sample and opting into a nonprobability (convenience) Internet panel. This article provides an overview of both probability-based and convenience panels, discussing potential benefits and cautions for each method, and summarizing the approaches used to weight panel respondents in order to better represent the underlying population. Challenges of using Internet panel data are discussed, including false answers, careless responses, giving the same answer repeatedly, getting multiple surveys from the same respondent, and panelists being members of multiple panels. More is to be learned about Internet panels generally and about Web-based data collection, as well as how to evaluate data collected using mobile devices and social-media platforms.
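One common family of weighting approaches for panel data is raking (iterative proportional fitting). The sketch below, with invented demographic margins and synthetic data, shows the core loop; production weighting would add trimming, more margins, and convergence checks.

```python
import numpy as np

# Minimal raking (iterative proportional fitting) sketch: adjust panel
# weights so weighted margins match known population margins for two
# hypothetical demographics. Data and targets are illustrative only.
rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, n)         # 0 = male, 1 = female
age = rng.integers(0, 3, n)         # three age bands
w = np.ones(n)                      # start from equal weights

target_sex = {0: 0.49, 1: 0.51}     # assumed population shares
target_age = {0: 0.30, 1: 0.40, 2: 0.30}

for _ in range(20):                 # alternate margin adjustments
    for g, share in target_sex.items():
        mask = sex == g
        w[mask] *= share * w.sum() / w[mask].sum()
    for g, share in target_age.items():
        mask = age == g
        w[mask] *= share * w.sum() / w[mask].sum()

print({g: round(w[sex == g].sum() / w.sum(), 3) for g in target_sex})
```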
Plennevaux, Eric; Moureau, Annick; Arredondo-García, José L; Villar, Luis; Pitisuttithum, Punnee; Tran, Ngoc H; Bonaparte, Matthew; Chansinghakul, Danaya; Coronel, Diana L; L’Azou, Maïna; Ochiai, R Leon; Toh, Myew-Ling; Noriega, Fernando; Bouckenooghe, Alain
2018-01-01
Background We previously reported that vaccination with the tetravalent dengue vaccine (CYD-TDV; Dengvaxia) may bias the diagnosis of dengue based on immunoglobulin M (IgM) and immunoglobulin G (IgG) assessments. Methods We undertook a post hoc pooled analysis of febrile episodes that occurred during the active surveillance phase (the 25 months after the first study injection) of 2 pivotal phase III, placebo-controlled CYD-TDV efficacy studies that involved ≥31,000 children aged 2–16 years across 10 countries in Asia and Latin America. A virologically confirmed dengue (VCD) episode was defined by a positive test for dengue nonstructural protein 1 antigen or dengue polymerase chain reaction. A probable dengue episode was serologically defined as (1) an IgM-positive acute- or convalescent-phase sample, or (2) an IgG-positive acute-phase sample with a ≥4-fold IgG increase between acute- and convalescent-phase samples. Results There were 1,284 VCD episodes (575 and 709 in the CYD-TDV and placebo groups, respectively) and 17,673 other febrile episodes (11,668 and 6,005, respectively). Compared with VCD, the sensitivity and specificity of the probable dengue definition were 93.1% and 77.2%, respectively. Overall positive and negative predictive values were 22.9% and 99.5%, respectively, reflecting the much lower probability of correctly confirming probable dengue in a population that includes a vaccinated cohort. Vaccination-induced bias toward false-positive diagnosis was more pronounced among individuals seronegative at baseline. Conclusions Caution will be required when interpreting IgM and IgG data obtained during routine surveillance in those vaccinated with CYD-TDV. There is an urgent need for new practical, dengue-specific diagnostic algorithms now that CYD-TDV is approved in a number of dengue-endemic countries. Clinical Trials Registration NCT01373281 and NCT01374516. PMID:29300876
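The reported predictive values follow directly from the sensitivity, specificity, and the prevalence of VCD among all febrile episodes; the short calculation below reproduces them (the NPV comes out at about 99.4% from the rounded inputs, against the reported 99.5%).

```python
# Reproducing the reported predictive values from the abstract's
# numbers: sensitivity 93.1%, specificity 77.2%, and a dengue
# prevalence of 1,284 / (1,284 + 17,673) among febrile episodes.
sens, spec = 0.931, 0.772
prev = 1284 / (1284 + 17673)

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ~22.9% and ~99.4%
```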
The Association between Social Media Use and Eating Concerns among U.S. Young Adults
Sidani, Jaime E.; Shensa, Ariel; Hoffman, Beth; Hanmer, Janel; Primack, Brian A.
2016-01-01
Background Although the etiology of eating concerns is multi-factorial, exposure to media messages is considered to be a contributor. While traditional media, such as television and magazines, have been examined extensively in relation to eating concerns risk, the influence of social media has received relatively less attention. Objective To examine the association between social media use and eating concerns in a large, nationally representative sample of young adults. Design Cross-sectional survey. Participants/setting Participants were 1765 young adults ages 19-32 years who were randomly selected from a national probability-based online non-volunteer panel. Outcome measures An eating concerns scale was adapted from two validated measures: the SCOFF Questionnaire and the Eating Disorder Screen for Primary Care (ESP). Social media use (including Facebook, Twitter, Google+, YouTube, LinkedIn, Instagram, Pinterest, Tumblr, Vine, Snapchat, and Reddit) was assessed using both volume (time per day) and frequency (visits per week). Statistical analyses To examine associations between eating concerns and social media use, ordered logistic regression was used, controlling for all covariates. Results Compared to those in the lowest quartile, participants in the highest quartiles for social media volume and frequency had significantly greater odds of having eating concerns (adjusted odds ratio [AOR] = 2.18, 95% CI = 1.50 - 3.17 and AOR = 2.55, 95% CI = 1.72 - 3.78, respectively). There were significant positive overall linear associations between the social media use variables and eating concerns (P < 0.001). Conclusions The results from this study indicate a strong and consistent association between social media use and eating concerns in a nationally representative sample of young adults ages 19 to 32 years. This association was apparent whether social media use was measured as volume or frequency. Further research should assess the temporality of these associations. It would also be useful to examine more closely the influence of specific characteristics of social media use, including content-related and contextual features. PMID:27161027
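The analysis pattern described, an ordinal outcome regressed on exposure quartiles, can be sketched with statsmodels' OrderedModel on synthetic data (the variable names, data, and effect size here are invented, not the study's):

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Sketch of the kind of ordered logistic regression used in the
# abstract: an ordinal eating-concerns score regressed on a
# social-media-use quartile (values 0-3). All data are synthetic.
rng = np.random.default_rng(1)
n = 500
quartile = rng.integers(0, 4, n)
latent = 0.4 * quartile + rng.logistic(size=n)
concerns = np.digitize(latent, [0.5, 1.5, 2.5])  # 4 ordered levels

model = OrderedModel(concerns, quartile.reshape(-1, 1), distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[0]))  # odds ratio per quartile increase
```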
NASA Astrophysics Data System (ADS)
Lijesh K., P.; Kumar, Deepak; Muzakkir S., M.; Hirani, Harish
2018-05-01
Fluid film bearings (FFBs) operating in the hydrodynamic regime can provide moderate load carrying capacity with negligible wear and friction. However, in extreme operating conditions, i.e., at high load and low speed, asperities of the journal and bearing surfaces come into contact with each other, resulting in high wear and friction. During asperity contact, the temperature of the lubricant increases due to frictional heating, reducing the viscosity of the lubricant. This variation in lubricant viscosity lowers the load carrying capacity of the FFB and therefore degrades FFB performance. In the present work it is hypothesized that adding multi-functional multi-wall carbon nanotubes (MWCNT), which have high thermal conductivity and anti-friction properties, as a nano-additive to the base mineral oil can overcome the aforementioned problems. To validate the proposed hypothesis, five different lubricant samples are considered: Sample 1: base oil; Sample 2: base oil + 0.05% MWCNT; Sample 3: base oil + 0.05% MWCNT + 0.5% surfactant; Sample 4: base oil + 0.1% MWCNT + 0.5% surfactant; and Sample 5: base oil + 0.15% MWCNT + 0.5% surfactant. To evaluate the performance of the developed lubricants, experiments were performed on a reduced-scale conformal block-on-disc test setup. The experimental conditions and the dimensions of the block and disc were chosen for a Sommerfeld number of 0.0025, which indicates the mixed lubrication regime. The performance of each lubricant is evaluated by measuring the frictional force and the temperature rise of the lubricant during the experiment.
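The Sommerfeld number used to fix the test conditions is the standard dimensionless group S = (μN/P)(r/c)²; a small helper makes the regime targeting explicit (the input values below are invented, chosen only so that S comes out near 0.0025):

```python
# Sommerfeld number for a journal/conformal bearing contact,
# S = (mu * N / P) * (r / c)**2, a standard dimensionless group for
# identifying the lubrication regime. Input values are illustrative.
def sommerfeld(mu, N, P, r, c):
    """mu: viscosity [Pa.s], N: speed [rev/s], P: load per projected
    area [Pa], r: radius [m], c: radial clearance [m]."""
    return (mu * N / P) * (r / c) ** 2

# Example: low-speed, high-load conditions push S toward the mixed
# regime (the abstract's experiments target S = 0.0025).
print(sommerfeld(mu=0.05, N=2.0, P=4.0e5, r=0.02, c=0.0002))  # 0.0025
```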
Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J
2007-08-01
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
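The classification core of such a pipeline is plain kNN on intensity features; the sketch below fakes the atlas-selected training samples with labeled draws from per-tissue intensity distributions, whereas the paper obtains them by non-rigid registration to a tissue probability atlas.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Minimal sketch of kNN tissue classification with automatically
# selected training samples. Here, "atlas-selected" voxels are
# simulated as intensity draws around mock per-tissue means; a real
# pipeline would obtain them via non-rigid atlas registration.
rng = np.random.default_rng(2)
means = {"CSF": 30.0, "GM": 80.0, "WM": 120.0}  # mock intensities
X_train, y_train = [], []
for label, mu in means.items():
    X_train.append(rng.normal(mu, 10.0, (200, 1)))
    y_train += [label] * 200
X_train = np.vstack(X_train)

knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
print(knn.predict([[75.0], [25.0], [115.0]]))  # GM, CSF, WM
```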
NASA Astrophysics Data System (ADS)
Nagol, J. R.; Chung, C.; Dempewolf, J.; Maurice, S.; Mbungu, W.; Tumbo, S.
2015-12-01
Timely mapping and monitoring of crops like maize, an important food security crop in Tanzania, can facilitate timely response by government and non-government organizations to food shortage or surplus conditions. Small UAVs can play an important role in linking spaceborne remote sensing data and ground-based measurements to improve the calibration and validation of satellite-based estimates of in-season crop metrics. In Tanzania, much of the growing season is often obscured by clouds. UAV data, if collected within a stratified statistical sampling framework, can also be used directly, in lieu of spaceborne data, to infer mid-season yield estimates at regional scales. Here we present an object-based approach to estimate crop metrics such as crop type, area, and height using multi-temporal UAV-based imagery. The methods were tested at three 1 km2 plots in the Kilosa, Njombe, and Same districts in Tanzania. At these sites, both ground-based and UAV-based data were collected on a monthly time-step during the 2015 growing season. A senseFly eBee drone with RGB and NIR-R-G cameras was used to collect data. Crop type classification accuracies above 85% were readily achieved.
Lie, Stein Atle; Tveito, Torill H.; Reme, Silje E.; Eriksen, Hege R.
2017-01-01
Background Disability benefits and sick leave benefits represent huge costs in Western countries. The pathways into these benefits, and the prognostic factors for receiving them, are complex and manifold. We postulate that mental health and IQ, both alone and in combination, influence subsequent employment status, disability benefits, and mortality. Methods A cohort of 918,888 Norwegian men was followed for 16 years from the age of 20 to 55. Risks of health benefits, emigration, and mortality were studied. Indicators of mental health and IQ at military enrolment were used as potential risk factors. Multi-state models were used to analyze transitions between employment, sick leave, time-limited benefits, disability benefits, emigration, and mortality. Results During follow-up, there were a total of 3,908,397 transitions between employment and different health benefits, plus 12,607 deaths. Men with low IQ (below 85), without any mental health problems at military enrolment, had an increased probability of receiving disability benefits before the age of 35 (HRR = 4.06, 95% CI: 3.88–4.26) compared to men with average IQ (85 to 115) and no mental health problems. For men with both low IQ and mental health problems, there was an excessive probability of receiving disability benefits before the age of 35 (HRR = 14.37, 95% CI: 13.59–15.19), as well as an increased probability of time-limited benefits and death before the age of 35, compared to men with average IQ (85 to 115) and no mental health problems. Conclusion Low IQ and mental health problems are strong predictors of future disability benefits and early mortality for young men. PMID:28683088
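As a toy illustration of the multi-state machinery (not the study's actual estimator, which works in continuous time with covariates), one can estimate a discrete-time transition matrix from transition counts and propagate state-occupancy probabilities; the counts below are invented.

```python
import numpy as np

# Toy discrete-time multi-state model: estimate a transition
# probability matrix between labour-market/health states from
# observed yearly transitions. Counts are invented for illustration.
states = ["employed", "sick leave", "time limited", "disability", "dead"]
counts = np.array([
    [9000, 600, 150,  40, 10],   # from employed
    [ 700, 400, 120,  60,  5],   # from sick leave
    [  90, 100, 250,  80,  5],   # from time limited
    [   5,   5,  10, 900, 20],   # from disability
    [   0,   0,   0,   0,  1],   # dead is absorbing
], dtype=float)

P = counts / counts.sum(axis=1, keepdims=True)  # row-normalize
# 10-step occupancy probabilities starting from "employed"
start = np.array([1.0, 0, 0, 0, 0])
occupancy = start @ np.linalg.matrix_power(P, 10)
print(dict(zip(states, np.round(occupancy, 3))))
```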
Estimation of the limit of detection using information theory measures.
Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago
2014-01-31
Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
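The core quantity in this framing is the mutual information between analyte presence and the detector's binary decision, which depends on the prior probability of a blank versus an analyte sample. A minimal sketch of that calculation:

```python
import numpy as np

# Information-theoretic view of an analytical system as a binary
# channel: mutual information between analyte presence (with prior p)
# and the detector's binary decision, given false positive/negative
# rates. A minimal sketch of the idea behind the proposed LOD.
def mutual_information(p, fpr, fnr):
    joint = np.array([
        [(1 - p) * (1 - fpr), (1 - p) * fpr],   # blank: neg, pos
        [p * fnr,             p * (1 - fnr)],   # analyte: neg, pos
    ])
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# The same error rates convey less information at skewed priors:
for prior in (0.5, 0.1, 0.01):
    print(prior, round(mutual_information(prior, fpr=0.05, fnr=0.05), 4))
```

The loop illustrates the paper's central point: two systems with identical error rates are not equally informative once the prior probability of presenting a blank or an analyte sample is taken into account.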
Mechanical failure probability of glasses in Earth orbit
NASA Technical Reports Server (NTRS)
Kinser, Donald L.; Wiedlocher, David E.
1992-01-01
Results of five years of Earth-orbital exposure indicate that radiation effects on the mechanical properties of the glasses examined are less than the probable error of measurement. During the 5-year exposure, seven micrometeorite or space debris impacts occurred on the samples examined. These impacts occurred at locations that were not subjected to effective mechanical testing; hence, only limited information on their influence on mechanical strength was obtained. Combining these results with the micrometeorite and space debris impact frequencies obtained by other experiments permits estimates of the failure probability of glasses exposed to mechanical loading under Earth-orbit conditions. This probabilistic failure prediction is described and illustrated with examples.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
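A detection curve of the kind described can be fitted with ordinary logistic regression; the sketch below simulates surveys in which detection probability rises with target density and search effort (all numbers invented, standing in for the field data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of a detection curve: probability of detecting at least one
# target as a function of target density and search effort, fitted by
# logistic regression on simulated surveys (synthetic stand-in for
# the intertidal field data in the abstract).
rng = np.random.default_rng(3)
n = 2000
density = rng.uniform(0.1, 5.0, n)      # targets per unit area
effort = rng.uniform(1.0, 30.0, n)      # minutes searched
p_true = 1 - np.exp(-0.02 * density * effort)
detected = rng.random(n) < p_true

X = np.column_stack([density, effort])
fit = LogisticRegression().fit(X, detected)
# More effort at the same density raises the detection probability:
print(fit.predict_proba([[0.5, 10.0], [0.5, 30.0]])[:, 1])
```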
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
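Of the scores mentioned, the CRPS has a convenient closed form for a Gaussian predictive distribution (and extends, component by component, to the Gaussian mixtures an MDN outputs). A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

# Closed-form CRPS for a Gaussian predictive distribution, one of the
# scores used to evaluate redshift PDFs. A single-Gaussian sketch of
# the idea; mixture CRPS is analytic as well but longer.
def crps_gaussian(y, mu, sigma):
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

# A sharp PDF centred on the truth scores better (lower CRPS) than a
# broad, offset one:
print(crps_gaussian(y=0.8, mu=0.8, sigma=0.05))
print(crps_gaussian(y=0.8, mu=0.9, sigma=0.20))
```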
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and on Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the new proposed model outperforms ML in cases of small datasets.
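The failure mode that motivates the fuzzy approach is easy to reproduce: on a tiny, nearly separated dataset, (quasi-)unpenalized maximum-likelihood multinomial logistic regression drives the fitted class probabilities to extremes. A sketch with synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Baseline the paper compares against: maximum-likelihood multinomial
# logistic regression. On a tiny, quasi-separated sample the fitted
# class probabilities become extreme, illustrating the small-data
# failure mode that motivates the fuzzy approach. Data are synthetic.
X = np.array([[0.1], [0.2], [0.3], [1.1], [1.2], [2.1], [2.3]])
y = np.array([0, 0, 0, 1, 1, 2, 2])

ml = LogisticRegression(C=1e6, max_iter=10000).fit(X, y)  # ~unpenalized
print(ml.predict_proba([[0.25], [1.15], [2.2]]).round(3))  # near 0/1
```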
Persistence of canine distemper virus in the Greater Yellowstone ecosystem's carnivore community.
Almberg, Emily S; Cross, Paul C; Smith, Douglas W
2010-10-01
Canine distemper virus (CDV) is an acute, highly immunizing pathogen that should require high densities and large populations of hosts for long-term persistence, yet CDV persists among terrestrial carnivores with small, patchily distributed groups. We used CDV in the Greater Yellowstone ecosystem's (GYE) wolves (Canis lupus) and coyotes (Canis latrans) as a case study for exploring how metapopulation structure, host demographics, and multi-host transmission affect the critical community size and spatial scale required for CDV persistence. We illustrate how host spatial connectivity and demographic turnover interact to affect both local epidemic dynamics, such as the length and variation in inter-epidemic periods, and pathogen persistence using stochastic, spatially explicit susceptible-exposed-infectious-recovered simulation models. Given the apparent absence of other known persistence mechanisms (e.g., a carrier or environmental state, densely populated host, chronic infection, or a vector), we suggest that CDV requires either large spatial scales or multi-host transmission for persistence. Current GYE wolf populations are probably too small to support endemic CDV. Coyotes are a plausible reservoir host, but CDV would still require 50,000-100,000 individuals for moderate persistence (> 50% over 10 years), which would equate to an area of 1-3 times the size of the GYE (60,000-200,000 km2). Coyotes, and carnivores in general, are not uniformly distributed; therefore, this is probably a gross underestimate of the spatial scale of CDV persistence. However, the presence of a second competent host species can greatly increase the probability of long-term CDV persistence at much smaller spatial scales. Although no management of CDV is currently recommended for the GYE, wolf managers in the region should expect periodic but unpredictable CDV-related population declines as often as every 2-5 years. Awareness and monitoring of such outbreaks will allow corresponding adjustments in management activities such as regulated public harvest, creating a smooth transition to state wolf management and conservation after > 30 years of being protected by the Endangered Species Act.
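The fade-out argument can be illustrated with a minimal discrete-time stochastic SEIR model: in a small closed host population the infection chain almost always breaks, so persistence requires host turnover, large populations, or coupling between species. The parameters below are illustrative, not fitted to GYE carnivores.

```python
import numpy as np

# Minimal discrete-time stochastic SEIR sketch of pathogen fade-out.
# With no birth of new susceptibles, the epidemic burns through the
# population and the infection chain breaks, echoing why small host
# populations cannot sustain an acute, immunizing pathogen like CDV.
rng = np.random.default_rng(4)

def seir_fadeout(N=500, beta=0.8, sigma=0.2, gamma=0.1, steps=1000):
    S, E, I, R = N - 1, 0, 1, 0
    for _ in range(steps):
        new_E = rng.binomial(S, 1 - np.exp(-beta * I / N))
        new_I = rng.binomial(E, sigma)
        new_R = rng.binomial(I, gamma)
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        if E + I == 0:
            return True   # pathogen went extinct
    return False

print(np.mean([seir_fadeout() for _ in range(200)]))  # extinction freq.
```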
NASA Technical Reports Server (NTRS)
Morris, Carl N.
1987-01-01
Motivated by the LANDSAT problem of estimating the probability of crop or geological types based on multi-channel satellite imagery data, Morris and Kostal (1983), Hill, Hinkley, Kostal, and Morris (1984), and Morris, Hinkley, and Johnston (1985) developed an empirical Bayes approach to this problem. Here, researchers return to those developments, making certain improvements and extensions, but restricting attention to the binary case of only two attributes.
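In the binary case, the empirical Bayes idea amounts to shrinking per-unit proportions toward a beta prior estimated from the whole ensemble. The sketch below uses crude method-of-moments matching (which ignores binomial sampling noise) and illustrates the flavor of the approach, not the cited papers' exact estimator.

```python
import numpy as np

# Empirical-Bayes-flavoured sketch for binary attributes: per-block
# success counts are shrunk toward a beta prior whose parameters are
# estimated from all blocks. Illustrative only; the moment matching
# here deliberately ignores binomial sampling noise.
rng = np.random.default_rng(5)
true_p = rng.beta(4, 6, 50)                 # latent class-probabilities
n = 20                                      # observations per block
k = rng.binomial(n, true_p)                 # observed successes

phat = k / n
m, v = phat.mean(), phat.var()
common = m * (1 - m) / v - 1                # crude moment-match of a+b
a, b = m * common, (1 - m) * common

posterior_mean = (k + a) / (n + a + b)      # shrunken estimates
print(np.abs(phat - true_p).mean(),         # raw error vs ...
      np.abs(posterior_mean - true_p).mean())  # ... shrunken error
```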
Goeree, Ron; Blackhouse, Gord; Bowen, James M; O'Reilly, Daria; Sutherland, Simone; Hopkins, Robert; Chow, Benjamin; Freeman, Michael; Provost, Yves; Dennie, Carole; Cohen, Eric; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan; Paul, Narinder; Parker, John D
2013-10-01
Conventional coronary angiography (CCA) is the standard diagnostic test for coronary artery disease (CAD), but multi-detector computed tomography coronary angiography (CTCA) is a non-invasive alternative. A multi-center coverage-with-evidence-development study was undertaken and combined with an economic model to estimate the cost-effectiveness of CTCA followed by CCA vs CCA alone. Alternative assumptions were tested in patient scenario and sensitivity analyses. CCA was found to dominate CTCA; however, CTCA was relatively more cost-effective in females, with advancing age, in patients with lower pre-test probabilities of CAD, the higher the sensitivity of CTCA, and the lower the probability of undergoing a confirmatory CCA following a positive CTCA. Results were very sensitive to alternative patient populations and modeling assumptions. Careful consideration of patient characteristics, procedures to improve the diagnostic yield of CTCA, and selective use of CCA following CTCA will determine whether CTCA is cost-effective or even dominates CCA.
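The dependence on pre-test probability is just Bayes' rule on the diagnostic odds; the helper below (with invented sensitivity and specificity values for CTCA) shows how the post-test probability after a positive scan varies across patient groups:

```python
# Post-test probability after a positive CTCA, from the pre-test
# probability and assumed test characteristics. The sensitivity and
# specificity here are illustrative, not the study's estimates.
def post_test_positive(pretest, sens, spec):
    lr_pos = sens / (1 - spec)              # positive likelihood ratio
    odds = pretest / (1 - pretest) * lr_pos
    return odds / (1 + odds)

for pretest in (0.10, 0.30, 0.60):
    print(pretest, round(post_test_positive(pretest, sens=0.95, spec=0.83), 3))
```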
2012-01-01
Background Among trauma patients, relatively high prevalence rates of posttraumatic stress disorder (PTSD) have been found. To identify opportunities for prevention and early treatment, the predictors and course of PTSD need to be investigated. Long-term follow-up studies of injury patients may help gain more insight into the course of PTSD and the subgroups at risk for PTSD. The aim of our long-term prospective cohort study was to assess the prevalence rate and predictors, including pre-hospital trauma care (assistance of physician-staffed Emergency Medical Services (EMS) at the scene of the accident), of probable PTSD in a sample of major trauma patients at one and two years after injury. The second aim was to assess the long-term course of probable PTSD following injury. Methods A prospective cohort study was conducted of 332 major trauma patients with an Injury Severity Score (ISS) of 16 or higher. We used data from the hospital trauma registry and self-assessment surveys that included the Impact of Event Scale (IES) to measure probable PTSD symptoms. An IES score of 35 or higher was used as an indication of the presence of probable PTSD. Results One year after injury, measurements of 226 major trauma patients were obtained (response rate 68%). Of these patients, 23% had an IES score of 35 or higher, indicating probable PTSD. Two years after trauma, the prevalence rate of probable PTSD was 20%. Female gender and co-morbid disease were strong predictors of probable PTSD one year following injury, whereas minor to moderate head injury and injury of the extremities (AIS less than 3) were strong predictors of this disorder at two-year follow-up. Of the patients with probable PTSD at one-year follow-up, 79% had persistent PTSD symptoms a year later. Conclusions Up to two years after injury, probable PTSD is highly prevalent in a population of patients with major trauma. The majority of patients suffered from prolonged effects of PTSD, underlining the importance of prevention, early detection, and treatment of injury-related PTSD. PMID:23270522
Sensitivity images for multi-view ultrasonic array inspection
NASA Astrophysics Data System (ADS)
Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony J.; Zhang, Jie; Wilcox, Paul D.; Kashubin, Artem; Cawley, Peter
2018-04-01
The multi-view total focusing method (TFM) is an imaging technique for ultrasonic full matrix array data that typically exploits ray paths with zero, one or two internal reflections in the inspected object and for all combinations of longitudinal and transverse modes. The fusion of this vast quantity of views is expected to increase the reliability of ultrasonic inspection; however, it is not trivial to determine which views and which areas are the most suited for the detection of a given type and orientation of defect. This work introduces sensitivity images that give the expected response of a defect in any part of the inspected object and for any view. These images are based on a ray-based analytical forward model. They can be used to determine which views and which areas lead to the highest probability of detection of the defect. They can also be used for quantitatively analyzing the effects of the parameters of the inspection (probe angle and position, for example) on the overall probability of detection. Finally, they can be used to rescale TFM images so that the different views have comparable amplitudes. This methodology is applied to experimental data and discussed.
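For orientation, the sketch below implements the simplest member of the multi-view family, the direct contact view, as delay-and-sum over synthetic full-matrix-capture data from a single point scatterer; the array geometry, wave speed, and mock echo shape are all invented.

```python
import numpy as np

# Bare-bones total focusing method (TFM), direct view only:
# delay-and-sum of full-matrix-capture (FMC) data over every
# transmit-receive pair at each image point. Synthetic data.
c, fs = 6300.0, 50e6                     # wave speed [m/s], sampling [Hz]
elems = np.stack([np.linspace(-8e-3, 8e-3, 16), np.zeros(16)], axis=1)
scatterer = np.array([2e-3, 12e-3])      # true reflector position

nt = 2000
t = np.arange(nt) / fs
fmc = np.zeros((16, 16, nt))
for i, ei in enumerate(elems):           # build mock FMC echoes
    for j, ej in enumerate(elems):
        tof = (np.linalg.norm(scatterer - ei)
               + np.linalg.norm(scatterer - ej)) / c
        fmc[i, j] = np.exp(-((t - tof) * fs / 10) ** 2)

xs, zs = np.linspace(-8e-3, 8e-3, 81), np.linspace(5e-3, 20e-3, 76)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(elems - np.array([x, z]), axis=1) / c
        idx = np.clip(((d[:, None] + d[None, :]) * fs).astype(int), 0, nt - 1)
        image[iz, ix] = np.abs(fmc[np.arange(16)[:, None],
                                   np.arange(16)[None, :], idx].sum())

peak = np.unravel_index(image.argmax(), image.shape)
print(xs[peak[1]], zs[peak[0]])          # should be near the scatterer
```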